record_id,title,abstract,year,label_included,duplicate_record_id 1,Diagnostic Maintenance: A Technique Using a Computer,"A possible technique and the attending software needs for advanced computer controlled checkout equipment, based on transfer properties, are presented in this paper. By employing the transfer function, single measurements can be used to designate the status of relatively complex entities and equipment descriptive matrices can be readily formed. In many cases, such an analysis allows for deeper piece-part fault localization than would be possible from test terminals alone. Matrix manipulations are used to perform tests, localize faults and predict performance. A hierarchical listing of matrices is used to give the computer a description of the unit under test. In addition, the problem of providing a practical method of solving problems under access points restrictions has also been taken into account. Based on these restrictions and on the proposed mathematical tools, a program flow chart is shown to outline these functions. Table-updating (learning) features are incorporated into the program. Essentially, the program consists of a man-written executive program and subroutines which are employed by a computer to derive the actual checkout program used by the checkout computer. This goal is accomplished by applying, under computer control, specific parameters to general routines. The stimuli selection is accomplished by the computer by cross-referencing given restrictions against circuit type and functional data. In formulating the checkout technique, both man-performed and machine-performed checkout processes were examined in order to combine the desirable features of each into an optimum technique.",1963,0, 2,A Very High Speed Electro-Optical-Mechanical Phototypesetting Machine,"Advanced-performance automated phototypesetting systems which have been introduced to the market recently fall into two general classifications: the electro-optical-mechanical and the all-electric configurations. Each category has its own advantages in performance, price, and application. This paper describes an advanced electro-optical-mechanical machine, now in field test, which has many important advantages over existing machines: speed, exceedingly high font capacity, very high production/cost ratio, and excellent quality of type production, reliability, and versatility. These advantages are gained by designing a built-in digital computer to perform many of the mechanical and optical functions ordinarily associated with existing machines. During the exposure of a line of type, no optical components are in motion and only one mechanical assembly is in relatively slow continuous rotary motion. (See Fig. 3 of the text for the system configuration). The extremely large font capacity makes the machine ideal for typesetting oriental languages and complex scientific publications.",1968,0, 3,Recognition of Handprinted Numerals by Two-Stage Feature Extraction,"An optical character recognition system for handprinted numerals of noisy and low-resolution measurement is proposed. The system consists of a two-stage feature extraction process. In the first stage a set of primary features insensitive to the quality and format of a black-white bit pattern are extracted. In the second stage, a set of properties capable of discriminating the character classes is derived from primary features. The system is simple and reliable in that only three kinds of primary features need to be detected. 
The recognition is based on the decision tree which tests the logic statements of secondary features.",1970,0, 4,Computer Diagnosis Using the Blocking Gate Approach,"In previous papers [3]–[5] the authors considered the application of graph theory to represent and analyze a computer system. From such analysis of the graph (thus the system), we have shown that faults can be detected and located by means of strategically placed test points within the system.",1971,0, 5,Simulation modeling for air quality control,"Simulation modeling will have a major role in air quality programs for forecasting, developing both temporary and long-range controls and planning regional land use. Such air quality models use pollutant emissions and meteorological measurements to simulate concentrations of atmospheric pollutants. Box- and Gaussian-type models have serious shortcomings, but the new generation models (such as NEXUS) are approaching an adequate resolution and accuracy for routine applications. The NEXUS model simulated pollutant concentrations within 20 percent of the observed values.",1971,0, 6,Associative memories and processors: An overview and selected bibliography,"A survey of associative processing techniques is presented, together with a guide to the published literature in this field. Some familiarity with the basic concepts of associative processing is assumed. The references have been divided into four groups dealing with architectural concepts, hardware implementation, software considerations, and application areas. The discussion of architectural concepts consists of a classification of associative devices into four major categories (fully parallel, bit-serial, word-serial, and block-oriented) and an enumeration of techniques for dealing with multiple responses and hardware faults. With respect to hardware implementation, considerations are given to the basic operations implemented, hardware elements used (e.g., cryoelectrics, magnetic elements, and semiconductors), and physical characteristics such as speed, size, and cost. The discussion of software aspects of associative devices deals with synthesis of algorithms, programming problems, and software simulation. The application areas discussed include solution of some mathematical systems, radar signal processing, information storage and retrieval, and performance of certain control functions in computer systems.",1973,0, 7,Computer integration in the Kieler Multisonde,"The Multisonde is capable of measuring six parameters simultaneously, which shall be extended to sixteen parameters. Possible higher data rates can only be managed by using a digital magnetic tape unit, and the easiest way to couple it to the Multisonde is by a minicomputer. This also enables an immediate check of the tape after having registered one profile, and so prevents a possible interface defect, leading to an incorrect recording, from going undetected during the entire expedition. This paper discusses a real-time computer integrated into the Multisonde for faster data processing.",1974,0, 8,Fault-Tolerant Computing: An Introduction and a Perspective,"FAULT-TOLERANT computing has been defined as ""the ability to execute specified algorithms correctly regardless of hardware failures, total system flaws, or program fallacies"" [1]. To the extent that a system falls short of meeting the requirements of this definition, it can be labeled a partially fault-tolerant system [2]. 
Thus the definition of fault-tolerant computing provides a standard against which to measure all systems having a degree of fault tolerance. In particular, one can classify systems according to: 1), the amount of manual intervention required in performing three basic functions, and 2) the class of faults covered by three basic functions involved in fault tolerance: system validation, fault diagnosis, and fault masking or recovery. The word ""fault"" here is used to inclusively describe ""failures, flaws, and fallacies"" in the original definition. The first function is involved in the design and production of the system hardware and software, while the last two functions are embodied in the system itself. Likewise, the first function is directed to handling faults arising from design and production errors, whereas the last two functions are aimed at faults due to random hardware failures.",1975,0, 9,Methods for the Determination of Specific Organic Pollutants in Water and Waste Water,"This paper reviews methods for sample collection and pretreatment and for isolation and determination of specific organic compounds with emphasis on methods for determination. Methods for organochlorine, organophosphorus, and organonitrogen pesticides and phenoxy acid herbicides are presented. Methods for the determination of other chlorinated organics such as polychlorinated biphenyls, organic solvents, and other selected organic compounds are also presented. Gas chromatography is the most widely applicable and popular method for detecting and measuring specific organic compounds in water, waste water, and other environmental media. When coupled with selective detectors, gas chromatography becomes the most sensitive and selective method for qualitative and quantitative determination of organic compounds that is available. The recent development and application of the computer-controlled gas chromatograph-mass spectrometer adds a much needed extra dimension to organic analysis, that of unequivocal identification of compounds that can be only tentatively identified by other means. Gas chromatographic methods are selectively applied either directly on the raw sample or after concentration by adsorption or solvent extraction and evaporation. A variety of other determinative methods such as infrared, ultraviolet, and fluorescent spectroscopy; liquid chromatography; and thin-layer chromatography have application in the broad spectrum of organic analysis. Selected examples of the application of these methods are presented.",1975,0, 10,Concurrent software fault detection,"A graph theoretic model for software systems is presented which permits a system to be characterized by its set of allowable execution sequences. It is shown how a system can be structured so that every execution sequence affected by a control fault is obviously in error, i.e., not in the allowable set defined by the system model. Faults are detected by monitoring the execution sequence of every transaction processed by the system and comparing its execution sequence to the set of allowable sequences. Algorithms are presented both for structuring a system so that all faults can be detected and for fault detection concurrent with system operation. 
Simulation results are presented which support the theoretical development of this paper.",1975,0, 11,Safeguard data-processing system: Maintenance and diagnostic subsystem,"The Safeguard Maintenance and Diagnostic Subsystem (M & DSS) is a unique, independent, hardware group within the data-processing system through which the nonreal-time functions of fault detection and isolation are performed. In this paper, the M & DSS hardware and fault detection software are described and system performance is reviewed.",1975,0, 12,Microprocessor Controlled Data Acquisition Systems,"The low cost and flexibility of microcomputers have created a burst of activity in intelligent data acquisition systems. Radio frequency surveillance, electronic warfare systems, and navigational control are a few of the more visible applications. The power of well-designed microcomputers, when coupled with present day electronic detection and control equipment, is creating a state-of-the-art technological breakthrough in system designs. Noise control, pollution analysis, environmental quality in general, radar tracking and control, radio frequency analysis and interference detection are all candidates for distributed data acquisition systems. This paper describes the use of microprocessors to control a radio frequency detection system.",1975,0, 13,Some preliminary experiments in the recognition of connected digits,"This paper describes an implementation of a speaker independent system which can recognize connected digits. The overall recognition system consists of two separate but interrelated parts. The function of the first part of the system is to segment the digit string into the individual digits which comprise the string; the second part of the system then recognizes the individual digits based on the results of the segmentation. The segmentation of the digits is based on a voiced-unvoiced analysis of the digit string, as well as information about the location and amplitude of minima in the energy contour of the utterance. The digit recognition strategy is similar to the algorithm used by Sambur and Rabiner [1] for isolated digits, but with several important modifications due to the impreciseness with which the exact digit boundaries can be located. To evaluate the accuracy of the system in segmenting and recognizing digit strings a series of experiments was conducted. Using high-quality recordings from a soundproof booth the segmentation accuracy was found to be about 99 percent, and the recognition accuracy was about 91 percent across ten speakers (five male, five female). With recordings made in a noisy computer room the segmentation accuracy remained close to 99 percent, and the recognition accuracy was about 87 percent across another group of ten speakers (five male, five female).",1976,0, 14,Fault-Tolerant Computing: An Introduction,"THE field of fault-tolerant computing is concerned with the analysis, design, verification, and diagnosis of computing systems that are subject to faults. A ""computing system,"" in this general context, can be a hardware system, a software system, or a computer which includes both hardware and internal software. A ""fault"" can reside in either hardware or software and can occur in the process of designing and implementing the system, or in the process of using the system once it is implemented. 
Major areas of technical interest include: 1) the design and analysis of computers which are able to execute specified algorithms correctly (according to specified correctness criteria) in the presence of hardware and/or software faults; 2) the testing and verification of the initial correctness of hardware and software systems prior to utilization; 3) the design and implementation of on-line fault detection, fault location, and system reconfiguration procedures that can be used to recover from hardware and software faults, to perform system maintenance, and to maintain security; and 4) the development of models, measures, and techniques for evaluating the reliability, availability and, in general, the effectiveness of fault-tolerant computing systems.",1976,0, 15,Redundancy management of shuttle flight control sensors,"Aerospace programs such as the Space Shuttle increasingly demand that avionics be fault tolerant. This paper describes the concept used for sustaining two system failures in the Shuttle Flight Control System (FCS) using redundancy management of sensors as the specific area of emphasis. Moreover, an analytical approach is described which is capable of verifying performance on a statistical basis, addressing performance requirements concerned with 1) avoidance of nuisance failures, 2) detectability of faults, and 3) maximum vehicle transients incurred as a result of a failure. This approach will be the basis for preflight verification of Space Shuttle FCS redundancy management implemented in software.",1976,0, 16,Mainframe Implementation With Off-The-Shelf LSI Modules,"Bit-sliced microprocessor parallel design for Sperry Univac 1108 improves performance, reduces build cost, and detects most faults.",1978,0, 17,Evaluation of Maintenance Software in Real-Time Systems,"Statistical theory can be applied to data collected by physical fault insertion of small, random fault samples to estimate the critical parameters of software responsible for fault recovery, detection, and resolution in real-time systems. Estimates of the critical parameters can be used to determine if system reliability objectives are satisfied. This method of software evaluation, modified because of the impracticality of selecting and physically inserting truly random fault samples, was verified against a diagnostic/Trouble Locating Program (TLP) whose critical parameters were known and was then used to evaluate and optimize a new diagnostic/TLP program of unknown quality. Empirical evidence indicates that there is a close correlation between system performance against real faults and against the selected subset of real faults used for sampling.",1978,0, 18,Controlling the Software Life Cycle—The Project Management Task,"This paper describes project management methods used for controlling the life cycle of large-scale software systems deployed in multiple installations over a wide geographic area. A set of management milestones is offered along with requirements and techniques for establishing and maintaining control of the project. In particular, a quantitative measure of software quality is proposed based upon functional value, availability, and maintenance costs. 
Conclusions drawn are based on the study of several cases and are generally applicable to both commercial and military systems.",1978,0, 19,An Optimal Approach to Fault Tolerant Software Systems Design,"A systematic method of providing software system fault recovery with maximal fault coverage subject to resource constraints of overall recovery cost and additional fault rate is presented. This method is based on a model for software systems which provides a measure of the fault coverage properties of the system in the presence of computer hardware faults. Techniques for system parameter measurements are given. An optimization problem results which is a doubly-constrained 0,1 Knapsack problem. Quantitative results are presented demonstrating the effectiveness of the approach.",1978,0, 20,A survey and methodology of reconfigurable multi-module systems,"
First Page of the Article
",1978,0, 21,Algorithms for Monitoring Changes in Quality of Communication Links,"The automation of network service observing through message classification software permits a great increase in the rate at which observations can be made, thus allowing the monitoring of network links as well as switching nodes. This paper develops and analyzes a quality control algorithm which is suitable for monitoring the performance of direct (end-office to endoffice) links via high-volume service observing data. This algorithm processes a sample of successful as well as unsuccessful call attempts sequentially and in real time to quickly and effectively detect deteriorations in the performance of a link. Discussion of the performance qualities and the robustness of this algorithm is presented. The discussion is supported by quantitative results.",1979,0, 22,Self-Testing Computers,"Built-in-test techniques exploit hardware redundancy to provide continuous on-line monitoring of computer performance. Hardware's declining cost makes these techniques attractive, especially for modular computers.",1979,0, 23,Measuring Improvements in Program Clarity,"The sharply rising cost incurred during the production of quality software has brought with it the need for the development of new techniques of software measurement. In particular, the ability to objectively assess the clarity of a program is essential in order to rationally develop useful engineering guidelines for efficient software production and language development.",1979,0, 24,Software Failure Modes and Effects Analysis,"This concept paper discusses the possible use of failure modes and effects analysis (FMEA) as a means to produce more reliable software. FMEA is a fault avoidance technique whose objective is to identify hazards in requirements that have the potential to either endanger mission success or significantly impact life-cycle costs. FMEA techniques can be profitably applied during the analysis stage to identify potential hazards in requirements and design. As hazards are identified, software defenses can be developed using fault tolerant or self-checking techniques to reduce the probability of their occurrence once the program is implemented. Critical design features can also be demonstrated a priori analytically using proof of correctness techniques prior to their implementation if warranted by cost and criticality.",1979,0, 25,Technical Control of the Digital Transmission Facilities of the Defense Communications System,"As an aid in evaluating technical control techniques, an emulation facility that automatically performs the status monitoring, performance assessment, and fault isolation technical control functions as they apply to a future, predominantly digital Defense Communications System (DCS) has been developed. This emulation facility is a multicomputer system which automatically monitors and isolates faults for digital transmission equipments (power generators, RF distributions, digital radios, second level multiplexers, first level multiplexers, submultiplexers, and key generator units). The status monitoring and performance assessment functions are performed by two processors, the adaptive channel estimator (ACE) and an LSI 11/03, the composite being referred to as the CPMAS-D unit. When the software residing in the CPMAS-D unit detects a monitor point transition (alarm to/from nonalarm), it transmits the monitor point information to the CPMAS emulator, a PDP 11/60 minicomputer. 
These messages, called exception reports, enable the CPMAS emulator to perform its prime mission: fault isolation. A unique fault isolation algorithm has been developed for test with this emulation facility. The algorithm consists of three discrete steps. First, the equipment alarms are mapped into their effect upon each transmission path (link, supergroup, group, or channel). Second, the stations with the faulty equipment are located by deleting the impact of sympathetic alarms. Third, the faulty equipment is identified using the equipment alarm status. Testing of the fault isolation algorithm is enhanced by an emulated network consisting of up to 16 stations, 2048 equipments, and two nodal control areas. Monitor point simulators and T1-4000 multiplexers, which provide simulated and real inputs to two CPMAS-D units, are also part of the emulation facility. Technical control terminals are provided to evaluate man-machine operation in an automated technical control environment. An adaptive channel estimator (ACE) field test was conducted at the Rome Air Development Center (RADC) test facilities so that the ACE development unit could be evaluated as a means of assessing performance of high-speed digital radio transmission systems. Also, the fault isolation algorithm was tested using the previously described emulation facility. The test data and subsequent technique evaluations are discussed in this paper.",1980,0, 26,A highly reliable mask inspection system,"An automatic system for inspecting micro mask defects with 1-µm minimum detectable size has been developed. An outline of the system is as follows: The pattern image obtained with a pickup tube is converted into binary video signals which are transferred into two parallel logic circuits for detecting pattern defects. One is based on the pattern-analyzing method, for which one of four algorithms for detecting micro defects is presented in detail. The other is based on the design-pattern data-comparing method, where the data compression scheme and a new idea for avoiding mask alignment errors are adopted. A software system outline, very important in assisting the hardware functions in this system, is also presented. The results of experiments for determining system performance indicate that the system can detect 1-µm diameter defects or loss patterns with high probability by complementary use of the two methods. A 4-in by 4-in mask can be inspected within 100 min.",1980,0, 27,Some Stability Measures for Software Maintenance,"Software maintenance is the dominant factor contributing to the high cost of software. In this paper, the software maintenance process and the important software quality attributes that affect the maintenance effort are discussed. One of the most important quality attributes of software maintainability is the stability of a program, which indicates the resistance to the potential ripple effect that the program would have when it is modified. Measures for estimating the stability of a program and the modules of which the program is composed are presented, and an algorithm for computing these stability measures is given. An algorithm for normalizing these measures is also given. Applications of these measures during the maintenance phase are discussed along with an example. An indirect validation of these stability measures is also given. 
Future research efforts involving application of these measures during the design phase, program restructuring based on these measures, and the development of an overall maintainability measure are also discussed.",1980,0, 28,Experience from Quality Assurance in Nuclear Power Plant Protection System Software Validation,"Validation of a digital computer program to be used in a nuclear power plant protection system must meet quality assurance requirements. Digital systems have not traditionally been used on nuclear reactor protection systems. Licensing of digital system software requires providing assurance that the software performs its intended function. To provide added assurance, the Babcock and Wilcox Company performed software validation on the digital program intended for use on a protection system. Software validation of the Reactor Protection System-II digital program presented a multi-faceted challenge. Quality assurance requirements were imposed on the project. Certain validation ground rules were specified. No known methods existed for proving program correctness for nontrivial software. No precedence had been set to estimate the quality or quantity of testing required as a method of validation. Project schedule constraints were imposed. The need for more documentation than normally furnished was recognized, but how much and what kind was not clear. This paper relates how this challenge was met through a discussion of how the project was performed and the lessons learned through those experiences. A test method was devised within validation ground rules and project schedule constraints to validate that software performed the specified functions. Orderly methods of testing and evaluating were implemented and documented in compliance with a plan to provide auditable, traceable evidence of the validation effort and the digital component program performance.",1980,0, 29,Capture-Recapture Sampling for Estimating Software Error Content,Mills capture-recapture sampling method allows the estimation of the number of errors in a program by randomly inserting known errors and then testing the program for both inserted and indigenous errors. This correspondence shows how correct confidence limits and maximum likelihood estimates can be obtained from the test results. Both fixed sample size testing and sequential testing are considered.,1981,0, 30,Application of a Methodology for the Development and Validation of Reliable Process Control Software,"This paper discusses the necessity of a good methodology for the development of reliable software, especialy with respect to the final software validation and testing activities. A formal specification development and validation methodology is proposed. This methodology has been applied to the development and validation of a pilot software, incorporating typical features of critical software for nuclear power plant safety protection. The main features of the approach indude the use of a formal specification language and the independent development of two sets of specifications. Analyses on the specifications consists of three-parts: validation against the functional requirements consistency and integrity of the specifications, and dual specification comparison based on a high-level symbolic execution technique. Dual design, implementation, and testing are performed. Automated tools to facilitate the validation and testing activities are developed to support the methodology. 
These include the symbolic executor and test data generator/dual program monitor system. The experiences of applying the methodology to the pilot software are discussed, and the impact on the quality of the software is assessed.",1981,0, 31,Application of Kalman Filtering in Computer Relaying,"During the first cycle following a power system fault, a high speed computer relay has to make a decision usually based on the 60 Hz information, which is badly corrupted by noise. The noise in this case is the nonfundamental frequency components in the transient current or voltage, as the case may be. For research and development purposes of computer relaying techniques, the precise nature of the noise signal is required. The autocorrelation function and variance of the noise signal was obtained based on the frequency of occurrence of the different types of faults, and the probability distribution of fault location. A new technique for modelling the signal and the measurements is developed based on Kalman Filtering theory for the optimal estimation of the 60 Hz information. The results indicate that the technique converges to the true 60 Hz quantities faster than other algorithms that have been used. The new technique also has the lowest computer burden among recently published algorithms and appears to be within the state of the art of current microcomputer technology.",1981,0, 32,The Energy Saver/Doubler Quench Protection Monitor System,"The microprocessor based system to detect quenches in the superconducting magnets is described. Tests conducted over the past two years using a string of twenty superconducting magnets have yielded results having major impact on the design of this system. A network of twenty-four 16-bit microprocessors will monitor the integrated voltage across magnet protection units every 60 Hz line period using isolated voltage to frequency converters. These measurements are compared with the inductive voltage expected. Under transient conditions, the distributed L-C nature of the magnets delays signal propagation as in a transmission line. To achieve the required sensitivity of resistive voltage detection, the inductive voltage for each magnet cell is calculated using two of the six measured dI/dt signals. Prevention of faults to the magnet system by the monitoring connections, minimization of cabling, and matched transient response of all monitoring channels have been considered together in design of the hardware and software.",1981,0, 33,Fault-tolerant computer systems,"The paper reviews the methods by which reliable processing and control can be achieved using fault-tolerant digital computers. The motivation for employing such systems is discussed, together with an indication of current and potential areas of application. The features of fault-tolerant computers are described in general terms, together with a system design procedure specific to the development of a reliable computer. The adherence to a well structured design methodology is particularly important in a fault-tolerant computer to ensure an initially fault-free system. In order to follow this design procedure an intimate knowledge of the following subject areas is required: fault classification, redundancy techniques and their relative merits and reliability modelling and analysis. These topics are covered in the paper, with particular emphasis being placed upon the implementation of hardware, software, data and time-redundancy techniques. 
Examples of fault-tolerant computers proposed and produced in the last decade are described. The availability of large scale integrated circuits (LSI) and in particular microprocessors will have a profound effect on the development and application of fault-tolerant computers. The implications of using LSI in this area are therefore discussed including a brief description of two fundamentally different approaches to the realisation of a fault tolerant microcomputer.",1981,0, 34,Stochastic Reliability-Growth: A Model for Fault-Removal in Computer-Programs and Hardware-Designs,"An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault-fixes having a greater effect than later ones (the faults which make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and the DFR property between fault fixes (assurance about programs increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of total execution time to achieve a target reliability, and total number of fault fixes to target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.",1981,0, 35,"Microcomputers: Microprocessors and the M.D.: A new breed of smart medical equipment can diagnose, monitor, analyze, and rehabilitate","Microprocessor-based electrocardiography instruments are helping combat the No.1 killer of people in the United States: heart disease. The new ECG instruments not only capture and display real-time and recorded measurements of heart rates, waveforms, and rhythms, but they also process and transfer the data to a remote site for analysis by a cardiologist or a computer. Analysis can point to problems in the electrical conduction networks, heart muscles, valves or blood flow. The latest ECG equipment uses the microprocessor's software-derived intelligence to diagnose some heart defects or to record and warn of abnormal heart rhythms. Microprocessors are also turning up in other medical applications that are making diagnoses and even medication less prone to error. Among these applications are the following: Bedside monitoring of patient circulatory, respiratory, and central nervous systems, along with metabolic and acid/base control systems. Monitoring of heart signals in ambulatory patients through the recording of data on a tape cassette worn by patients. Feedback control of the breathing of bed patients and the automatic infusion of intravenous fluids. Automatic chemistry analyzers for laboratories that allow blood counts to be made by relatively unskilled personnel.",1981,0, 36,An autocorrelation pitch detector with error correction,"A particularly demanding application of pitch detection is in speech training of the deaf. The prevalance of defects such as breathy, creaky or hoarse voice means that pitch detectors are prone to errors. To overcome this an error correction algorithm has been developed for the Dubnowski, Schafer and Rabiner (D.S.R.) pitch detector. The resulting pitch detector overcomes these problems. 
It is a real-time procedure which can be implemented in hardware or software. Its performance is comparable with the nonreal-time SIFT algorithm and may thus have wider applications.",1982,0, 37,A fault-tolerant/fail-safe command and control system for automated vehicles,"Redundancy and fault-tolerant computer technology are being applied to the development of a command and control system for automated vehicles. An ultrareliable command and control system is described which meets the availability and safety requirements for an automated transit system. The technology presented is applicable to a wide variety of computer-based controls where safety is involved or where interruption of the control process cannot be tolerated. High-performance computer-based controls are being developed by OTIS-TTD and Del Rey Systems to control the operation of automated transit systems. The command and control system will allow economical, flexible, personalized service while operating a large number of closely spaced (short headway) vehicles. The requirements for flexible service and short headway operation preclude the use of traditional failsafe design practices and components. To achieve the required performance, reliability, and safety, redundancy and fault-tolerant computer techniques are used. This paper describes how the reliability requirements for command and control systems are achieved through the application of fault tolerant computing. Three alternative computer architectures are described. Reliability analyses have been performed for each candidate architecture, and the results are presented. Based on the reliability analyses, a triple redundant computer is selected. Automatic failure detection and recovery is accomplished by software, thus allowing off-the-shelf hardware to be used.",1982,0, 38,Databases and Units of Measure,"A serious impediment to the integrated use of databases across international boundaries, scientific disciplines and application areas is the use of different units of measure, e.g., miles or kilometers for distance, dollars or rupees for currency, and Btu or kWh for energy. Units of measure are not specified in the data definition language but associated by convention with values in the database. Inconsistent usage of values with different units cannot be checked atuomatically. Retrieval, storage, and manipulation of data values in units other than those associated with the values requires that the user perform the appropriate conversions. This process is inconvenient and error prone. These problems are illustrated by an international database example.",1982,0, 39,A Remote Data Acquisition and Display System,"A remote data acquisition and display system is described. The system deploys distributed intelligent microprocessors using a structured, high-level language with a simple operating system. The hardware and microprocessor software are modular; thus, the overall system is flexible and can be easily tailored to a wide variety of applications. The remote units accept a variety of analog and digital signals, perform automatic test and calibration of analog inputs, and perform conversion to customary units of measure. Information is digitally multiplexed from the input units to display units which format the data for alphanumeric or graphic displays or for conventional analog meters or recorders. Operator interactions are achieved by means of functional keys on the display or via a maintenance terminal. 
Software tests and on-line diagnostics are used to detect faults and aid in maintenance and repair.",1982,0, 40,System Development and Technology Aspects of the IBM 3081 Processor Complex,"The IBM 3081 Processor Complex consists of a 3081 Processor Unit and supporting units for processor control, power, and cooling. The Processor Unit, completely implemented in LSI technology, has a dyadic organization of two central processors, each with a 26-ns machine cycle time, and executes System/370 instructions at approximately twice the rate of the IBM 3033. This paper presents an overview of the advances in technology and in the design process, as well as enhancements in the system design that were associated with the development of the IBM 3081. Application of LSI technology to the design of the 3081 Processor Unit, which contains almost 800,000 logic circuits, required extensions to silicon device packaging, interconnection, and cooling technologies. A key achievement in the 3081 is the incorporation of the thermal conduction module (TCM), which contains as many as 45,000 logic circuits, provides a cooling capacity of up to 300 W, and allows the elimination of one complete level of packaging—the card level. Reliability and serviceability objectives required new approaches to error detection and fault isolation. Innovations in system packaging and organization, and extensive design verification by software and hardware models, led to the realization of the increased performance characteristics of the system.",1982,0, 41,Architecture Optimization of Aerospace Computing Systems,"Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.",1983,0, 42,Metric Analysis and Data Validation Across Fortran Projects,"The desire to predict the effort in developing or explain the quality of software has led to the proposal of several metrics in the literature. As a step toward validating these metrics, the Software Engineering Laboratory has analyzed the Software Science metrics, cyclomatic complexity, and various standard program measures for their relation to 1) effort (including design through acceptance testing), 2) development errors (both discrete and weighted according to the amount of time to locate and frix), and 3) one another. The data investigated are collected from a production Fortran environment and examined across several projects at once, within individual projects and by individual programmers across projects, with three effort reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. 
For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither Software Science's E metric, cyclomatic complexity nor source lines of code appears to relate convincingly better with effort than the others",1983,0, 43,"Low Contrast Sensitivity of Radiologic, CT, Nuclear Medicine, and Ultrasound Medical Imaging Systems","The physical sensitivity of a medical imaging system is defined as the square of the output signal-to-noise ratio per unit of radiation to the patient, or the information/radiation ratio. This sensitivity is analyzed at two stages: the radiation detection stage, and the image display stage. The signal-to-noise ratio (SNR) of the detection stage is a physical measure of the statistical quality of the raw detected data in the light of the imaging task to be performed. As such it is independent of any software or image processing algorithms which belong properly to the display stage. The fundamental SNR approach is applied to a wide variety of medical imaging applications and measured SNR values for signal detection at a given radiation exposure level are compared to the optimal values allowed by nature. It is found that the engineering falls short of the natural limitations by an inefficiency of about a factor two for most of the individual radiologic system components, allowing for great savings in the exposure required for a given imaging performance when the entire system is optimized. The display of the detected information is evaluated from the point of view of observer efficiency, the fraction of the displayed information that a human observer actually extracts.",1983,0, 44,Effect of the Software Coincidence Timing Window in Time-of-Flight Assisted Positron Emission Tomography,"The effect of the software coincidence timing window (SCW) has been studied and demonstrated experimentally using Super PETT I, a time-of-flight (TOF) assisted positron emission tomograph. This method utilizes the TOF information to eliminate unwanted random and scattered coincidence events outside of a region of interest. The effect is substantial for high count rate studies, and can dramatically improve the noise contribution from measured attenuation factors. The effective coincidence window is reduced to 3.3 nsec for reconstructions of the field of view, and reduced to only 2 nsec for measurement of the attenuation factors.",1983,0, 45,Distributed Circuit Control for the DCS,"Processor-controlled time slot interchange equipments now in production for commercial applications could provide the cross-connect switching element of a Distributed Circuit Control (DCC) capability for a digital Defense Communications System (DCS). Real time automated rerouting of circuits around wartime damage could be effected by employing a routing algorithm implemented in software to generate channel cross-connect instructions for the time slot interchange hardware. Three candidate algorithms, designed originally for call or packet routing, were modified for routing and rerouting end-to-end dedicated circuits under the constraints of the National Communications System (NCS) restoration priority system. This paper focuses on the network modeling effort that was undertaken to provide quantitative data to support a comparative evaluation of the candidate algorithms. 
Predicted performance data are presented for the modified algorithms applied to a set of symmetric networks and operated in both allocation (routing) and trunk-fail (rerouting) modes. Parameters measured included circuit reroute time, quality of the routes, stability of the algorithm, network resources required, and processor sizing.",1983,0, 46,Interval Reliability for Initiating and Enabling Events,"This paper describes generation and evaluation of logic models such as fault trees for interval reliability. Interval reliability assesses the ability of a system to operate over a specific time interval without failure. The analysis requires that the sequence of events leading to system failure be identified. Two types of events are described: 1) initiating events (cause disturbances or perturbations in system variables) that cause system failure and 2) enabling events (permit initiating events to cause system failure). Control-system failures are treated. The engineering and mathematical concepts are described in terms of a simplified example of a pressure-tank system. Later these same concepts are used in an actual industrial application in which an existing chlorine vaporizer system was modified to improve safety without compromising system availability. Computer codes that are capable of performing the calculations, and pitfalls in computing accident frequency in fault tree analysis, are discussed.",1983,0, 47,Software Performance Modeling and Management,"This paper addresses methods to assess the impact of software on weapon system performance parameters such as reliability and operability/suitability. The latter is emphasized in major weapon-system go-ahead decisions. This paper discusses a system reliability model primarily intended to provide management insight and guidelines for identifying out-of-tolerance situations and needed corrective actions. Guidelines are discussed for judging if ``independent verification and test'' and ``weapon-system proof-of-compliance testing'' are successful. Guidelines are provided for comparing software and hardware in terms of total valid problems reported, resolution rates, and comparable difficulty of implementing and verifying the resolutions. This is done with respect to severity levels in MIL-STD-1679. The management of the operability/suitability issue is discussed and recommendations are made to both the procuring agency and the prime contractor. Software is an attractive medium in comparison to hardware in implementing complex functions because: a) there are more controllable means to reduce severe software defects, and b) it is easier to effect change. Properly managed software will have minimal difficulties with system reliability and operability/suitability. Proper software management includes: a) the application, during development, of proper design tools such as top-down design, structural programming, and programming teams; b) aggressive, independent testing and problem tracking activities; and c) application of the management elements presented in this paper. Such application requires contractor familiarity with user needs and capabilities as well as with the mission and operations of the system, so as to optimize the man/machine interface.",1983,0, 48,Multivariate Sampling Plans in Quality Control: A Numerical Example,"This paper presents an attributes acceptance sampling procedure where several measurements are available on each item in the sample set. 
It is thus especially relevant when these measurements on each item are correlated; usual procedures cannot account for correlations. Correlations are considered by using a classification rule to assign the attribute good or bad to each item in the sample set; this rule uses the non-parametric k-nearest neighbor estimator. The test by which the lot is accepted or rejected, based on the outcomes of the rule for all items in the sample set, is the same as in acceptance sampling by attributes. The estimator has only to be compensated for errors at the level of the classification rule. This procedure has been used since 1978 in industry, thanks to a software package which implements it. Applications have mostly been to process quality control and mechanical parts. The procedure has the interesting feature of applying under relaxed assumptions on the distributions of the measurements on the bad items. The numerical example covers such a case. Because the classification rule operates individually on each sample item, nothing is changed in the definitions of the acceptable quality level and the limiting quality level. They apply to the whole lot. This is true because the multivariate measurements on each item are aggregated into a binary attribute on each item, as in classical sampling by attributes.",1983,0, 49,On selectivity properties of discrete-time linear networks,"A definition of the quality factor and the resonant angle for discrete-time networks is presented. The definitions are based on the bilinear transform. Using this transform, the selectivity properties for discrete-time networks, such as switched capacitor (SC) and digital filters, are consistent with analog networks. Expressions for the quality factor and the resonant angle in terms of polar coordinate parameters are given, both for a complex-conjugate pole pair and for real-axis poles, as well as the inverse relations. Canonic expressions for generic forms of a biquadratic function are also given. Finally, an accurate method for measurement of these parameters in second-order SC-filters and digital filters implemented in software is described.",1984,0, 50,Design and Evaluation of a Fault-Tolerant Multiprocessor Using Hardware Recovery Blocks,"In this paper we consider the design and evaluation of a fault-tolerant multiprocessor with a rollback recovery mechanism.",1984,0, 51,A Performability Solution Method for Degradable Nonrepairable Systems,"An algorithm is developed for solving a broad class of performability models wherein system performance is identified with ""reward."" More precisely, for a system S and a utilization period T, the performance variable of the model is the reward derived from using S during T. The state behavior of S is represented by a finite-state stochastic process (the base model); reward is determined by reward rates associated with the states of the base model. Restrictions on the base model assume that the system in question is not repaired during utilization. It is also assumed that the corresponding reward model is a nonrecoverable process in the sense that a future state (reward rate) of the model cannot be greater than the present state. For this model class, we obtain a general method for determining the probability distribution function of the performance (reward) variable and, hence the performability of the corresponding system. Moreover, this is done for bounded utilization periods. 
The result is an integral expression which can be solved either analytically or numerically.",1984,0, 52,A Study of Software Failures and Recovery in the MVS Operating System,"This paper describes an analysis of system detected software errors on the MVS operating system at the Center for Information Technology (CIT), Stanford University. The analysis procedure demonstrates a methodology by which systems with automatic recovery features can be evaluated. Most common error categories are determined and related to the program in execution at the time of the error. The severity of the error is measured by evaluating the criticality of the program for continued system operation. The system recovery and error correction features are then analyzed and an estimate of the system fault tolerance to errors of different levels of severity is made.",1984,0, 53,Fault Tolerance in Binary Tree Architectures,"Binary tree network architectures are applicable in the design of hierarchical computing systems and in specialized high-performance computers. In this correspondence, the reliability and fault tolerance issues in binary tree architecture with spares are considered. Two different fault-tolerance mechanisms are described and studied, namely: 1) scheme with spares; and 2) scheme with performance degradation. Reliability analysis and estimation of the fault-tolerant binary tree structures are performed using the interactive ARIES 82 program. The discussion is restricted to the topological level, and certain extensions of the schemes are also discussed.",1984,0, 54,Dual Gated Nuclear Cardiac Images,"A data acquisition system has been developed to collect camera events simultaneously with continually digitized electrocardiograph signals and respiratory flow measurements. Software processing of the list mode data creates more precisely gated cardiac frames. Additionally, motion blur due to heart movement during breathing is reduced by selecting events within a specific respiratory phase. Thallium myocardium images of a healthy volunteer show increased definition. This technique of combined cardiac and respiratory gating has the potential of improving the detectability of small lesions, and the characterization of cardiac wall motion.",1984,0, 55,Static Data Flow Analysis of PL/I Programs with the PROBE System,"An experimental data flow analyzer for PL/I programs has been implemented within the PROBE system developed at the GM Research Laboratories. PROBE is an experimental software package that examines the internal structure of PL/I programs in order to expose error-prone design and programming features. This paper describes 1) the algorithms and data structures used by the data flow analyzer, 2) the salient aspects of PL/I usage in the analyzed production-level programs, and 3) the results of the data flow analysis.",1984,0, 56,Evaluation of Error Recovery Blocks Used for Cooperating Processes,"Three alternatives for implementing recovery blocks (RB's) are conceivable for backward error recovery in concurrent processing. These are the asynchronous, synchronous, and the pseudorecovery point implementations. Asynchronous RB's are based on the concept of maximum autonomy in each of concurrent processes. Consequently, establishment of RB's in a process is made independently of others and unbounded rollback propagations become a serious problem. In order to completely avoid unbounded rollback propagations, it is necessary to synchronize the establishment of recovery blocks in all cooperating processes. 
Process autonomy is sacrificed and processes are forced to wait for commitments from others to establish a recovery line, leading to inefficiency in time utilization. As a compromise between asynchronous and synchronous RB's we propose to insert pseudorecovery points (PRP's) so that unbounded rollback propagations may be avoided while maintaining process autonomy. We developed probabilistic models for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for related random variables. With these models we have estimated 1) the interval between two successive recovery lines for asynchronous RB's, 2) mean loss in computation power for the synchronized method, and 3) additional overhead and rollback distance in case PRP's are used.",1984,0, 57,A Software Package for Statistical Quality Control of Instrumentation,"The software package contains both usual and new acceptance sampling procedures (essentially by variables) especially for instrumentation and automatic test systems. The main newer procedures implemented in this package are: * By attributes: with finite lot sizes, with classification errors * By variables: bilateral test of the mean measurement, nonparametric sequential, Bayes with minimization of the mean global cost, multidimensional measurements on each sample item, with provision for the internal structure of the items. This software package is aimed at implementation on microprocessors or personal computers, for acceptance sampling by variables and for data logging of measurements. The past four years during which the package has been in use, have been very positive, especially for those applications to electronic instrumentation and automatic test systems, where more standard methods were unsatisfactory or unfit.",1984,0, 58,Performance/Reliability Measures for Fault-Tolerant Computing Systems,Some fault-tolerant computing systems are discussed and existing reliability measures are explained. Some performance/reliability measures are introduced. Several systems are compared by using numerical examples with the new measures.,1984,0, 59,Software and development process quality metrics,"This paper describes a framework for gathering and reporting software and development process quality metrics, as well as data engineering issues related to data collection and report generation. The framework is described in project-independent terms, including a methodology for applying metrics to a given software project. A key aspect of this application is the use of project milestones predicted by a failure rate model. For the purposes of this paper, a software project is one which delivers a software or system product, and involves between thirty and several hundred developers, testers, and project managers.",1984,0, 60,Validation of finite element analysis of the magnetic fields around fine cracks,"Reliable non-destructive testing of crack-like defects in welded structures, using magnetic particle inspection techniques, requires accurate information about the leakage flux resulting from applied current and fields. A range of cracks was prepared in carbon-manganese steel with maximum widths of 10 to 15μm and depths of 0.5 to 4.5mm. The magnetic disturbance caused by such defects was measured using a 0.12μm wide magneto-resistive thin film. 
Defects of similar dimensions were subsequently modelled using existing finite element software, tuned to meet the requirements imposed by the discretisation of regions having large aspect ratios. The predictions of the models, which were in close agreement with flux measurements, were used to determine the surface forces produced by magnetisation which act upon particles used in magnetic particle inspection.",1985,0, 61,A Statistical Method for Software Quality Control,"This paper proposes a statistical method that can be used to monitor, control, and predict the quality (measured in terms of the failure intensity) of a software system being tested. The method consists of three steps: estimation of the failure intensity (failures per unit of execution time) based on groups of failures, fitting the logarithmic Poisson model to the estimated failure intensity data, and constructing confidence limits for the failure intensity process. The proposed estimation method is validated through a simulation study. A method for predicting the additional execution time required to achieve a failure intensity objective is also discussed. A set of failure data collected from a real-time command and control system is used to demonstrate the proposed method.",1985,0, 62,Bayesian Extensions to a Basic Model of Software Reliability,"A Bayesian analysis of the software reliability model of Jelinski and Moranda is given, based upon Meinhold and Singpurwalla. Important extensions are provided to the stopping rule and prior distribution of the number of defects, as well as permitting uncertainty in the failure rate. It is easy to calculate the predictive distribution of unfound errors at the end of software testing, and to see the relative effects of uncertainty in the number of errors and in the detection efficiency. The behavior of the predictive mode and mean over time are examined as possible point estimators, but are clearly inferior to calculating the full predictive distribution.",1985,0, 63,Software Fault Tolerance: An Evaluation,"In order to assess the effectiveness of software fault tolerance techniques for enhancing the reliability of practical systems, a major experimental project has been conducted at the University of Newcastle upon Tyne. Techniques were developed for, and applied to, a realistic implementation of a real-time system (a naval command and control system). Reliability data were collected by operating this system in a simulated tactical environment for a variety of action scenarios. This paper provides an overview of the project and presents the results of three phases of experimentation. An analysis of these results shows that use of the software fault tolerance approach yielded a substantial improvement in the reliability of the command and control system.",1985,0, 64,An Experimental Study of Software Metrics for Real-Time Software,"The rising costs of software development and maintenance have naturally aroused interest in tools and measures to quantify and analyze software complexity. Many software metrics have been studied widely because of the potential usefulness in predicting the complexity and quality of software. Most of the work reported in this area has been related to nonreal-time software. In this paper we report and discuss the results of an experimental investigation of some important metrics and their relationship for a class of 202 Pascal programs used in a real-time distributed processing environment.
While some of our observations confirm independent studies, we have noted significant differences. For instance the correlations between McCabe's control complexity measure and Halstead's metrics are low in comparison to a previous study. Studies of the type reported here are important for understanding the relationship between software metrics.",1985,0, 65,Identifying Error-Prone Software—An Empirical Study,"A major portion of the effort expended in developing commercial software today is associated with program testing. Schedule and/ or resource constraints frequently require that testing be conducted so as to uncover the greatest number of errors possible in the time allowed. In this paper we describe a study undertaken to assess the potential usefulness of various product-and process-related measures in identifying error-prone software. Our goal was to establish an empirical basis for the efficient utilization of limited testing resources using objective, measurable criteria. Through a detailed analysis of three software products and their error discovery histories, we have found simple metrics related to the amount of data and the structural complexity of programs to be of value for this purpose.",1985,0, 66,A Technique for Estimating Performance of Fault-Tolerant Programs,A technique is presented for estimating the performance of programs written for execution on fail-stop processors. It is based on modeling the program as a discrete-time Markov chain and then using z-transforms to derive a probability distribution for time to completion.,1985,0, 67,Assessment of Software Reliability Models,This paper proposes a method for assessing software reliability models and its application to the Musa and Littlewood-Verrall models. It is divided into two parts.,1985,0, 68,Performance of a real time low rate voice codec,"A real time low rate voice coder/decoder has been developed and is being tested using speakers in simulated operational environments. The unit uses the TMS32010 to perform the signal processing functions, and is packaged in a manner suitable for an office or benign field environment. The software implements the following algorithms: 2400 bps LPC using the United States Government standard LPC-10 algorithm. 800 bps using vector quantization. 400 bps using frame repeat vector quantization with trellis coding of gain and pitch. The hardware configuration is described, along with a brief explanation of the available test configurations and execution options. A review of the coding algorithms is presented, with a summary of the operational parameters. Test results include measured execution times, program and data memory sizes. Preliminary listening tests indicate that the voice quality at 800 bps is very close to that achieved at 2400 bps with the standard algorithms.",1986,0, 69,Evaluation of natural language processing systems: Issues and approaches,"This paper encompasses two main topics: a broad and general analysis of the issue of performance evaluation of NLP systems and a report on a specific approach developed by the authors and experimented on a sample test case. More precisely, it first presents a brief survey of the major works in the area of NLP systems evaluation. Then, after introducing the notion of the life cycle of an NLP system, it focuses on the concept of performance evaluation and analyzes the scope and the major problems of the investigation. 
The tools generally used within computer science to assess the quality of a software system are briefly reviewed, and their applicability to the task of evaluation of NLP systems is discussed. Particular attention is devoted to the concepts of efficiency, correctness, reliability, and adequacy, and it is discussed how all of them basically fail to capture the peculiar features of performance evaluation of an NLP system. Two main approaches to performance evaluation are later introduced; namely, black-box- and model-based, and their most important characteristics are presented. Finally, a specific model for performance evaluation proposed by the authors is illustrated, and the results of an experiment with a sample application are reported. The paper concludes with a discussion on research perspectives, open problems, and the importance of performance evaluation to industrial applications.",1986,0, 70,Robust Storage Structures for Crash Recovery,"A robust storage structure is intended to provide the ability to detect and possibly correct damage to the structure. One possible source of damage is the partial completion of an update operation, due to a ""crash"" of the program or system performing the update. Since adding redundancy to a structure increases the number of fields which must be changed, it is not clear whether adding redundancy will help or hinder crash recovery. This paper examines some of the general principles of using robust storage structures for crash recovery. It also describes a particular class of linked list structures which can be made arbitrarily robust, and which are all suitable for crash recovery.",1986,0, 71,Development of a High-Performance Stamped Character Reader,"A high-performance optical character reader (OCR) capable of reading low-quality stamped alphanumeric characters, on a metal rod, is described in this paper. Reliable and precise recognition is achieved by means of a special scanner and dedicated recognition module. The scanner revolves around the metal rod and detects characters, thus avoiding rotation of the heavy and long rod. Diffused illumination which produces a clear image, a powerful pattern-matching method (the Multiple Similarity Method, MSM) implemented by the recognition module, and the use of adaptive rescan control result in a correct recognition rate of 99.93 percent. One recognition module can process images from up to five different scanners. The processing speed is about 1.5 s/rod. The design and industrial application of the system are discussed.",1986,0, 72,Probabilistic Short-Circuit Uprating of Station Strain Bus System - Mechanical Aspects,"Design of strength of station structures and clearances to accommodate forces and motions of strain bus during short circuit faults has been based on an energy method and a pendulum swing out model at Ontario Hydro. The energy method was developed some time ago to calculate the forces in a strain bus at the moment of zero motion following a fault. Several recent extensions of this approach are described in this paper. These include: (i) In-depth comparisons of predictions of peak tension, time of peak tension, and force on the structure with values from experiments on simple bus arrangements. (ii) Studies of insulator static and dynamic properties and incorporation of these values in the program. (iii) Extension of the theory to asymmetric faults and to four conductor busses.
This method of calculation has recently been used in the probabilistic analysis of the uprating of Ontario Hydro's Middleport Transformer Station (TS) as well as a number of other design applications. This study demonstrates the overall accuracy of this relatively simple approach and its potential for application to more complex arrangements of strain bus. This paper and three companion papers present the probabilistic uprating process as applied to Middleport TS.",1986,0, 73,State Estimation Simulation for the Design of an Optimal On-Line Solution,"A state estimation simulation package (SES) developed for studies and educational purposes was further upgraded with on-line capabilities and implemented as a production-grade program into a large-scale network control system. Although conceived for different applications and run-time environments, this transformation was realized smoothly and relatively quickly without modifying the algorithmic kernel; the result is an efficient state estimation package (SE) with real-time capabilities. The benefits of this development strategy are: - feasibility studies such as optimal metering are possible with a unified SE approach in the early phase of a new control center project; - the existence of a thoroughly tested SE simulation package accelerates the debugging phase of the on-line algorithm; - the joint effort speeds up the know-how transfer and feed-back between R&D and reduces the overall costs. A short description of the SES structure is given. The requirements and design goals of an on-line SE are outlined. Special emphasis is laid upon functional extensions such as: data base interfaces, the mapping of the network connectivity at equipment (terminal) level into a node/branch model and vice-versa, man-machine interfaces, etc. Methods for software-quality enhancement, performance measurements and results obtained with the on-line SE on a VAX 11/780 are presented.",1986,0, 74,Effect of System Processes on Error Repetition: A Probabilistic and Measurement Approach,"This paper presents an approach to system reliability modeling where failures and errors are not statistically independent. The repetition of failures and errors until their causes are removed is affected by the system processes and degrades system reliability. Four types of failures are introduced: hardware transients, software and hardware design errors, and program faults. Probability of failure, mean time to failure, and system reliability depend on the type of failure. Actual measurements show that the most critical factor for system reliability is the time after occurrence of a failure when this failure can be repeated in every process that accesses a failed component. An example involving measurements collected in an IBM 4331 installation validates the model and shows its applications. The degradation of system reliability can be appreciable even for very short periods of time. This is why the conditional probability of repetition of failures is introduced. The reliability model allows prediction of system reliability based on the calculation of the mean time to failure.
The comparison with the measurement results shows that the model with process dependent repetition of failures approximates system reliability with better accuracy than the model with the assumption of independent failures",1986,0, 75,Design of a Testbed for Studying Rapidly Reconfigurable Tactical Communication Networks,"This paper describes the design of a testbed for studying rapidly reconfigurable ground and airborne tactical communication networks which use directed beam couplings between nodes. The networks are subject to rapid changes in connectivity and topology caused by links fading in and out or by mobile users moving into and out of the directed beams. Such networks, which provide improved security, jam resistance and probability of intercept, are viable candidates for tactical use in the 1990s. The testbed has been designed around three major subsystems: the prototype network equipment, the monitor, and the stimulator. The prototype network equipment consists of six nodes, with all required software, and direct wire connections between nodes. The monitor subsystem records the significant arrival and departure times for each packet at each node. It stores this data and processes it to determine appropriate performance parameters. The stimulator subsystem generates packets and injects bit errors and link and node faults into the prototype network. The testbed will be used to develop network architectures and protocols for constructing and reconfiguring networks and to study new types of adaptive routing algorithms and error control. Most of the planned studies will make use of the ability of the network to measure transient network behavior.",1986,0, 76,Field monitoring of software maintenance,"Life cycle maintenance cost must be predicted for any new product. Correct management practice then requires the prediction to be checked by measurement, so that planning and budgets can be adjusted. This requires knowledge of the nature of the market, quality of the product and activities of the service organisation, and how they mutually interact. This paper describes this interaction, and the problems of data collection peculiar to each of the three areas. A relational database structure is defined to enforce consistency on the data. Particular attention is paid to the problems of having different copies of the software in different states of repair, having several versions of several products in use simultaneously, and recording running time in order to be able to measure reliability.",1986,0, 77,Experiments in software reliability: Life-critical applications,"Digital computers are being used more frequently for process control applications in which the cost of system failure is high. Consideration of the potentially life-threatening risk, resulting from the high degree of functionality being ascribed to the software components of these systems, has stimulated the recommendation of various designs for tolerating software faults. The author discusses four reliability data gathering experiments which were conducted using a small sample of programs for two problems having ultrareliability requirements: n-version programming for fault detection, and repetitive run modeling for failure and fault rate estimation. The experimental results agree with those of M. Nagel and J.A.
Skrivan (1982) in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates.",1986,0, 78,A functional approach to program testing and analysis,"An integrated approach to testing is described which includes both static and dynamic analysis methods and which is based on theoretical results that prove both its effectiveness and efficiency. Programs are viewed as consisting of collections of functions that are joined together using elementary functional forms or complex functional structures. Functional testing is identified as the input-output analysis of functional forms. Classes of faults are defined for these forms, and results are presented which prove the fault-revealing effectiveness of well defined sets of tests. Functional analysis is identified as the analysis of the sequences of operators, functions, and data type transformations which occur in functional structures. Theoretical results are presented which prove that it is only necessary to look at interfaces between pairs of operators and data type transformations in order to detect the presence of operator or data type sequencing errors. The results depend on the definition of normal forms for operator and data type sequencing diagrams.",1986,0, 79,Modeling very large array phase data by the Box-Jenkins method,"The quality of radio astronomical images made with an antenna array depends upon atmospheric behavior. As baselines and frequencies increase, phase variations become increasingly erratic. The phase fluctuations are time dependent and we found them to be correlated in time order in each baseline. We can represent these correlations by stochastic models. Models obtained by the Box-Jenkins method are referred to as auto-regressive integrated moving average processes (ARIMA). ARIMA models of VLA phase provide good short-term predictions that may be useful for improving present calibration techniques. ARIMA models of VLA phase are data dependent and can be used in a variety of situations. A technique that works in all cases can be programmed into a software package such that modeling can be accomplished with no operator interactions. Another important application of ARIMA models involves the use of Kalman filtering to reduce the atmospheric effects when self-calibration does not work well. The performance of the Kalman filter critically depends upon the models of the processes. An ARIMA model of the phase fluctuation can be represented in a state space form as noise in the Kalman filter equations.",1986,0, 80,Knowledge-based management of failures in autonomous systems,"This paper presents the design of a fault management system (FMS) for an unmanned and untethered platform. The system must automatically detect, diagnose, localize and reconfigure the system to cope with failures. Traditional fault tolerant approaches used in telephone switching, manned and unmanned satellites, commercial banking, airline reservations, air traffic control, and others are reviewed. Expert systems technology is used to extend these traditional approaches to achieve a highly reliable design capable of sustaining operation over many months with little or no communication. An existing simulator has been modified to allow fault injection and to model fault propagation. This provides a testbed for evaluating system candidates. A specific fault management hardware and software architecture has been selected.
Expert system diagnostic rules, which run on the fault tolerant base, are discussed. Diagnostic rule performance in detecting, localizing, and recovering from Autonomous Systems (AS) sensor, actuator, and computer subsystem failures during AS operation is analyzed.",1987,0, 81,A multiprocessor system for AUV applications,"Autonomous Underwater Vehicles (AUV's) require high processing throughput and fault tolerance combined with low power and volume. High throughput is directly related to mission complexity and the degree of vehicle autonomy. Complex acoustic and optical sensors must be interpreted in real time and decisions based on uncertain data must be formulated and executed. Unexpected occurrences including environmental anomalies and enemy threats force immediate action and revised tactical mission planning. The development of the Autonomous Command and Control Demonstration (ACCD) system as a component development of the Autonomous Underwater Vehicle (AUV) program by Lockheed Advanced Marine Systems (AMS) has estimated that peak rates greater than 100 MIPS are required for Command and Control processing. The processing itself will be a combination of signal, algorithmic and symbolic processing. Both deterministic and probabilistic algorithms will require computation. For vehicle operations in a fully autonomous mode, any component failures must be corrected by an automated fault tolerant system. The AUV software architecture, developed as part of a detailed analysis, established the performance requirements for the Command and Control hardware. The characteristics of various hardware architectures such as distributed, common memory, data flow, systolic, and connection type systems were established and assessed to determine the hardware best suited to the software structure. A heterogeneous hypercube was found to be optimum. The hypercube is reconfigurable for fault tolerance, is flexible, and can achieve high throughputs with low power and without special cooling systems. AMS will use a commercially-available hypercube as the basis of their Command and Control Architecture for an Unmanned Underwater Vehicle. This hypercube has a 32 bit architecture, extremely high message passing capability and will have image processing accelerators working in conjunction with standard cube nodes. Both symbolic and algorithmic processing will take place on the standard nodes. In this paper, system requirements, software hierarchy, computer architecture analysis and the resulting hypercube design will be discussed in detail.",1987,0, 82,The Circulation and Water Level Forecast Atlas: A New Product in Development at NOS,"The National Ocean Service (NOS) is developing a new product aimed at improving the prediction of water level and currents in key waterways of the United States. The Circulation and Water Level Forecast Atlas will provide a series of charts and graphs for predicting the tide and tidal current anywhere in a particular estuary, and for providing information on the effects of wind, barometric pressure, and river discharge. The atlas will be produced using a numerical hydrodynamic model implemented for a particular estuary using an extensive oceanographic and meteorological data set. The atlas will represent the circulation at many more locations than the traditional NOS ""Tidal Current Charts,"" which represent currents only at those locations where current data have been obtained and analyzed.
The additional locations will include those of special navigational or environmental interest where current measurements cannot be made using traditional moored current meters. The additional spatial coverage resulting from the use of a numerical model will be especially evident in the atlas' tidal height charts, which will depict the changing water level over the entire waterway for each hour of the tidal cycle. On each chart coheight lines will traverse the entire waterway; tide predictions will no longer be limited to a few coastal locations. Use of a numerical hydrodynamical model, combined with the results of statistical data analyses, will also allow the synthesis of important meteorological and river effects in a way that will permit factors to be applied to the tide or tidal current prediction based on wind, pressure, or river discharge information available to a user. In a waterway where NOS real-time water level gages are in operation, the atlas will supplement the NOS data retrieval software package called TIDES ABC, by providing a means to extrapolate the real-time data from these gages to other locations in the waterway. The first Circulation and Water Level Forecast Atlas is planned for Delaware River and Bay, where NOS conducted a 2 1/2-year circulation survey. The observations from that survey will be used to implement and assess the skill of the high-resolution numerical model that will be used to produce the atlas.",1987,0, 83,Applications of error intensity measures to bearing estimation,"In this paper we will discuss a new class of performance approximations for maximum likelihood type bearing estimation systems. This class is based on the application of various point process models to a sequence of error prone points along the likelihood trajectory. The point process model is described by two quantities: the intensity function of the local maxima locations over the parameter space, and a selection rule for the global maximum from the set of local maxima. This class gives new estimators to the Mean-Square Error and specializes to the approximation techniques of [1] and [6].",1987,0, 84,"A passenger vehicle onboard computer system for engine fault diagnosis, performance measurement and control","This paper describes a microprocessor based on-board instrumentation system with applications to engine fault diagnosis, performance measurement and control. A non contacting measurement of engine angular position, taken at the flywheel by means of a magnetic sensor, provides a signal proportional to crankshaft angular velocity. The signal is then appropriately digitized and interfaced to a microprocessor, along with timing information regarding engine position. Suitable processing of the signal by the microprocessor provides information in real time regarding crankshaft speed and torque nonuniformity, which is employed to detect engine faults and to yield a measure of engine performance. Experimental verification of a prototype instrument has been carried out on the road in a number of vehicles, with very successful results.
A variety of experiments have confirmed the usefulness of the system for applications in fault diagnosis and performance measurement, and the potential of the system for some aspects of engine control.",1987,0, 85,Comparing the Effectiveness of Software Testing Strategies,"This study applies an experimentation methodology to compare three state-of-the-practice software testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage criteria. The study compares the strategies in three aspects of software testing: fault detection effectiveness, fault detection cost, and classes of faults detected. Thirty-two professional programmers and 42 advanced students applied the three techniques to four unit-sized programs in a fractional factorial experimental design. The major results of this study are the following. 1) With the professional programmers, code reading detected more software faults and had a higher fault detection rate than did functional or structural testing, while functional testing detected more faults than did structural testing, but functional and structural testing were not different in fault detection rate. 2) In one advanced student subject group, code reading and functional testing were not different in faults found, but were both superior to structural testing, while in the other advanced student subject group there was no difference among the techniques. 3) With the advanced student subjects, the three techniques were not different in fault detection rate. 4) Number of faults observed, fault detection rate, and total effort in detection depended on the type of software tested. 5) Code reading detected more interface faults than did the other methods. 6) Functional testing detected more control faults than did the other methods.",1987,0, 86,Queueing Analysis of Fault-Tolerant Computer Systems,"In this paper we consider the queueing analysis of a fault-tolerant computer system. The failure/repair behavior of the server is modeled by an irreducible continuous-time Markov chain. Jobs arrive in a Poisson fashion to the system and are serviced according to FCFS discipline. A failure may cause the loss of the work already done on the job in service, if any; in this case the interrupted job is repeated as soon as the server is ready to deliver service. In addition to the delays due to failures and repairs, jobs suffer delays due to queueing. We present an exact queueing analysis of the system and study the steady-state behavior of the number of jobs in the system. As a numerical example, we consider a system with two processors subject to failures and repairs.",1987,0, 87,Test Data Selection and Quality Estimation Based on the Concept of Essential Branches for Path Testing,"A new coverage measure is proposed for efficient and effective software testing. The conventional coverage measure for branch testing has such defects as overestimation of software quality and redundant test data selection because all branches are treated equally. These problems can be avoided by paying attention to only those branches essential for path testing. That is, if one branch is executed whenever another particular branch is executed, the former branch is nonessential for path testing. This is because a path covering the latter branch also covers the former branch.
Branches other than such nonessential branches will be referred to as essential branches.",1987,0, 88,Process fault detection using constraint suspension,"In the paper the authors describe two tools for use in an intelligent fault-detection system. The first tool is a method for fault localisation, called constraint suspension, which helps us find and identify the parts of the process which are responsible for any inconsistencies in the measurements. The second tool gives us a flexible framework in which we can build a software model of the process in terms of hierarchies of component parts or objects; it is called object orientation. We use these tools to describe algorithms for automatic fault detection in a physical process plant. The algorithms are tested on a simulation of a liquid level control system.",1987,0, 89,"Towards a constructive quality model. Part 1: Software quality modelling, measurement and prediction","This paper considers the approach taken by the ESPRIT-funded REQUEST project to measuring, modelling and predicting software quality. The paper describes previous work on defining software quality and indicates how the work has been used by REQUEST to begin the formulation of a constructive quality model (COQUAMO). The paper concludes that the original goal of the project to produce a predictive quality model was unrealistic, but that a quality system incorporating prediction, analysis and advice is feasible.",1987,0, 90,An expert system for high voltage discharge tests,"Technical advances and reductions in cost of computer hardware and software have reached the point where personal computers can mimic the behavior of experts in assisting in both the performance of complex testing procedures, and in diagnosing the significance of the results. The computer program described here acts as an expert assisting in the performance and diagnosis of the results of discharge tests.",1987,0, 91,Error localization during software maintenance: generating hierarchical system descriptions from the source code alone,"An empirical study is presented that investigates hierarchical software system descriptions that are based on measures of cohesion and coupling. The study evaluates the effectiveness of the hierarchical descriptions in identifying error-prone system structure. The measurement of cohesion and coupling is based on intrasystem interaction in terms of software data bindings. The measurement of error-proneness is based on software error data collected from high-level system design through system test; some error data from system operation are also included. The data bindings software analysis and supporting tools are described, followed by the data analysis, interpretations of the results, and some conclusions",1988,0, 92,Production process modeling of software maintenance productivity,"Reports on efforts to develop and estimate a preliminary model of the software production process using pilot data from 65 software maintenance projects recently completed by a large regional bank's data processing department. The goals are to measure factors that affect software maintenance productivity, to integrate the quality and productivity dimensions of software measurements, and to examine the productivity of entire projects rather than only the programming phase, which typically accounts for less than half the effort on a software project. Variables relating to the quality of labor employed on the projects are included.
To investigate the set of potential productivity factors, the technique of data envelopment analysis (DEA) is used to estimate the relationship between the inputs and products of software maintenance. The general approach to this research is to model software development as a microeconomic production process utilizing inputs and producing products",1988,0, 93,Risk Assessment Methodology for Software Supportability (RAMSS),"Concerns the Risk Assessment Methodology for Software Supportability (RAMSS), the overall approach being tested for use by the US Air Force Operational Test and Evaluation Center. Although the RAMSS is not a mature methodology, elements of the methodology have been used since 1979. The derived baseline database of maintenance actions across 80 systems and more than 300 block releases is being further developed, and evaluation metrics, techniques and procedures are being refined. Risk regression models have been derived for the estimated risk based on productivity factors and a historical maintenance database, and the evaluated risk based on supportability quality criteria and a historical evaluation database. The RAMSS has been derived for the Air Force, but the elements of the methodology apply to any software maintenance application which is based on the concept of version releases",1988,0, 94,A model based on software quality factors which predicts maintainability,"Software quality metrics were developed to measure the complexity of software systems. The authors relate the complexity of the system as measured by software metrics to the amount of maintenance necessary for that system. The authors have developed a model which uses several software quality metrics as parameters to predict maintenance activity. A description is given of three classifications of metrics that are used to measure the quality of source code: code metrics, which measure physical characteristics of the software, such as length or number of tokens; structure metrics, which measure the connectivity of the software, such as the flow of information through the program and flow of control; and hybrid metrics, which are a combination of code and structure metrics",1988,0, 95,A classification model of software comprehension,"Summary form only given. Measurements of understandability and maintainability of software have traditionally been developed on the basis of graph models, lexical counts, or information theory. The research reported proposes a theoretical foundation for a different way of evaluating software that can lead to a system of metrics. A model is provided that is based on human factors and information theory and can serve as a framework for development of cognitive-oriented measures of software quality.",1988,0, 96,Analysis of Reed-Solomon coded frequency-hopping systems with non ideal interleaving in worst case partial band jamming,"The interleaving span of coded frequency-hopped systems is often constrained to be smaller than the decoder memory length, i.e. nonideal interleaving is performed. Analysis of the performance of a hard-decision decoder and an erasure-control decoder of Reed-Solomon codes is presented, both for ideal and nonideal interleaving. The interference consists of worst-case partial-band noise jamming and thermal noise. The frequency hopping system considered uses orthogonal MFSK modulation and noncoherent demodulation with quality bit output based on Viterbi's ratio-threshold technique.
Optimization of the ratio-threshold and the erasure-control parameters is performed in worst-case partial-band jamming, and the resulting performance for several interleaver spans is presented.<>",1988,0, 97,Quality control of antenna calibration and site-attenuation data from open area test sites,"The use of swept-frequency methods coupled with some computation and graphics display capability, plus an understanding of the behavior of anomalies during data processing, is shown to provide an effective way to detect erroneous or suspicious data from antenna calibrations using the standard-site three-antenna method specified in ANSI Standard 63.4 (American National Standards Institute). How data from antenna and site calibration can be used to cross-check their own accuracy is demonstrated. With appropriate software, the evaluation can be done on site, immediately after the data have been taken. The same tools can be applied to evaluate the validity of site-attenuation data",1988,0, 98,Sequoia: a fault-tolerant tightly coupled multiprocessor for transaction processing,"The Sequoia computer is a tightly coupled multiprocessor that avoids most of the fault-tolerance disadvantages of tight coupling by using a fault-tolerant hardware-design approach. An overview is give of how the hardware architecture and operating system (OS) work together to provide a high degree of fault tolerance with good system performance. A description of hardware is followed by a discussion of the multiprocessor synchronization problem. Kernel support for fault recovery and the recovery process itself are examined. It is shown the kernel, through a combination of locking, shadowed memory, and controlled flushing of non-write-through cache, maintains a consistent main memory state recoverable from any single-point failure. The user shared memory is also discussed.<>",1988,0, 99,Applications of software reliability measurement to instruments and calculators,Software reliability growth models have been applied to several projects at two divisions of the Hewlett-Packard Corporation. The primary model used is the basic execution time model described by J.D. Musa et al. (1987). The model has been used to: predict software quality assurance duration prior to the beginning of testing; provide updated estimates of product reliability and projected release date; estimate software reliability at the completion of testing; and predict the customer's software product reliability experience after release. Model implementation and performance are discussed. Difficulties in the application of the model and suggested strategies for overcoming these difficulties are described.<>,1988,0, 100,An evaluation of software structure metrics,"Evaluates some software design metrics, based on the information flow metrics of S. Henry and D. Kafura (1981, 1984), using data from a communications system. The ability of the design metrics to identify change-prone, error-prone, and complex programs was contrasted with that of simple code metrics. It was found that the design metrics were not as good at identifying change-prone, fault-prone, and complex programs as simple code metrics (i.e. lines of code and number of branches). It was also observed that the compound metrics, built up from several different basic counts, can obscure underlying effects, and thus make it more difficult to use metrics constructively",1988,0, 101,Five years experience with a new method of field testing cross and quadrature polarized mho distance relays. II. 
Three case studies,"For pt.I see ibid., vol.3, no.3, p.880-6 (1988). In part I the authors discussed the testing of the faulted relay element of the cross and quadrature polarized mho distance relays. In this second part, they discuss three case studies in which the test method described in part I allowed the Saskatchewan Power Corporation (SaskPower), using off-the-shelf test equipment, to accurately predict the relay operating characteristic, improve relay performance, and more accurately set the distance relay for phase-to-phase (LL) and single-line-to-ground (SLG) faults",1988,0, 102,Second IEE/BCS Conference: Software Engineering 88 (Conf. Publ. No.290),The following topics were dealt with: software prototyping; quality assurance; STARTS; measurements and metrics; software reliability; programming paradigms; cost control; software reuse; software maintenance; software testing; software tools; specification tools; specification testing; and embedded systems. Abstracts of individual papers can be found under the relevant classification codes in this or other issues,1988,0, 103,Relevancy in fault detection analysis,"The author discusses the concept of relevancy of circuits, parts, and part failure to system performance and the impact of relevancy to fault detection analysis results. Relevancy requires that all circuits, parts, and part failure modes in a system be analyzed to determine if, in the event of an equipment malfunction, there is an adverse effect on system performance. Thus, a circuit failure critical to system performance is relevant, whereas a failure not critical to system performance is nonrelevant and therefore does not affect equipment operation. By excluding nonrelevant circuits, parts, and part failure modes form the fault detection analysis, the results of the fault detection predictions for a system's built-in test (BIT) design and/or fault detection test software are more accurate",1988,0, 104,Characterizing the software process: a maturity framework,A description is given of a software-process maturity framework that has been developed to provide the US Department of Defense with a means to characterize the capabilities of software-development organizations. This software-development process-maturity model reasonably represents the actual ways in which software-development organizations improve. It provides a framework for assessing these organizations and identifying the priority areas for immediate improvement. It also helps identify those places where advanced technology can be most valuable in improving the software-development process. The framework can be used by any software organization to assess its own capabilities and identify the most important areas for improvement.<>,1988,0, 105,Evaluation of system BIST using computational performance measures,"The impact of built-in self-test (BIST) techniques and system maintenance strategies on the performance of a VLSI processor system is examined. The specific BIST technique used was shown to have a significant influence upon instantaneous and cumulative system reward. It was shown that the additional overhead of the distributed and BILBO approaches is justified for this model when area utilization and cumulative area utilization are considered. For the assumed design parameters, the results presented allow a VLSI system designer to choose an optimal configuration based on system requirements and individual component parameters. 
The optimal performance will also depend on system and component parameters such as processor failure rates and fault coverage. These results are relevant to the design, evaluation, and optimization of highly reliable, high-performance digital processing systems",1988,0, 106,Hybrid fault diagnosability with unreliable communication links,"Hybrid fault diagnosability in distributed multiprocessor systems is considered for the case in which, in addition to units being faulty, communication links among units can also be faulty. A hybrid fault situation is a (bounded) combination of hard and soft failing units. A novel hybrid fault diagnosability, denoted (t/ts-unit: σ-link)-diagnosability is introduced (a total of t or fewer units can be faulty with at most ts of them soft-failing; σ is the number of incorrect test outcomes caused by unreliable links). This diagnosability is compatible with the previously known t/ts-diagnosability and can additionally tolerate up to σ incorrect test outcomes to give always-correct diagnosis for any hybrid-fault (HF)-compatible syndrome from a hybrid fault situation. It is shown that an O(|E|) algorithm, proposed originally by the authors (see ibid., vol.C-35, p.503-10, 1986) for faulty unit identification in ts-diagnosable systems, can be used to analyze a syndrome from a (ts-unit σ-link)-diagnosable system efficiently without any preclassification of the syndrome in terms of HF-compatibility. It is shown that this algorithm not only identifies all the faulty units associated with an HF-compatible syndrome but also produces nonempty, always-correct diagnosis for many HF-incompatible syndromes",1988,0, 107,Estimating metrical change in fully connected mobile networks-a least upper bound on the worst case,"A least upper bound is derived on the amount of adjustment of virtual token passing (VTP) time needed to assure collision-free access control in VTP networks (i.e. networks that use 'time-out' or scheduling function-based access protocols) and in which nodes change their spatial configuration due to motion (although always stay within range and in line-of-sight of each other). Since the new bound is a function of network geographical size, as well as of node maximal speeds and the time that passed since the previous adjustment, it allows VTP times that are shorter than those found in previous publications, especially when intervals between adjustments grow larger. For most VTP networks, with large mixed populations of mobile and stationary users, the average VTP time (which is a major factor in performance of the access protocol) allowed by the new bound is shorter than that of any configuration-independent protocol",1988,0, 108,An experimental investigation of software diversity in a fault-tolerant avionics application,"Highly reliable and effective failure detection and isolation (FDI) software is crucial in modern avionics systems that tolerate hardware failures in real time. The FDI function is an excellent opportunity for applying the principle of software design diversity to the fullest, i.e., algorithm diversity, in order to provide gains in functional performance as well as potentially enhancing the reliability of the software. The authors examine algorithm diversity applied to the redundancy management software for a hardware fault-tolerant sensor array.
Results of an experiment are presented that show the performance gains that can be provided by utilizing the consensus of three diverse algorithms for sensor FDI",1988,0, 109,A simplified method for RPE coding with improved performance,"The regular-pulse-excitation (RPE) technique for speech coding gives good speech quality, but requires a heavy computational effort. A method for the excitation sequence search is proposed in which the pulse amplitudes are computed one at a time, rather than simultaneously in block, by solving a linear system of reduced dimension. The algorithm also provides an appreciable improvement in the performance, because the noise energy is minimized taking into account both quantization errors and the effect of pulses in the next block. The algorithm is implemented in an 8-kb/s codec, using vector quantization for the vocal tract all-pole model parameters. Computer simulations over a large database of male and female speech signals (about 80 s of speech) gave good results in terms of segmental signal-to-noise ratio, even in the absence of a pitch predictor",1988,0, 110,A case study on the fault-tolerizability of an archetypical shared memory minisuper multiprocessor,"Issues in assessing fault-tolerance are addressed. A specific archetypical high-performance shared-memory architecture is chosen as a test case (specifically the Encore Multimax, to be used in space defense applications and environments), and a set of target fault-tolerance requirements are specified. The requirements are established to meet realistically the anticipated needs of aerospace applications of the 1990s. A methodology is then presented to systematize the steps and document the alternatives to be evaluated and the issues to be resolved in the process of fault-tolerization. The design of the resultant final target fault-tolerant machine is presented. The advantage of designs based on parallel architectures is that resources can be dynamically reallocated to enhance either performance or fault-tolerance",1988,0, 111,Expert systems for image processing-knowledge-based composition of image analysis processes,"Expert systems for image processing are classified into four categories, and their objectives, knowledge representation, reasoning methods, and shortcomings are discussed. The categories are: (1) consultation systems for image processing; (2) knowledge-based program composition systems; (3) rule-based design systems for image segmentation algorithms; and (4) goal-directed image segmentation systems. The importance of choosing effective image analysis strategies is discussed. Two methods are proposed for characterizing image analysis strategies: one from a software engineering viewpoint and the other from a knowledge representation viewpoint. Several examples are given to demonstrate the effectiveness of these methods",1988,0, 112,Concurrent error correction in systolic architectures,"It is shown how two features of systolic arrays in which the cells retain partial results rather than pass them on can be used to facilitate testing and fault localization with no modification of the systolic design; the monitoring is performed either by software in the host processor or by hardware following the system output. One feature is the ability to enter identical sequences of inputs into two adjacent processor elements; the other is the resemblance of the data flow to that of a scan design.
In particular, a one-dimensional systolic architecture of interest is described and applied to a finite-impulse response filter. Both the error detection and correction coverages are considered in detail for the FIR filter. The concurrent error-correction technique is applicable only to operational reliability, but the concurrent error-detection technique is equally applicable to manufacturing tests, incoming tests, and periodic maintenance tests. A hardware implementation of the error-detection-and-correction technique is presented. The architecture is extended to a two-dimensional processor",1988,0, 113,SLS-a fast switch-level simulator [for MOS],"SLS, a large-capacity, high-performance switch-level simulator developed to run on an IBM System/370 architecture is described. SLS uses a model which closely reflects the behavior of MOS circuits. The high performance is the result of mixing a compiled model with the more traditional approach of event-driven simulation control, together with very efficient algorithms for evaluating the steady-state response of the circuit. SLS is used for design verification/checking applications and for estimating fault coverage",1988,0, 114,Rajdoot: a remote procedure call mechanism supporting orphan detection and killing,"Rajdoot is a remote procedure call (RPC) mechanism with a number of fault tolerance capabilities. A discussion is presented of the reliability-related issues and how these issues have been dealt with in the RPC design. Rajdoot supports exactly-once semantics with call nesting capability, and incorporates effective measures for orphan detection and killing. Performance figures show that the reliability measures of Rajdoot impose little overhead",1988,0, 115,Evaluation of the probability of dynamic failure and processor utilization for real-time systems,"It is shown how to determine closed-form expressions for task scheduling delay and active task time distributions for any real-time system application, given a scheduling policy and task execution time distributions. The active task time denotes the total time a task is executing or waiting to be executed, including scheduling delays and resource contention delays. The distributions are used to determine the probability of dynamic failure and processor utilization, where the probability of dynamic failure is the probability that any task will not complete before its deadline. The opposing effects of decreasing the probability of dynamic failure and increasing utilization are also addressed. The analysis first addresses workloads where all tasks are periodic, i.e., they are repetitively triggered at constant frequencies. It is then extended to include the arrival of asynchronously triggered tasks. The effects of asynchronous tasks on the probability of dynamic failure and utilization are addressed",1988,0, 116,A comparison of four adaptation algorithms for increasing the reliability of real-time software,"In a large, parallel, real-time computer system, it is frequently most cost-effective to use different software reliability techniques (e.g., retry, replication, and resource-allocation algorithms) at different levels of granularity or within different subsystems, dependent on the reliability requirements and fault models associated with each subsystem. 
The authors describe the results of applying four reliability algorithms, via RESAS (REal-time Software Adaptation System), to a sample real-time program executing on a multiprocessor",1988,0, 117,Experimental evaluation of software reliability growth models,"An experimental evaluation is presented of SRGMs (software reliability growth models). The experimental data sets were collected from compiler construction projects completed by five university students. The SRGMs studied are the exponential model, the hyperexponential model, and S-shaped models. It is shown that the S-shaped models are superior to the exponential model in both the accuracy of estimation and the goodness of fit (as determined by the Kolmogorov-Smirnov test). It is also shown that it is possible to estimate accurately residual faults from a subset of the test results. An estimation method is proposed for the hyperexponential model. It is based on the observation that the start time for testing is different for different program modules. It is shown that this method improves the goodness of fit significantly.<>",1988,0, 118,The implementation and application of micro rollback in fault-tolerant VLSI systems,"The authors present a technique, called micro rollback, which allows most of the performance penalty for concurrent error detection to be eliminated. Detection is performed in parallel with the transmission of information between modules, thus removing the delay for detection from the critical path. Erroneous information may thus reach its destination module several clock cycles before an error indication. Operations performed on this erroneous information are undone using a hardware mechanism for fast rollback of a few cycles. The authors discuss the implementation of a VLSI processor capable of micro rollback as well as several critical issues related to its use in a complete system.<>",1988,0, 119,Fault tolerant parallel processor architecture overview,"The authors address issues central to the design and operation of a Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource which is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Reliability analysis results are presented which demonstrate the reduced failure probability of such a system. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Performance analysis results are presented which quantify the temporal overhead involved in executing such fault tolerance-specific operations.<>",1988,0, 120,Generating hierarchical system descriptions for software error localization,"The purpose of this study is to quantify ratios of coupling and cohesion and use them in the generation of hierarchical system descriptions. The ability of the hierarchical descriptions to localize errors by identifying error-prone system structure is evaluated using actual error data. Measures of data interaction called data bindings, are used as the basis for calculating software coupling and cohesion. A 135000 source line system from a production environment has been selected for empirical analysis. Software error data were collected from high-level system design through system test and from field operation of the system. 
A set of five tools is applied to calculate the data bindings automatically, and cluster analysis is used to determine a hierarchical description of each of the system's 77 subsystems. An analysis-of-variance model is used to characterize subsystems and individual routines that had either many/few errors or high/low error correction effort",1988,0, 121,Automatically generated acceptance test: a software reliability experiment,"The author presents results of a software reliability experiment that investigates the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multiversion experiment and used the launch interceptor problem as a model problem. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault-interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations and for the use of this test method in other applications",1988,0, 122,A class of analysis-by-synthesis predictive coders for high quality speech coding at rates between 4.8 and 16 kbit/s,"The general structure of this class of coders is reviewed, and the particulars of its members are discussed. The different analysis procedures are described, and the contributions of the various coder parameters to the performance of the coder are examined. Quantization procedures for each transmitted parameter are given along with examples of bit allocations. The speech quality produced by these coders is high at 16 kb/s and good at 8 kb/s, but only fair at 4.8 kb/s. The use of postprocessing techniques changes the performance at lower rates, but more research is needed to further improve the coders",1988,0, 123,An analysis of several software defect models,"Results are presented of an analysis of several defect models using data collected from two large commercial projects. Traditional models typically use either program matrices (i.e. measurements from software products) or testing time or combinations of these as independent variables. The limitations of such models have been well-documented. The models considered use the number of defects detected in the earlier phases of the development process as the independent variable. This number can be used to predict the number of defects to be detected later, even in modified software products. A strong correlation between the number of earlier defects and that of later ones was found. Using this relationship, a mathematical model was derived which may be used to estimate the number of defects remaining in software. This defect model may also be used to guide software developers in evaluating the effectiveness of the software development and testing processes",1988,0, 124,From data to models yet another modeling of software quality and productivity,"Software systems are often subdivided into the levels: application concept, data processing concept and implementation. Evaluation, measurement and assessment have to take these three levels into consideration. 
Quality models have to consider active and passive elements of the software as well as dynamic characteristics on all three levels. Since software systems and software projects are unique, quality models have to be flexible. A scheme for quality modeling uses weight functions to define quality factors in terms of evaluation factors. With respect to levels of software description a number of ratios are defined in order to describe software quantitatively. Using functions which reflect past experience, it becomes possible to generate estimates over the product extent and the production effort",1988,0, 125,Learning from examples: generation and evaluation of decision trees for software resource analysis,"A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for one problem domain, specifically, that of software resource data analysis. The purpose of the decision trees is to identify classes of objects (software modules) that had high development effort, i.e. in the uppermost quartile relative to past data. Sixteen software systems ranging from 3000 to 112000 source lines have been selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4700 objects, capture a multitude of information about the objects: development effort, faults, changes, design style, and implementation style. A total of 9600 decision trees are automatically generated and evaluated. The analysis focuses on the characterization and evaluation of decision tree accuracy, complexity, and composition. The decision trees correctly identified 79.3% of the software modules that had high development effort or faults, on the average across all 9600 trees. The decision trees generated from the best parameter combinations correctly identified 88.4% of the modules on the average. Visualization of the results is emphasized, and sample decision trees are included",1988,0, 126,Assessing the quality of abstract data types written in Ada,"A method is presented for assessing the quality of ADTs (abstract data types) in terms of cohesion and coupling. It is argued that an ADT that contains and exports only one domain and exports only operations that pertain to that domain has the best cohesive properties, and that ADTs that make neither explicit nor implicit assumptions about other ADTs in the system have the best coupling properties. Formal definitions are presented for each of the cohesion and coupling characteristics discussed. Their application to Ada packages is also investigated, and it is shown how a tool can be developed to assess the quality of an Ada package that represents an ADT",1988,0, 127,Wideband polarimetric radar cross section measurement,"An ultrawideband polarimetric RCS (radar cross-section) measurement system built for investigations on basic vegetation scatterers like leaves and needles is presented. Due to a qualified 20 term calibration procedure and an extensive use of software tools, the system provides a higher quality standard level for future RCS target classification. Some of the outstanding features are fully calibrated monostatic and bistatic RCS target measurement, determination of the complex polarimetric scattering matrix in a frequency range from 1-26.5 GHz and complete elimination of spurious reflections. 
Additionally, time-domain capability makes it possible to get measurement results in both the frequency and time domains.",1988,0, 128,A survey of software functional testing techniques,"The authors survey several technical articles in the area of software testing and provide a cross section of software functional testing techniques. In particular, they focus on the systematic methodologies because such methodologies demonstrate the absence of unwanted erroneous functions and divide the testing effort into manageable pieces for automating the testing process. The authors also discuss the empirical approach to assessing software validation methods and focus on the static and the dynamic techniques used to characterize this approach. Several techniques were found to be very useful for discovering different types of errors. For instance, the symbolic evaluation can be used to prove the correctness of a program without executing it by symbolically evaluating the sequence of the assignment statements occurring in a program path. Structured walkthroughs and design inspections result in substantial improvements in quality and productivity through the use of formal inspections of the design and the code",1988,0, 129,Diagnostic automation-a total systems approach,"Diagnostics automation in the integrated support concept is described in terms of its primary attributes and their normal relationship to one another. The description is set forth with the understanding that each attribute may have both managerial and technological aspects and, in the broadest sense, could be evaluated from the standpoint of how it contributes to the resolution of chronic support deficiencies",1988,0, 130,Software safety management,"An approach to life-cycle management of software safety, continuing into the operational phases, is discussed. It is based on defect prevention, early-defect detection and removal, and critical-path analysis, with continuous measurement, analysis and evaluation taking place throughout the life cycle. This approach also takes into account the possibility that those responsible for software safety might not have strong software-analysis backgrounds nor the time to perform all of the software safety-related activities themselves.",1988,0, 131,Fault diagnosis for sparsely interconnected multiprocessor systems,"The authors present a general approach to fault diagnosis that is widely applicable and requires only a limited number of connections among units. Each unit in the system forms a private opinion on the status of each of its neighboring units based on duplication of jobs and comparison of job results over time. A diagnosis algorithm that consists of simply taking a majority vote among the neighbors of a unit to determine the status of that unit is then executed. The performance of this simple majority-vote diagnosis algorithm is analyzed using a probabilistic model for the faults in the system. It is shown that with high probability, for systems composed of n units, the algorithm will correctly identify the status of all units when each unit is connected to O(log n) other units. It is also shown that the algorithm works with high probability in a class of systems in which the average number of neighbors of a unit is constant.
The results indicate that fault diagnosis can in fact be achieved quite simply in multiprocessor systems containing a low to moderate number of testing conditions.",1989,0, 132,CHAOS^art: support for real-time atomic transactions,"CHAOS^art is an object-based, real-time operating system kernel that provides an extended notion of atomic transactions as the basic mechanisms for programming real-time, embedded applications. These transactions are expressed as object invocations with guaranteed timing, consistency, and recovery attributes. The mechanisms implemented by the CHAOS^art kernel provide a predictable, accountable, and efficient basis for programming with real-time transactions. These mechanisms are predictable because they have well-defined upper bounds on their execution times that are (can be) determined before their execution. They are accountable because their decisions are guaranteed to be honored as long as the system is in an application-specific safe state.",1989,0, 133,Tools for measuring software reliability,"The author discusses a measure of software reliability and various models for characterizing it, the result of 15 years of theoretical research and experimental application, which are moving into practice and starting to pay off. These tools let developers quantify reliability, give them ways to predict how reliability will vary as testing progresses, and help them use that information to decide when to release software. He examines the distinction between failures and faults and how these affect reliability. He compares execution-time models with calendar-time models, which are less effective, and discusses the choice of execution-time models. The author then describes a generic, step-by-step procedure to guide software reliability engineers in using the reliability models.",1989,0, 134,C-DOT DSS-maintenance software,"Switching systems are expected to meet stringent performance criteria, in order to meet the high availability requirements laid down for them. To meet these requirements, major emphasis is placed on reliability, quality of service and maintainability, in the Centre for Development of Telematics (C-DOT) digital switching system (DSS). A robust hardware/software distributed architecture is developed. Within this architecture a loosely coupled recovery strategy is adopted, in which an error is confined and recovered in the module where the error is detected. C-DOT DSS has a wide range of automated capabilities, which provide it with the capability to run with no need for manual intervention. These maintenance functions are capable of handling multiple failures and severe malfunctions. However, a wide variety of user friendly maintenance functions are provided which give the maintenance personnel overriding powers. The flexibility, fault resilience, and user friendly maintenance capabilities are described",1989,0, 135,Perturbation techniques for detecting domain errors,"Perturbation testing is an approach to software testing which focuses on faults within arithmetic expressions appearing throughout a program. This approach is expanded to permit analysis of individual test points rather than entire paths, and to concentrate on domain errors. Faults are modeled as perturbing functions drawn from a vector space of potential faults and added to the correct form of an arithmetic expression. Sensitivity measures are derived which limit the possible size of those faults that would go undetected after the execution of a given test set.
These measures open up an interesting view of testing, in which attempts are made to reduce the volume of possible faults which, were they present in the program being tested, would have escaped detection on all tests performed so far. The combination of these measures with standard optimization techniques yields a novel test-data-generation method called arithmetic fault detection",1989,0, 136,Whistle: a workbench for test development of library-based designs,"The authors have applied the difference fault model (DFM) approach to the simulation of high-level designs and built a complete environment for test generation and fault simulation of designs expressed in a hardware description language (HDL), such as VHDL. Their environment, called Whistle, consists of a set of programs that supports a library-based design style. They describe the tools in Whistle, with special emphasis on the tools used to analyze each block in the design library, and the generation of code for the simulation of the F-faults in the design. The authors then evaluate Whistle from several points of view. First, they illustrate the accuracy of the fault coverage estimates by comparing them with the actual fault coverage obtained by gate-level fault simulation. Second, they examine the correlation between the accuracy measurements as given by their evaluation functions and the behavior of various characterization functions for a given block. This illustrates how critical it is to choose a good characterization function and how it affects the accuracy of the results. A third aspect of Whistle that they look at is its test-generation capability.",1989,0, 137,The Test Engineer's Assistant: a support environment for hardware design for testability,"A description is given of TEA (Test Engineer's Assistant), a CAD (computer-aided design) environment developed to provide the knowledge base and tools needed by a system designer for incorporating testability features into a design. TEA helps the designer meet the requirements of fault coverage and ambiguity group size. Fault coverage is defined as the percentage of faults that can be detected out of the population of all faults of a unit under test with a particular test set. An ambiguity group is defined as the smallest hardware entity in a given level of the system design hierarchy (that is, board, subsystem, and system) to which a fault can be isolated. The fault model considered throughout is the single stuck-at fault model. An example application of TEA is included.",1989,0, 138,A quantitative approach to monitoring software development,"A preliminary overview of a procedure for monitoring software development with respect to quality is provided. The monitoring process is based on the extraction, analysis and interpretation of metrics relating to software products and processes. The problem of interpreting software metrics data in terms that are meaningful to project and quality managers is focused on. The authors present a four-stage model of the interpretation of metrics. The model could be implemented as an advice system for managers. The authors discuss the general principles of project monitoring throughout the development life-cycle and identify metrics that may be collected throughout, from requirements specification to integration testing.
They suggest some general principles for assessing the cause of abnormal metric values, for distinguishing between causes, and for responding to unfavourable project circumstances.",1989,0, 139,Software inspections: an effective verification process,"The authors explain how to perform software inspections to locate defects. They present metrics for inspection and examples of its effectiveness. The authors contend, on the basis of their experiences and those reported in the literature, that inspections can detect and eliminate faults more cheaply than testing.",1989,0, 140,V&V in the next decade [software validation],"The author examines potential changes resulting from current research and evaluation studies of software verification and validation techniques and tools. She offers specific predictions with regard to: standards and guidelines; specification, modeling, and analysis; reviews, inspections, and walkthroughs; and tests.",1989,0, 141,The Sequoia approach to high-performance fault-tolerant I/O,"Summary form only given. The Sequoia computer utilizes a modular, tightly-coupled multiprocessor architecture, with hardware fault detection and software fault recovery to achieve high degrees of reliability, availability, and data integrity. The computer comprises from one to 64 processor elements (PE), and from two to 128 memory elements (ME) and I/O processing elements (IOE). Configurations can be tailored to meet the needs of the applications. Each processor element consists of two CPUs running in lockstep, with 256 kb of non-write-through cache. The non-write-through cache is the foundation of the fault-tolerant operation of the system. Each processor performs its work locally within its cache, periodically checkpointing its state to main memory. Each memory element consists of 16 Mb of memory, protected by ECC (error checking and correction), and 1024 test-and-set locks used to synchronize processor updates to shared data structures within the operating system. Writable data is shadowed in main memory; the data is stored on two different memory elements. Each I/O processing element consists of CPUs running in lockstep, with 2 Mb of local memory. The I/O processors are connected to an IEEE-standard 796 bus (the Multibus), which serves as an I/O bus connecting the IOE to up to 16 peripheral controllers of any type.",1989,0, 142,Performance properties of vertically partitioned object-oriented systems,"A vertically partitioned structure for the design and implementation of object-oriented systems is proposed, and their performance is demonstrated. It is shown that the application-independent portion of the execution overheads in object-oriented systems can be less than the application-independent overheads in conventionally organized systems built on layered structures. Vertical partitioning implements objects through extended type managers. Two key design concepts result in performance improvement: object semantics can be used in the state management functions of an object type and atomicity is maintained at the type manager boundaries providing efficient recovery points. The performance evaluation is based on a case study of a simple but nontrivial distributed real-time system application",1989,0, 143,Recovery point selection on a reverse binary tree task model,"An analysis is conducted of the complexity of placing recovery points where the computation is modeled as a reverse binary tree task model.
The objective is to minimize the expected computation time of a program in the presence of faults. The method can be extended to an arbitrary reverse tree model. For uniprocessor systems, an optimal placement algorithm is proposed. For multiprocessor systems, a procedure for computing their performance is described. Since no closed form solution is available, an alternative measurement is proposed that has a closed form formula. On the basis of this formula, algorithms are devised for solving the recovery point placement problem. The estimated formula can be extended to include communication delays where the algorithm devised still applies",1989,0, 144,A cryogenic monitor system for the liquid argon calorimeter in the SLD detector,"The authors describe the monitoring electronics system design for the liquid argon calorimeter (LAC) portion of the SLD (Stanford Linear Collider Large Detector). This system measures temperatures and liquid levels inside the LAC cryostat and transfers the results over a fiber-optic serial link to an external monitoring computer. System requirements, unique design constraints, and detailed analog, digital, and software designs are presented. Fault tolerance and the requirement for a single design to work in several different operating environments are discussed",1989,0, 145,An experimental evaluation of the effect of time-of-flight information in image reconstructions for the Scanditronix/PETT Electronics SP-3000 positron emission tomograph-preliminary results,"The effects of detector energy thresholds and time-of-flight information on image quality were investigated for the SP-3000 positron emission tomograph. Phantom studies were performed with different energy discriminator settings. A software preprocessor allowed images to be reconstructed from list mode data with different time-of-flight windows. In general, the inclusion of time-of-flight information in image reconstruction improved SNR (signal-to-noise ratio) over conventional filtered back-projection techniques. The data indicate that setting the energy discriminator level higher than the 170 keV currently used in the machine could be beneficial. The results demonstrated the need for good quality assurance testing for time-of-flight offsets and gains, and they indicate that the current look-ahead coincidence system may need to be modified",1989,0, 146,Diagnostic software and hardware for critical real-time systems,"The authors describe a diagnostic software library which contains algorithms to test read-only memory, read/write memory, address lines, the main processor instruction set, the numeric data processor instruction set, and mutual exclusion hardware. The software library also contains algorithms to respond to unexpected software interrupts. A dedicated subsystem diagnostics board called the Multibus Diagnostic Monitor (MDM) is also described",1989,0, 147,Performance analysis of voting strategies for a fly-by-wire system of a fighter aircraft,"Findings of studies on input processing of a digital fly-by-wire system of a fighter aircraft are presented. Objectives were to select a suitable software structure complying with reliability and fault tolerance requirements and to assess its computational load. Ramp and constant input signals with noise were studied based on Monte-Carlo methods. Voting strategies studied and compared include lower-median, upper-median, and weighted average.
Execution times and memory requirements of each strategy have also been assessed",1989,0, 148,A multiple DSP environment for on-line signature analysis of plant instrumentation,"A system consisting of four parallel digital signal processors that communicate with a computer host is proposed for implementing online fault detection and identification using industrial transducers. Programming of the fault detection algorithms is carried out in a block-diagram graphical language, specially designed for that purpose. The programming environment allows automatic program segmentation into parallel tasks, the generation of the corresponding code, the allocation of the coded tasks to the four processors, and simulation of the operation of the programmed system. An example (a turbine flow meter) is given to demonstrate the use of the software",1989,0, 149,Concurrent correspondent modules: a fault tolerant Ada implementation,"The authors propose the concept of concurrent correspondent modules as a strategy to implement software fault tolerance. It is based on the concurrent execution of the primary, and its redundant correspondent versions. Error detection and forward error recovery are performed by the comparative test. Using Ada tasks in conjunction with the abort facility, concurrent correspondent modules are easily and efficiently implemented. The use of concurrent Ada enhances the power of concurrent correspondent modules, making this an elegant, effective and feasible alternative to other fault-tolerant approaches.",1989,0, 150,The reliability of regeneration-based replica control protocols,"Several strategies for replica maintenance are considered, and the benefits of each are analyzed. Formulas describing the reliability of the replicated data object are presented, and closed-form solutions are given for the tractable cases. Numerical solutions, validated by simulation results, are used to analyze the tradeoffs between reliability and storage cost. With estimates of the mean times to site failure and repair in a given system, the numerical techniques presented can be applied to predict the fewest number of replicas required to provide the desired level of reliability",1989,0, 151,Reliable distributed sorting through the application-oriented fault tolerance paradigm,The design and implementation of a reliable version of the distributed bitonic sorting algorithm using the application-oriented fault tolerance paradigm on a commercial multicomputer is described. Sorting assertions in general are discussed and the bitonic sort algorithm is introduced. Faulty behavior is discussed and a fault-tolerant parallel bitonic sort developed using this paradigm is presented. The error coverage and the response of the fault-tolerant algorithm to faulty behavior are presented. Both asymptotic complexity and the results of run-time experimental measurements on an Ncube multicomputer are given. The authors demonstrate that the application-oriented fault tolerance paradigm is applicable to problems of a noniterative nature,1989,0,346 152,Cost effectiveness of software quality assurance,"The authors address the economic aspects of quality assurance through an error cost model that determines the impact of quality assurance activities on overall development cost and on the number of residual errors. This model is based on the DOD-STD-2167A development cycle and contrasts the cost of finding errors in each development phase with the cost incurred if the error is not found until a later phase.
Using data from a wide range of sources, the model demonstrates that an early investment in error prevention and discovery through quality assurance standards, reviews, audits, and inspections removes errors at a significantly lower cost than finding and eliminating errors during later integration test phases. Furthermore, improving error discovery and correction in early development phases significantly reduces the number of errors that must be found in the later testing phases, enabling this testing to be more effective and thereby reducing the residual errors in the delivered software product. The model also provides a framework for characterizing a particular software development project and for comparing it with other projects",1989,0, 153,Availability analysis of a certain class of distributed computer systems under the influence of hardware and software faults,"There are modeling techniques which may be used to assess the availability of either the software or the hardware of distributed computer systems. However, from the practical point of view, it would be more useful to assess the availability when the system is subjected to the combined effect of hardware and software faults either during its normal operating time or repair time. Such an availability modelling technique, for the class of distributed computer systems which are used for process control, is presented in this paper. To demonstrate the use of this technique the time function of the availability of a typical real-life distributed process control system is evaluated.",1989,0, 154,Formant speech synthesis: improving production quality,"The authors describe analysis and synthesis methods for improving the quality of speech produced by D.H. Klatt's (J. Acoust. Soc. Am., vol.67, p.971-95, 1980) software formant synthesizer. Synthetic speech generated using an excitation waveform resembling the glottal volume-velocity was found to be perceptually preferred over speech synthesized using other types of excitation. In addition, listeners ranked speech tokens synthesized with an excitation waveform that simulated the effects of source-tract interaction higher in neutralness than tokens synthesized without such interaction. A series of algorithms for silent and voiced/unvoiced/mixed excitation interval classification, pitch detection, formant estimation and formant tracking was developed. The algorithms can utilize two channels of input data, i.e., speech and electroglottographic signals, and can therefore surpass the performance of single-channel (acoustic-signal-based) algorithms. The formant synthesizer was used to study some aspects of the acoustic correlates of voice quality, e.g., male/female voice conversion and the simulation of breathiness, roughness, and vocal fry",1989,0, 155,Fault-tolerant real-time task scheduling in the MAFT distributed system,"The development of the task scheduling mechanism for the multicomputer architecture for fault-tolerance (MAFT) is discussed. MAFT is a distributed computer system designed to provide high performance and extreme reliability in real-time control applications. The impact of the system's functional requirements, fault-tolerance requirements, and architecture on the development of the scheduling mechanism is examined. MAFT uses a priority-list scheduling algorithm modified to provide extreme reliability in the monitoring of tasks and the detection of scheduling errors.
It considers such issues as modular redundancy, Byzantine agreement, and the use of multiversion software and dissimilar hardware. An example of scheduler performance with a realistic workload is presented",1989,0, 156,Application of Linear Adaptive Control to Some Advanced Benchmark Examples,"Linear Adaptive Control [1]-[8] is a new approach to adaptive controller design that uses a novel exogenous linear dynamical model of parameter perturbation ""effects,"" and an ordinary linear observer, to generate the required adaptive control signal u(t). By this means, the need for a nonlinear parameter estimator, as traditionally used in adaptive control, is eliminated and the resulting adaptive controller is completely linear and time-invariant (all controller ""gains"" are constant). The performance capabilities of linear adaptive controllers have been demonstrated in [4], [6], [7], [8] using relatively simple examples. In this paper the linear adaptive control technique is applied to several examples having complex forms of plant uncertainty. Computer simulation studies are presented to demonstrate the quality of adaptive performance achieved.",1989,0, 157,Insertion of fault detection mechanisms in distributed Ada software systems,"A technique for automatically inserting software mechanisms to detect single event upset (SEU) in distributed Ada systems is presented. SEUs may cause information corruption, leading to a change in program flow or causing a program to execute an infinite loop. Two cooperative software mechanisms for detecting the presence of these upsets are described. Automatic insertion of these mechanisms is discussed in relation to the structure of Ada software systems. A program, software modifier for upset detection (SMUD), has been written to automatically modify Ada application software and insert software upset detection mechanisms. As an example, the mechanisms have been incorporated into a system model that employs the MIL-STD-1553B communications protocol. This system model is used as a testbed for verifying that SMUD properly inserts the detection mechanisms. Ada is used for creating the simulation environment to exercise and verify the protocol. Simulation has been used to test and verify the proper functioning of the detection mechanisms. The testing methodology, a short description of the 1553B testbed, and a set of performance measures are presented",1989,0, 158,PSACOIN level 0 intercomparison-an international verification exercise on a hypothetical safety assessment case study,"A comparison of probabilistic system assessment (PSA) codes relevant to radioactive waste disposal is reported. This level-0 intercomparison was the first of a series of planned code verification exercises and was based on a simple model describing a hypothetical disposal system. PSA codes deal with uncertainty and variability in input parameters using a Monte Carlo approach, so that the analysis of the intercomparison results was complicated by the random fluctuation associated with the output variables to be estimated. The exercise was an important step in the process of assuring the quality of computer programs involved. The methodology could be useful in different contexts, whenever large Monte Carlo outputs affected by random variability are to be analyzed",1989,0, 159,Software reliability prediction for large and complex telecommunication systems,"The problem of predicting the number of remaining faults in a software system is studied.
Seven software projects are analyzed using a number of software structure metrics and reliability growth models. The following conclusions are drawn: there is no single model that can always be used, irrespective of the project conditions; software structure metrics (mainly size) do correlate with the number of faults; the assumptions of reliability growth models do not apply when the testing is structured and well organized; and sufficient data has to be collected from different projects to create a basis for predictions",1989,0, 160,Yet another software quality and productivity modeling-YAQUAPMO,"A modeling technique is proposed that uses weight functions to define factors of quality or productivity in terms of evaluation factors. As a result, an acyclic decomposition graph is obtained. Quality or productivity is then defined as the distance between the actual graph and a required graph. An assessment technique is proposed that permits decompositions of any reasonable depth. It is shown how specific models are to be generated by a set of elementary production rules. The approach proposed is also applicable to modeling cost or volume estimations",1989,0, 161,Computer-aided programming for message-passing systems: problems and solutions,"As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs",1989,0, 162,Testing of reliability-Analysis tools,"An outline is presented of issues raised in verifying the accuracy of reliability analysis tools. State-of-the-art reliability analysis tools implement various decomposition, aggregation, and estimation techniques to compute the reliability of a diversity of complex fault-tolerant computer systems. However, no formal methodology has been formulated for validating the reliability estimates produced by these tools. The author presents three stages of testing that can be performed on most reliability analysis tools to effectively increase confidence in a tool. These testing stages were applied to the SURE (semi-Markov unreliability range evaluator) reliability analysis tool, and the results of the testing are discussed",1989,0, 163,FOCUS: an experimental environment for validation of fault-tolerant systems-case study of a jet-engine controller,"A simulation environment that allows the run-time injection of transient and permanent faults and the assessment of their impact in complex systems is described. The error data from the simulation are automatically fed into the analysis software in order to quantify the fault-tolerance of the system under test. The features of the environment are illustrated with a case study of a fault-tolerant, dual-configuration real-time jet engine controller. The entire controller, described at the logic and functional levels, is simulated, and transient fault injections are performed. In the controller, fault detection and reconfiguration are performed by transactions over the communication links.
The simulation consists of the instructions specifically designed to exercise this cross-channel communication. The level of effectiveness of the dual configuration of the system to single and multiple transient errors is measured. The results are used to identify critical design aspects from a fault-tolerance viewpoint",1989,0, 164,A distributed fault tolerant architecture for nuclear reactor control and safety functions,"A fault-tolerant architecture that provides tolerance to a broad scope of hardware, software, and communications faults is being developed. This architecture relies on widely available commercial operating systems, local area networks, and software standards. Thus development time is significantly shortened, and modularity allows for continuous and inexpensive system enhancement throughout the expected 20-year life. The fault-containment and parallel-processing capabilities of computers are exploited to provide a high-performance, high-availability network capable of tolerating a broad scope of hardware, software, and operating system faults. The system can tolerate all but one known (and avoidable) single fault, two known and avoidable dual faults, and it will detect all higher-order fault sequences and provide diagnostics to allow for rapid manual recovery",1989,0, 165,Visual representation methodology for the evaluation of software quality,"The visual representation methodology presents data in a pictorial form, such as face graphs, which enables developers and designers to detect problems that would otherwise be difficult to determine from tables or other statistical data, especially when the amount of data is large or the data elements are many and varied. The visual representation methodology is especially applicable to software development projects, which involve complicated data that are often subjectively biased by human factors, which can compromise data reliability. The visual representation methodology was used to evaluate a number of software projects. Quality and productivity improvements were achieved by applying the methodology to the quality control, process control, and subcontract management of telecommunications software",1989,0, 166,Fault tolerance: a verification strategy for switching systems,"At GLOBECOM 87, R. Paterson described the new approach to performing failure mode analysis. This technique insured that each high-level functional failure mode of the detailed design is detected, recovered, and isolated according to the intent of the system-level design. Since that work, Bell-Northern Research has improved the technique and developed a toolset to make the analysis more efficient and effective. The toolset models a system from the high-level architecture to the device level while taking the software maintenance design into consideration. The model calculates the impact of subtle detail design changes at the system level and consequently identifies the effect on the end user and the operating company. The BNR design process and fault-tolerant verification process are described. To illustrate the approach considered, a switching network for a telephone switch is considered. It is concluded that the current fault tolerant verification strategy with the complementing toolset is an effective and efficient way of designing high-quality fault-tolerant systems",1989,0, 167,Elucidation of an ISDN performance management model,"A model for ISDN (integrated services digital network) performance management is discussed. 
According to this model, the three most important elements of performance management, in order of their importance, are autonomous receipt of failure and service degradation reports by the automated performance management environment (APME); development of tools to sectionalize problems in different layers of protocols; and incorporating correlation, analysis, and action functions, usually performed manually by technicians, into the APME. ISDN performance management tools can be organized into a hierarchy based on the open systems interconnection (OSI) layer model. The digital technology of the ISDN allows many of these tools to be designed in such a way that the service degradations are detected and reported to enable a pro-active approach, which maintains the service quality and isolates for repair the sections or elements causing the degradation in a way that is nondisruptive to service",1989,0, 168,On predicting software reliability,"Definitions of software reliability are presented, and an overview of reliability is given. The system-software-availability relationship is also explored",1989,0, 169,Methodologies for meeting hard deadlines in industrial distributed real-time systems,"Four new approaches for coping with strict timing constraints as encountered in industrial process control applications are presented. For each node in a distributed system, an asymmetrical two-processor architecture capable of guaranteeing response times is employed. One processor in each node is dedicated to the kernel of a real-time operating system. Its three reaction levels are constructively described by outlining their functional units and control procedures. Contemporary real-time computers are unable to manipulate external parallel processes simultaneously, and their response times are generally unpredictable. It is shown that these problems can be solved by endowing the node computers with novel process peripherals. They work in parallel and perform I/O operations precisely at user-specified instants. To cope with a transient overload of a node resulting from an emergency situation, a fault tolerant scheme that handles overloads by degrading the system performance gracefully and predictably is used. The scheme is based on the concept of imprecise results and does not utilize load sharing, which is usually impossible in industrial process control environments",1989,0, 170,Hyper-geometric distribution model to estimate the number of residual software faults,It is shown that the hyper-geometric distribution model can be applied to different types of test-and-debug data to estimate the number of initial software faults. Examples show that the fitness of the estimated growth curves of the model to real data is satisfactory. The relationship of the model to those proposed earlier is clarified. Some of them can be expressed as a special case of the proposed model. The S-shaped characteristic of the growth curve can also be introduced using this model,1989,0, 171,Assessing the adequacy of documentation through document quality indicators,"Case study results for a research effort funded by the US Naval Surface Warfare Center (NSWC) are presented. The investigation focuses on assessing the adequacy of project documentation based on an identified taxonomic structure relating documentation characteristics. Previous research in this area has been limited to the study of isolated characteristics of documentation and English prose, without considering the collective contributions of such characteristics.
The research described takes those characteristics, adds others and establishes a well-defined approach to assessing the 'goodness' of software documentation. The identification of document quality indicators (DQIs) provides the basis for the assessment procedure. DQIs are hierarchically defined in terms of document qualities, factors that refine qualities, and quantifiers that provide for the measurement of factors",1989,0, 172,Software metric classification trees help guide the maintenance of large-scale systems,"The 80:20 rule states that approximately 20% of a software system is responsible for 80% of its errors. The authors propose an automated method for generating empirically-based models of error-prone software objects. These models are intended to help localize the troublesome 20%. The method uses a recursive algorithm to automatically generate classification trees whose nodes are multivalued functions based on software metrics. The purpose of the classification trees is to identify components that are likely to be error prone or costly, so that developers can focus their resources accordingly. A feasibility study was conducted using 16 NASA projects. On average, the classification trees correctly identified 79.3% of the software modules that had high development effort or faults",1989,0, 173,CAPRES: a software tool for modeling and analysis of fault-tolerant computer architectures,"An engineering software tool is described that was developed to support efficient and cost-effective design of large-scale, parallel, fault-tolerant computer architectures (FTCAs). The tool, termed CAPRES (computer-aided performance and reliability evaluation system), allows the user to predict, by analytical means, the performance, reliability, and performability of candidate computer architectures executing candidate workloads. The algorithms of CAPRES use multidisciplinary techniques from hierarchical queuing-network theory, semi-Markov processes, hybrid-state systems theory and numerical integration. The sophisticated user interface and high-level front-end of CAPRES make this software system a truly integrated package for FTCA analysis and evaluation. To illustrate the role of CAPRES as a tool to aid in the analysis of FTCAs, an example of an Encore Multimax multiprocessor executing a parallel weapon-target-assignment algorithm is presented",1989,0, 174,Does Imperfect Debugging Affect Software Reliability Growth?,"This paper discusses the improvement of conventional software reliability growth models by elimination of the unreasonable assumption that errors or faults in a program can be perfectly removed when they are detected. The results show that exponential-type software reliability growth models that deal with error-counting data could be used even if the perfect debugging assumption were not held, in which case the interpretation of the model parameters should be changed. An analysis of real project data is presented.",1989,0, 175,Application of state analysis technology to surveillance systems,"
",1989,0, 176,Diagnostic tree design with model-based reasoning,"A reasoning procedure using quantitative models of connectivity and function has been developed to generate automatically multibranched diagnostic trees which can isolate faults within feedback loops and in the presence of multiple faults. The authors describe how the model-based reasoning system is used to generate automatically diagnostic trees that can have variable degrees of branching, from binary to ternary (nodes with high, OK, and low branches) to n-ary trees. With branching degrees at or above ternary, these trees are capable of fault isolating within loops and can in fact isolate multiple faults. The trees can utilize much of the information content in quantitative measurements to make efficient and accurate diagnoses not possible with the binary tree. Both efficiency and accuracy of diagnosis increase with the branching factor of the tree. Automated tree generation provides effective automated diagnostics to applications requiring low-cost hardware and fast response time",1989,0, 177,Software fault tolerance for a flight control system,"The aim of software fault tolerance is to introduce programming techniques which will allow the embedded software to maintain performance in the presence of hardware faults which include data, address and control bus corruptions. A case study is described in which the navigation and control software of a remotely piloted vehicle (RPV) is subjected to such transient fault conditions. The embedded software was designed to detect and recover from such faults. Various aspects of the design, fault conditions, experimental setup and results are discussed",1989,0, 178,Industrial metrology and sensors,"Industrial metrology is concerned with sensors to measure movement of machine-tool parts and monitor tool wear and the dimensions of artifacts in machining centers, sensors for robots in flexible manufacturing systems, sensors to gauge mating parts for selective assembly or allowing for interchangeability, and sensors for inspection and testing of assembled or part-assembled products. In general, the dimensional, shape, and physical properties of functional parts need to be inspected. As a consequence of the need to be highly efficient and quality conscious, manufacturing metrology is evolving from traditional engineering metrology (dominated by the skills of quality inspectors at the end of production lines) to automatic inspection methods (offline, in-cycle, and in-line) and utilizing microelectronic, computer (hardware and software), and novel optical techniques. Suitable sensing techniques, sensors, and transducers are essential to this developing situation. The author reviews the subject and emphasizes significant advances",1989,0, 179,A DSP-based active sensor interface performing on-line fault detection and identification,"Process-measuring instruments generating output signals in the form of a frequency are discussed. By applying a specific signal-analysis technique, which can be implemented by a mixture of hardware and software enhancements on a digital signal processor (DSP), it becomes possible to determine abnormal instrument operation and detect a faulty instrument component within minutes after its failure. The technique used comprises the transformation of the received signal to a train of unit pulses and its spectral analysis by the zero-crossing method. 
Although this technique presents a limited fault-discrimination ability, it can trace down quickly and economically the causes of major faults, as demonstrated by an application example",1989,0, 180,Case study of radial overhead feeder performance at 12.5 kV and 34.5 kV,"The authors compared the economic and electrical performance factors on one specific overhead radial distribution feeder that is in operation at 12.5 kV, but whose present load and growth require that it be upgraded. They examined the economics and electrical performance of several feeder upgrading options for a particular feeder on the Green Mountain Power system (USA) with respect to: feeder losses; feeder voltage regulation; capital costs and cost of losses; and voltage dips on the local 115 kV transmission system caused by feeder faults. The performance of four feeder configuration options is assessed. It is shown that there is no clear economic advantage to 34.5 kV and that there may be a disadvantage with respect to voltage dip performance",1989,0, 181,Comparison of two estimation methods of the mean time-interval between software failures,"The quantitative assessment of software reliability by using failure time data observed during software testing in the software development is addressed. On the basis of two models described by nonhomogeneous Poisson processes, exponential and delayed S-shaped software reliability growth models, the authors adopt the mean time between software failures as a reliability assessment measure and propose a method of software reliability assessment. Since the distributions of the time intervals between software failures for the two models are improper, the mean time intervals are obtained by standardization of the distributions. Further, applying the two models to actual data, numerical examples of the mean time between software failures are shown",1990,0, 182,Controllable signature check pointing scheme for transient error detection,"A program can fail during execution owing to software, hardware, or extraneous faults. There are three types of errors caused by extraneous faults: transient, intermittent, and permanent. This work deals with transient faults. The authors present a flexible and controllable transient error detection scheme called the controllable signature check pointing scheme. The new signature monitoring differs from existing schemes in three major aspects: (1) the intervals in the program to be monitored are decided at a higher level of abstraction (high-level-language level); (2) the interval size is chosen not arbitrarily but by applying control-flow and bulk complexity measures; (3) the interval size can be controlled to suit the application. The location of the signature check points and the frequency of occurrence of the check points can be controlled by varying the interval control-flow complexity factor (INCCF). Thus the error coverage and latency, memory overhead, and performance degradation can also be tuned to suit the application. Future refinements and extensions include signature check pointing of sequential code and reducing signature embedding overhead",1990,0, 183,DEPEND: a design environment for prediction and evaluation of system dependability,"The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability.
It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible",1990,0, 184,Fault recovery characteristics of the fault tolerant multiprocessor,"The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system",1990,0, 185,Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory,"The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration",1990,0, 186,Analytical techniques for diagnostic functional allocation,"The allocation of maintenance system elements to accomplish the diagnostic mission must be optimized to balance performance, size, weight, and cost. The authors discuss techniques used to create a functional dependency model of a subject system and derive an effectiveness and cost model for the diagnostic system. It is concluded that the diagnostic elements of a weapon system may be designed properly through the use of dependency models and adherence to a structured system engineering process. The critical process of functional allocation is the trade-off of diagnostic performance against operational system efficiencies. Models which have been created for this purpose assist the system engineer in the allocation and design processes, document the results for updates and changes, and help to achieve an optimized system design",1990,0, 187,Bridging the gap between design and testing of analog integrated circuits,"A global methodology for the design of analog and mixed analog/digital integrated systems is outlined. The methodology calls for a flexible framework that combines CAD software, simulators, measurement set-ups, and data-processing software. 
As an application example within such a framework, error modeling and estimation in multistage analog/digital converters is discussed using a new algorithm",1990,0, 188,Paging strategies in an object oriented environment,"Paging issues which arise in object-oriented programming systems are considered. Unlike conventional virtual memory systems, object-oriented systems provide clues about the underlying structure of the executing programs. This fact is used to suggest improved memory management strategies for object-oriented systems. These strategies are modeled using a kind of simple game on graphs. This game allows estimation of the performance needed to ensure object-fault-free computation",1990,0, 189,Market research of hardware and software with partial-order classification,"This work deals with one of the basic problems of market research: the evaluation of many different qualities, e.g. price, size, performance, and service. All the above are accumulated into comparable numbers and presented to the decision-makers as a total sum of qualities. To improve the techniques of data representation and accumulation, the author proposes the use of partial order classification for analyzing and representing sets of data vectors. He explains some of the basic principles of the theory and demonstrates its potential capabilities by a memory comparison between 33-MHz 386 PCs",1990,0, 190,A comprehensive approach for modeling and testing analog and mixed-signal devices,"An approach to optimizing the testing of analog and mixed-signal devices is presented. Once an accurate model has been developed, simple algebraic operations on the model can be used to select an optimum set of test points that will minimize the test effort and maximize the test confidence; estimate the parameters of the model from measurements made at the selected test points; predict the response of the device at all candidate test points as a basis for accepting or rejecting units; calculate the accuracy of the parameter estimates and response predictions on the basis of the random measurement error; and test the validity of the model, online, so that changes in the manufacturing process can be constantly monitored and the model can be updated. The authors show how each of these procedures can be performed using simple calls to routines that are available in both public domain and commercial linear algebra software packages. The approach is quite general and has been experimentally applied to the measurement of the frequency response of an amplifier-attenuator network, to fault diagnosis of a bandpass filter using time-domain measurements, and to efficient linearity tests of A/D and D/A converters",1990,0, 191,The software death cycle,"Some aspects of software quality have been fairly well explored-particularly those that pertain to the development phase. Other aspects, notably the measurement of the cost of keeping operational software in line with customer requirements, have received considerably less attention. The paper explores this area by proposing a means of assessing the ownership cost of such software. It shows how this may be used as a basis for deciding when a software system becomes obsolescent and should, on economic grounds, be retired from use",1990,0, 192,"PROOFS: a fast, memory efficient sequential circuit fault simulator","A super-fast fault simulator for synchronous sequential circuits, called PROOFS, is described.
PROOFS achieves high performance by combining all the advantages of differential fault simulation, single fault propagation, and parallel fault simulation, while minimizing their individual disadvantages. PROOFS minimizes the memory requirements, reduces the number of events that need to be evaluated, and simplifies the complexity of the software implementation. PROOFS requires an average of one fifth the memory required for concurrent fault simulation and runs 6 to 67 times faster on the ISCAS sequential benchmarks",1990,0, 193,Real-time computer network performance analysis based on ISO/OSI transport service definition,"A unified approach focuses on estimation of statistical parameters at the transport service access points from the states and transitions of the service endpoints in the transport layer. Unlike previous methods, this approach emphasizes simplicity, efficiency, and the possibility of real-time analysis. Other possible applications of this concept are discussed, e.g. timer value assignment, end-to-end global rate or throughput control. On the basis of the proposed concept, much more sophisticated real implementations, in end-to-end bases, could be considered for existing computer networks and for future open systems and integrated services",1990,0, 194,QUIETEST: a quiescent current testing methodology for detecting leakage faults,"A hierarchical leakage fault analysis methodology is proposed for IDDQ (quiescent power supply current) testing of VLSI CMOS circuits. A software system, QUIETEST, has been developed on the basis of this methodology. The software can select a small number of test vectors for IDDQ testing from the provided functional test set. Therefore, the total test time for IDDQ measurements can be reduced significantly to make IDDQ testing of VLSI CMOS circuits feasible in a production test environment. For two VLSI circuits QUIETEST was able to select less than 1% of functional test vectors from the full test set for covering as many leakage faults as would be covered if IDDQ was measured upon the application of 100% of the vectors.",1990,0, 195,A study on the effect of reengineering upon software maintainability,"The effect of reengineering on software maintainability was investigated in a laboratory experiment conducted within the METKIT research project of the European ESPRIT program for the study and promotion of the use of metrics in software engineering. The experiment was conducted as a case study in measuring software complexity and maintainability. However, the results also serve to assess the benefits of reengineering old programs. Maintainability is defined as the effort required to perform maintenance tasks, the impact domain of the maintenance actions, and the error rate caused by those actions. Complexity is defined as a combination of code, data, data flow, structure, and control flow metrics. The data collected demonstrate that reengineering can decrease complexity and increase maintainability, but that restructuring has only a minor effect on maintainability",1990,0, 196,Monitoring and control of distributed systems,"Describes the Distributed System Manager (DSM) being developed on Sun workstations connected by an Ethernet at the University of California. Effective utilization of distributed systems requires many management facilities such as resource management, performance analysis, behavior analysis, fault management, and security management.
Each of the management facilities becomes a client to the DSM and registers to the DSM a request of the form 〈assertion, action〉 if it wants the action to be taken when the assertion is detected. The DSM monitors the distributed system to collect information using sampling probes and tracing probes and stores the collected information in a history database. The DSM also detects the moment when the assertion in a registered request becomes satisfied and then takes the corresponding action. An assertion can be a system property holding or an event occurring at a time point or during a time interval. An action can be answering clients' queries to system status stored in the history database, notifying the detection of assertions, or exercising direct control over the system. The overall structure of the managed system and the DSM is given. Data modeling of the managed system, the mechanisms to collect information, and the representation scheme of the information in the history database are described. The assertion specification language based on the classical temporal logic is presented",1990,0, 197,Predictability measures for software reliability models,"A two-component predictability measure is presented that characterizes the long-term predictability of a software reliability growth model. The first component, average predictability, measures how well a model predicts throughout the testing phase. The second component, average bias, is a measure of the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources have been analyzed. The results seem to support the observation that the logarithmic model appears to have good predictability in most cases. However, at very low fault densities, the exponential model may be slightly better. The delayed S-shaped model which in some cases has been shown to have good fit, generally performed poorly",1990,0, 198,An integrated expert system framework for software quality assurance,"A software quality assurance framework using knowledge-based engineering technology is described. The knowledge-engineering technology uses an object-oriented database to store the knowledge (the software quality information), and rules and meta-rules are its inferential knowledge. A dependency-based truth maintenance system based on hypothetical reasoning is used for design evaluation of the software quality. This framework can provide knowledge-based assistance for quality assurance throughout the entire software development cycle. To ensure high quality software and achieve cost-effective software development and maintenance, software metrics are used during the entire software development cycle to measure and predict the quality of software products. Various metrics for software attributes for all phases of the software development cycle will be collected and stored in the object-oriented database. The integration of the knowledge base with the software quality framework provides a wide range of support to the development of large-scale software systems",1990,0, 199,Measuring software size by distinct lines,"The relationship between DLC (distinct line count) and NCSL (noncomment source lines) is studied on a number of programs, and it is found that, as a simple rule of thumb, the NCSL count can be estimated by twice the DLC. A more accurate model is derived by predicting NCSL from DLC and the number of lines that occur exactly once.
It is also shown that, for unrelated programs, the proportion of common lines is very small; hence, DLC is approximately additive. It is concluded that, overall, the DLC is a very attractive measure of size that has two basic advantages over NCSL: it is an intuitively more appealing measure of effort than NCSL, and the problems of measuring size of subsequent releases disappear when using DLC",1990,0, 200,The lines of code metric as a predictor of program faults: a critical analysis,"The relationship between measures of software complexity and programming errors is explored. Four distinct regression models were developed for an experimental set of data to create a predictive model from software complexity metrics to program errors. The lines of code metric, traditionally associated with programming errors in predictive models, was found to be less valuable as a criterion measure in these models than measures of software control complexity. A factor analytic technique used to construct a linear compound of lines of code with control metrics was found to yield models of superior predictive quality",1990,0, 201,Extending software complexity metrics to concurrent programs,"A metric for concurrent software is proposed based on an abstract model (Petri nets) as an extension of T.J. McCabe's (1976) cyclomatic number. As such, its focus is on the complexity of control flow. This metric is applied to the assessment of Ada programs, and an automatic method for its direct computation based on the inspection of Ada code is provided. It is pointed out, however, that wider experimentation is needed in order to better assess its effectiveness",1990,0, 202,A quantum modification to the Jelinski-Moranda software reliability model,"Recent studies show that the Jelinski-Moranda model always gives an optimistic prediction for software reliability. The major reason for this shortcoming is that each fault occurring in the model is assumed to have exactly the same contribution to the failure rate. In this study the authors assume that different faults may have different contributions to the failure rate. In addition, the structure of software is also incorporated in their approach",1990,0, 203,A software approach to fault detection on programmable systolic arrays,"Current approaches to fault detection on processor arrays can be classified as hardware-based or algorithmic. The former are particularly rigid while the latter are restricted in their application. The paper presents two versions of general-purpose software fault detection for programmable systolic arrays. The methods automatically transform cell programs to provide fault detection, allowing the user more flexibility than hardware methods and more generality than algorithmic methods. Additionally, since they are based in software, the fault detection can easily be eliminated when other methods are more appropriate",1990,0, 204,Testing of the rapidly developed prototypes,"A conflict occurs between rapid prototyping and the adequate testing of these prototypes. The author shows how much of the software testing process could be automated to produce faster and better quality prototypes. The application for which the automated testing tools were developed is a prototyped negotiation support system (NSS). The system's primary function is to assist in the negotiation of the networking services offered by Bellcore Client Companies (BCCs) to their customers.
The Negotiation is preceded by an initialization process (called an NSS Maintenance), which creates a customer-specific Data Base (DB). The System for Performance Testing of NSS (SPT) was developed to automate NSS testing. SPT's major function is to simulate the NSS execution of several different classes of negotiation scenarios. The SPT system has been in use for almost six months, and the preliminary estimates show that testing productivity has significantly improved. In the remainder of this paper, the methodology used to build SPT as a general automated testing tool is described",1990,0, 205,Using symbolic execution to aid automatic test data generation,"It is shown how symbolic execution is used to solve the internal variable problem in the Godzilla test data generator. In constraint-based testing, which is used by the system, the internal variable problem appears when the constraints that specify the test cases contain internal variables. The necessary background is developed by describing the constraint systems used by Godzilla and by discussing symbolic execution in general terms. The application of symbolic execution to the internal variable problem is presented. The discussion focuses on the software used, including algorithmic details. Following this, a practical example of using this system to detect a fault in a small program is presented.",1990,0, 206,Using CSP to develop trustworthy hardware,"An overview of a method for formalizing critical system requirements and decomposing them into requirements of the system components and a minimal, possibly empty, set of synchronization requirements is presented. The trace model of communicating sequential processes (CSPs) is the basis for the formal method, and the EHDM verification system is the basis for mechanizing proofs. The results of the application of this method to the top-level implementation of an error-detecting character repeater are discussed. The critical requirements of the repeater are decomposed into the requirements of its components. Provided that the components meet their derived requirements, the repeater has been proven to meet its critical requirements.",1990,0, 207,IEE Colloquium on `Control and Optimisation Techniques for the Water Industry: II' (Digest No.135),The following topics were dealt with: electric tariff optimisation for water treatment works; pipe network design; water distribution networks joint state and parameter estimation under bounded measurement error; knowledge-based techniques in pump scheduling; optimising control of water supply/distribution networks; set point control of water treatment; quality control; and Sample Collection Manager software,1990,0, 208,Propagation modelling for TV broadcasting at microwave frequencies,"Summary form only given. A receive system, mounted in a survey vehicle with a telescopic mast, has been used to measure field strength and picture quality throughout the Winchester area. Particular studies have been made of: comparison of the two modulation schemes, FM and VSB-FM; bit-error-ratios for both modulation schemes; cross-polar discrimination; comparison between winter and summer; multi-path effects, using the swept-frequency source; and propagation just beyond the horizon, using four remote data-loggers. These measurements have been compared with the predictions of the IBA transmitter coverage software, which uses a 50 m terrain data base.
A clutter data base has been added, with the same horizontal resolution, and various techniques have been assessed for using this information in the prediction software",1990,0, 209,Application of quantitative feedback theory (QFT) to flight control problems,"QFT has been applied to numerous analog and digital flight control design problems. These include linear time invariant (LTI) fixed compensation m output, n input (nm ) over a wide variety of flight conditions (FCs) with effector failures. Quantitative performance specifications are satisfied despite the failures, with no failure detection and identification. Rigorous nonlinear QFT design theory has been applied to large α problems for which LTI modeling is incorrect. QFT has been successfully applied to an X29 problem with varying numbers of right half plane poles and zeros (due to different FCs) close together",1990,0, 210,Systems analysts performance using CASE versus manual methods,"A laboratory experiment was conducted to determine if systems analysts who use computer-aided software engineering (CASE) tools perform work of higher quality than those who do not use CASE. A group of subjects used a CASE tool to prepare data-flow diagrams and data dictionary entries to represent a system. A second group used traditional pencil-and-paper methods. The outputs of the two groups were compared using three attributes of quality, namely correctness, completeness, and communicability. Correctness is the only attribute of quality in which a significant difference between the two groups of subjects was detected, the CASE tool users being more correct. This result is attributed to the nature of CASE tools. The lack of association between the use of a CASE tool and completeness and communicability has led to the conclusion that the CASE tool does not lead the user to represent the problem more accurately nor does it help the user to understand the problem better. Developing complete and easily understood data flow diagrams and data dictionaries are perhaps quality attributes of the systems analysts themselves, not of the tool they use",1990,0, 211,Constraint-driven synthesis from architectures described in ELLA,"Demonstrates how the new synthesis tool LOCAM, provided by Praxis Electronic Design, gives system designers, for the first time, an automatic route to silicon from behavioural descriptions of their intended architectures. LOCAM avoids the need for time-consuming, error-prone logic design and raises the design reference level from boolean logic equations to abstract descriptions of architectures. The paper illustrates how the performance of a chosen architecture can be pre-determined both by the nature of the ELLA description and by user-defined constraints imposed on LOCAM",1990,0, 212,An evaluation of some design metrics,"Some software design metrics are evaluated using data from a communications system. The design metrics investigated were based on the information flow metrics proposed by S. Henry and D. Kafura (1981) and the problems they encountered are discussed. The slightly simpler metrics used in this study are described. The ability of the design metrics to identify change-prone, error-prone and complex programs is contrasted with that of simple code metrics. Although one of the design metrics (informational fan-out) was able to identify change-prone, fault-prone and complex programs, code metrics (i.e. lines of code and number of branches) were better.
In this context `better' means correctly identifying a larger proportion of change-prone, error-prone and/or complex programs, while maintaining a relatively low false identification rate (i.e. incorrectly identifying a program which did not in fact exhibit any undesirable features)",1990,0, 213,Software diversity metrics quantifying dissimilarity in the input partition,"Considerations are presented which suggest measuring the diversity degree of fault-tolerant software systems by quantifying the dissimilarity in the input partition of alternative versions. Being closely related to the common failure behaviour of diverse programs and easily estimated by means of dynamic analysis, this method should provide suitable diversity metrics to support the testing and the assessment of ultra-high software reliability",1990,0, 214,Achieving dependability throughout the development process: a distributed software experiment,"Distributed software engineering techniques and methods for improving the specification and testing phases are considered. To examine these issues, an experiment was performed using the design diversity approach in the specification, design, implementation, and testing of distributed software. In the experiment, three diverse formal specifications were used to produce multiple independent implementations of a distributed communication protocol in Ada. The problems encountered in building complex concurrent processing systems in Ada were also studied. Many pitfalls were discovered in mapping the formal specifications into Ada implementations",1990,0, 215,Performance analysis of a generalized concurrent error detection procedure,"A general procedure for error detection in complex systems, called the data block capture and analysis monitoring process, is described and analyzed. It is assumed that, in addition to being exposed to potential external fault sources, a complex system will in general always contain embedded hardware and software fault mechanisms which can cause the system to perform incorrect computations and/or produce incorrect output. Thus, in operation, the system continuously moves back and forth between error and no-error states. These external fault sources or internal fault mechanisms are extremely difficult to detect. The data block capture and analysis monitoring process is concerned with detecting deviations from the normal performance of the system, known as errors, which are symptomatic of fault conditions. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e. capturing a block of data) and then analyzing the captured block in an attempt to determine whether the system is functioning correctly",1990,0, 216,Supporting service development for intelligent networks,"The author examines the development support required to assure the quality of intelligent network service specifications. He then assesses the current state of enabling technologies, which include the technologies for formal specification, software reuse, rapid prototyping, performance evaluation, behavioral property verification, and feature interaction analysis and arbitration. It is concluded that much existing work in software specification is relevant to the support of intelligent network service specification. 
There are, however, two major classes of technical difficulties that must be addressed: limited experience with intelligent network services and immaturity of support technologies",1990,0, 217,A framework for software quality measurement,"The authors propose a quality model as a framework which should facilitate the evolution of theoretically based systems of measurement for the processes and products of the software development lifecycle. They start by defining measurement and quality measurement, and they present a model for software quality measurement. This model is used to explore several fundamental concepts and as a framework for the development of a set of software quality metrics. The authors discuss how the model can be developed through experimental work and by developing current and potential applications of the model in its immature state. These include process optimization, quality specification, end product quality control, intermediate product quality control, and the prediction of software quality. The authors conclude with a discussion of current issues and future trends in this area. Topics covered include education, the evolution of standards for data definition, and tool support",1990,0, 218,Predicting software development errors using software complexity metrics,"Predictive models that incorporate a functional relationship of program error measures with software complexity metrics and metrics based on factor analysis of empirical data are developed. Specific techniques for assessing regression models are presented for analyzing these models. Within the framework of regression analysis, the authors examine two separate means of exploring the connection between complexity and errors. First, the regression models are formed from the raw complexity metrics. Essentially, these models confirm a known relationship between program lines of code and program errors. The second methodology involves the regression of complexity factor measures and measures of errors. These complexity factors are orthogonal measures of complexity from an underlying complexity domain model. From this more global perspective, it is believed that there is a relationship between program errors and complexity domains of program structure and size (volume). Further, the strength of this relationship suggests that predictive models are indeed possible for the determination of program errors from these orthogonal complexity domains",1990,0, 219,Software fault content and reliability estimations for telecommunication systems,"The problem of software fault content and reliability estimations is considered. Estimations that can be used to improve the control of a software projects are emphasized. A model of how to estimate the number of software faults is presented, which takes into account both the development process and the developed software product. A model of how to predict, before the testing has started, the occurrences of faults during the testing stage is also presented. These models, together with software reliability growth models, have been evaluated on a number of software projects",1990,0, 220,A modeling approach to software cost estimation,"The author describes a novel software estimation modeling process as well as the important productivity factors and the productivity measurement metrics used in the 5ESS project, one of the largest telecommunication projects at AT&T Bell Laboratories. 
The 5ESS switch is a modern digital electronic switching system with a distributed hardware and software architecture. The model estimation approach has greatly improved the quality of estimates. The study of productivity factors has resulted in some significant productivity and quality improvement processes in the 5ESS development community",1990,0, 221,Modeling of correlated failures and community error recovery in multiversion software,"Three aspects of the modeling of multiversion software are considered. First, the beta-binomial distribution is proposed for modeling correlated failures in multiversion software. Second, a combinatorial model for predicting the reliability of a multiversion software configuration is presented. This model can take as inputs failure distributions either from measurements or from a selected distribution (e.g. beta-binomial). Various recovery methods can be incorporated in this model. Third, the effectiveness of the community error recovery method based on checkpointing is investigated. This method appears to be effective only when the failure behaviors of program versions are lightly correlated. Two different types of checkpoint failure are also considered: an omission failure where the correct output is recognized at a checkpoint but the checkpoint fails to correct the wrong outputs and a destructive failure where the good versions get corrupted at a checkpoint",1990,0, 222,A microprocessor-based piezoelectric quartz microbalance system for compound-specific detection,"The authors describe the implementation of a portable, microprocessor-controlled, piezoelectric quartz microbalance (PQM) system suitable for remote and autonomous monitoring of air quality around hazardous waste sites or chemical spills. The system has been designed to detect nanogram-level mass change or sorbed compounds sensitive to the specific coating materials used. Miniaturized modular hybrid crystal clock oscillators were used to construct a four-sensor array. The quartz crystals of the four modules were exposed and coated with chemically specific compounds so that they would act as sorption detectors. These sensor elements were enclosed in a small flow-through gas chamber along with a fifth module which was left sealed and was used as a reference oscillator. The frequency of each sensor was separately mixed with that of the reference oscillator and multiplexed into the microprocessor. The difference frequencies were measured by the microprocessor through a programmable timer/counter chip. The temperature within the gas chamber was maintained constant by means of a PID (proportional-integral-derivative) controller implemented in software. A Peltier device was used as a heating/cooling element of the gas chamber",1990,0, 223,Superfluid helium tanker instrumentation,"An instrumentation system for a 1992 Space Shuttle flight demonstration of a superfluid helium (SFHe) tanker and transfer technology is presented. The system provides measurement of helium temperatures, pressures, flow rates, and mass, and it detects the presence of liquid or vapor. The instrumentation system described consists of analog and digital portions which comprise a fault tolerant, compact, and relatively lightweight space-quality electronics system. The data processing hardware and software are ground-commandable, perform measurements asynchronously, and format telemetry for transmission to the ground. A novel heat pulse mass gaging technique is described and a new liquid/vapor sensor is presented. 
Flowmeters for SFHe are discussed, and a SFHe fountain-effect pump is described. Results of tests to date are presented",1990,0, 224,Predicting source-code complexity at the design stage,"It is shown how metrics can be used to gauge the quality of source code by evaluating its design specifications before coding, thus shortening the development life cycle. The predictive abilities of several types of metrics, all quantitative, are evaluated, and statistics that can be used to adapt them to a particular project are provided. The study data consist of programs written by students, but, because the programs are large, the professor did not provide the specifications, and the test data were not provided, it is believed that the programs are sufficiently realistic for the approach presented to be generally valid.",1990,0, 225,Empirically guided software development using metric-based classification trees,"The identification of high-risk components early in the life cycle is addressed. A solution that casts this as a classification problem is examined. The proposed approach derives models of problematic components, based on their measurable attributes and those of their development processes. The models provide a basis for forecasting which components are likely to share the same high-risk properties, such as being error-prone or having a high development cost. Developers can use these classification techniques to localize the troublesome 20% of the system. The method for generating the models, called automatic generation of metric-based classification trees, uses metrics from previous releases or projects to identify components that are historically high-risk.",1990,0, 226,Realizing the V80 and its system support functions,"An overview is given of the architecture of and overall design considerations for the 11-unit, 32-b V80 microprocessor, which includes two 1-kB cache memories and a branch prediction mechanism that is a new feature for microprocessors. The V80's pipeline processing and system support functions for multiprocessor and high-reliability systems are discussed. Using V80 support functions, multiprocessor and high-reliability systems were realized without any performance drop. Cache memories and a branch prediction mechanism were used to improve pipeline processing.",1990,0, 227,The use of self checks and voting in software error detection: an empirical study,"The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks.
Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions",1990,0, 228,Measures of testability as a basis for quality assurance,"Program testing is the most used technique for analytical quality assurance. A lot of time and effort is devoted to this task during the software lifecycle, and it would be useful to have a means for estimating this testing effort. Such estimates could be used, on one hand, for guiding construction and, on the other, to help organise the development process and testing. Thus the effort needed for testing is an important quality attribute of a program; they call it its testability. They argue that a relevant program characteristic contributing to testability is the number of test cases needed for satisfying a given test strategy. They show how this can be measured for glass (white) box testing strategies based on control flow. In this case, one can use structural measures defined on control flowgraphs which can be derived from the source code. In doing so, two well researched areas of software engineering testing strategies and structural metrication are brought together.",1990,0, 229,Performance analysis of real-time software supporting fault-tolerant operation,"Analyzing the performance of real-time control systems featuring mechanisms for online recovery from software faults is discussed. The application is assumed to consist of a number of interacting cyclic processes. The underlying hardware is assumed to be a multiprocessor, possibly with a separate control processor. The software structure is assumed to use design diversity along with forward and/or backward recovery. A detailed but efficiently solvable model for predicting various performance and reliability characteristics is developed. One of the key ideas used in modeling is hierarchical decomposition, which enables computation of level-oriented performance parameters in an efficient manner. The model is general, and adaptable for a number of useful special cases",1990,0, 230,Bayes predictive analysis of a fundamental software reliability model,"The concepts of Bayes prediction analysis are used to obtain predictive distributions of the next time to failure of software when its past failure behavior is known. The technique is applied to the Jelinski-Moranda software-reliability model, which in turn can show an improved predictive performance for some data sets even when compared with some more sophisticated software-reliability models. A Bayes software-reliability model is presented which can be applied to obtain the next time to failure PDF (probability distribution function) and CDF (cumulative distribution function) for all testing protocols. The number of initial faults and the per-fault failure rate are assumed to be s-independent and Poisson and gamma distributed respectively. For certain data sets, the technique yields better predictions than some alternative methods if the prequential likelihood and U-plot criteria are adopted",1990,0, 231,"Comments, with reply, on 'Predicting source-code complexity at the design stage' by S. Henry and C.
Selig","The commenter objects to the claim made by the authors of the above-mentioned article (see ibid., vol.7, no.2, p.36-44, 1990) that Halstead's E (effort) metric has been extensively validated and to a lack of precise definition for units of 'elementary mental discrimination.' With regard to the first point, S. Henry points out that since E seems to give an indication of error-prone software, it must be measuring some aspect of the source code, and that many studies have shown high correlations between effort and lines of code and McCabe's cyclomatic complexity. She agrees with the commenter's second point.",1990,0, 232,A case study of Ethernet anomalies in a distributed computing environment,"Fault detection and diagnosis depend critically on good fault definitions, but the dynamic, noisy, and nonstationary character of networks makes it hard to define what a fault is in a network environment. The authors take the position that a fault or failure is a violation of expectations. In accordance with empirically based expectations, operating behaviors of networks (and other devices) can be classified as being either normal or anomalous. Because network failures most frequently manifest themselves as performance degradations or deviations from expected behavior, periods of anomalous performance can be attributed to causes assignable as network faults. The half-year case study presented used a system in which observations of distributed-computing network behavior were automatically and systematically classified as normal or anomalous. Anomalous behaviors were traced to faulty conditions. In a preliminary effort to understand and catalog how networks behave under various conditions, two cases of anomalous behavior are analyzed in detail. Examples are taken from the distributed file-system network at Carnegie Mellon University",1990,0, 233,Empirically based analysis of failures in software systems,"An empirical analysis of failures in software systems is used to evaluate several specific issues and questions in software testing, reliability analysis, and reuse. The issues examined include the following: diminishing marginal returns of testing; effectiveness of multiple fault-detection and testing phases; measuring system reliability versus function or component reliability; developer bias regarding the amount of testing that functions or components will receive; fault-proneness of reused versus newly developed software; and the relationship between degree of reuse and development effort and fault-proneness. Failure data from a large software manufacturer and a NASA production environment were collected and analyzed",1990,0, 234,Experimental evaluation of the fault tolerance of an atomic multicast system,"The authors present a study of the validation of a dependable local area network providing multipoint communication services based on an atomic multicast protocol. This protocol is implemented in specialized communication servers, that exhibit the fail-silent property, i.e. a kind of halt-on-failure behavior enforced by self-checking hardware. The tests that have been carried out utilize physical fault injection and have two objectives: (1) to estimate the coverage of the self-checking mechanisms of the communication servers, and (2) to test the properties that characterize the service provided by the atomic multicast protocol in the presence of faults.
The testbed that has been developed to carry out the fault-injection experiments is described, and the major results are presented and analyzed. It is concluded that the fault-injection test sequence has evidenced the limited performance of the self-checking mechanisms implemented on the tested NAC (network attachment controller) and justified (especially for the main board) the need for the improved self-checking mechanisms implemented in an enhanced NAC architecture using duplicated circuitry",1990,0, 235,Algorithm-based fault detection for signal processing applications,"The increasing demands for high-performance signal processing along with the availability of inexpensive high-performance processors have resulted in numerous proposals for special-purpose array processors for signal processing applications. A functional-level concurrent error-detection scheme is presented for such VLSI signal processing architectures as those proposed for the FFT and QR factorization. Some basic properties involved in such computations are used to check the correctness of the computed output values. This fault-detection scheme is shown to be applicable to a class of problems rather than a particular problem, unlike the earlier algorithm-based error-detection techniques. The effects of roundoff/truncation errors due to finite-precision arithmetic are evaluated. It is shown that the error coverage is high with large word sizes",1990,0, 236,Software-reliability engineering: technology for the 1990s,"It is argued that software engineering is about to reach a new stage, the reliability stage, that stresses customers' operational needs and that software-reliability engineering will make this stage possible. Software-reliability engineering is defined as the applied science of predicting, measuring, and managing the reliability of software-based systems to maximize customer satisfaction. The basic concepts of software-reliability engineering and the reasons why it is important are examined. The application of software-reliability engineering at each stage of the life cycle is described.",1990,0, 237,Microprocessor-based relay for protecting power transformers,"The authors describe the design, implementation and testing of a microprocessor-based relay for protecting single-phase and three-phase transformers. The relay implements algorithms that use nonlinear models of transformers to detect winding faults. The algorithms, hardware and software are also briefly described. The performance of the relay is checked in the laboratory. The testing procedure and some test results are also presented",1990,0, 238,Practice of quality modeling and measurement on software life-cycle,"The authors introduce quality metrics into the quantitative software quality estimation technique, embracing the quality estimate of design, as well as of the source code, in studying a quality quantification support system. They outline a quality quantification technique for this system, describe examples of both its application to actual projects and its evaluation, and consider its relationship to conventional techniques for estimate indexing of T.J. McCabe (IEEE Trans. Softw. Eng., vol.SE-2, no.4, 1976) and M.H.
Halstead (Elements of Software Science, North Holland, NY, 1977)",1990,0, 239,Getting started on metrics-Jet Propulsion Laboratory productivity and quality,"A description is given of the SPA (Software Product Assurance) Metrics Study, part of an effort to improve software quality and the productivity and predictability of software development. The first objective was to collect whatever data could be found from as many projects for which they were still available. Data were assembled on the basis of four basic parameters: source lines of code, dollars, work years, and defects. By using these four basic parameters, it was possible to construct quality and productivity baselines. Quality was defined as the number of defects per thousand lines of source code. Productivity was defined in two ways: dollars per source line of code and source lines of code per work month. Preliminary results of this study are presented",1990,0, 240,On the assessment of safety-critical software systems,An introduction to the assessment of safety-critical software systems is presented. Particular attention is given to the role of process models,1990,0, 241,An algorithm for diagnostics with signature analyzer,"An algorithm for fault diagnosis that makes use of the information from a faulty signature is presented. The idea is to search the likely error locations before the tests are performed. The method reduces the number of tests required to diagnose the errors with the probability of aliasing. Such probability is always smaller than that of error detection in signature analysis. When matching tests are difficult or impossible, the method provides an estimate of where errors that caused the incorrect signature might have occurred. Also, the case of `don't cares' at the input sequence of signature analysis is discussed. The algorithm can be readily implemented in software with the faulty signature as the only input variable. The proposed fault diagnostic scheme has an advantage over exhaustive testing in that it leads to fault isolation in a homing-in manner",1990,0, 242,Test program for Honeywell/DND helicopter integrated navigation system (HINS),"The advanced development model (ADM) for the helicopter integrated navigation system (HINS) is built for the Canadian Department of National Defence and tested at the Honeywell Advanced Technology Centre (ATC). The system blends the complementary strengths of its component sensors, resulting in fast alignment and optimum navigation accuracy. A failure detection, isolation, and reconfiguration functionality monitors sensor health, identifies failed components, and automatically reconfigures the system to optimally integrate the remaining components, thus providing a graceful degradation of performance in the event of a sensor failure. Both the test program approach and the test results are discussed. The testing has given the HINS team a high degree of confidence in the navigation system design and in the software that implements the sensor blending algorithms in the real-time system",1990,0, 243,Flight evaluation of the Integrated Inertial Sensor Assembly (IISA) on a helicopter,"After successful flight test evaluation of the integrated inertial sensor assembly (IISA) on an F-15 aircraft, the system was installed onboard a Blackhawk helicopter at Wilmington Airport in Delaware. An overview of the flight test evaluation conducted on the Blackhawk is presented.
It is concluded that all program objectives were successfully demonstrated and proved that the IISA system can be used for helicopter applications. The IISA Helicopter Demonstration Program consisted of two phases. During the first phase, vibration data were obtained to quantify the helicopter's vibration environment. Based on these data, IISA's digital flight control filters were modified for flight control signal quality evaluations during the second phase. During the flight tests, no evidence of body bending modes or local structural vibration degrading IISA's performance was found. The flight test data indicate that IISA's flight control signals have the necessary dynamic range to satisfy future helicopter fly-by-wire flight control systems. IISA's redundancy management software worked flawlessly during the insertion of hardover failures. From the standpoint of flight safety IISA exhibits full QUAD redundancy",1990,0, 244,"Failure management in spatio-temporal redundant, integrated navigation and flight control reference-systems","Failure management techniques for highly reliable, fault-tolerant inertial reference systems are described. Cost, weight, and power considerations imply the use of a minimum number of inertial sensors in a skewed geometry. Fault-tolerant hardware performance is obtained by spatially separated channels with a perceptron-type information flow. Data diversity in temporally separated software channels yields software fault tolerance. Advanced vector space procedures for fault detection, localization, masking, and dynamic system reconfiguration permit safe and quick response, yielding minimal data and recovery latency",1990,0, 245,PAALS: precision accelerometer alignment and leveling system,"A description is given of the precision accelerometer alignment and leveling system (PAALS), a high-performance microcontroller-corrected accelerometer triad that outputs acceleration (ft/s²) via MIL-STD-1553BN communications. Combined tight-packaging, high-reliability, and high-performance requirements were the main design drivers. The key system elements are QA-700 Accelerometers, an Intel 80C 196 microcontroller, custom digital ASIC (application-specific integrated circuit), custom hybrid, and dual redundant MIL-STD-1553B communication components. For high performance, the QA-700 accelerometers are thermally modeled (third-order) for bias, scale factor, and alignment. The hybrid (three digitizers) is thermally modeled (third-order) for scale factor and offset, as well as input linearity. Extensive FDIRR (fault detection, indication, reporting, and recording) has been incorporated for supportability and maintainability",1990,0, 246,Error recovery in shared memory multiprocessors using private caches,"The problem of recovering from processor transient faults in shared memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution.
Extensions to take error latency into account are presented",1990,0, 247,Depth-first search approach for fault-tolerant routing in hypercube multicomputers,"Using depth-first search, the authors develop and analyze the performance of a routing scheme for hypercube multicomputers in the presence of an arbitrary number of faulty components. They derive an exact expression for the probability of routing messages by way of optimal paths (of length equal to the Hamming distance between the corresponding pair of nodes) from the source node to an obstructed node. The obstructed node is defined as the first node encountered by the message that finds no optimal path to the destination node. It is noted that the probability of routing messages over an optimal path between any two nodes is a special case of the present results and can be obtained by replacing the obstructed node with the destination node. Numerical examples are given to illustrate the results, and they show that, in the presence of component failures, depth-first search routing can route a message to its destination by means of an optimal path with a very high probability",1990,0, 248,Hypertool: a programming aid for message-passing systems,"Programming assistance, automation concepts, and their application to a message-passing system program development tool called Hypertool are discussed. Hypertool performs scheduling and handles the communication primitive insertion automatically, thereby increasing productivity and eliminating synchronization errors. Two algorithms, based on the critical-path method, are presented for scheduling processes statically. Hypertool also generates the performance estimates and other program quality measures to help programmers improve their algorithms and programs",1990,0, 249,Availability evaluation of MIN-connected multiprocessors using decomposition technique,"An analytical technique for the availability evaluation of multiprocessors using a multistage interconnection network (MIN) is presented. The MIN represents a Butterfly-type connection with a 4*4-switching element (SE). The novelty of this approach is that the complexity of constructing a single-level exact Markov chain (MC) is not required. By use of structural decomposition, the system is divided into three subsystems-processors, memories, and MIN. Two simple MCs are solved by using a software package, called HARP, to find the probability of i working processing elements (PEs) and j working memory modules (MMs) at time t. A second level of decomposition is then used to find the approximate number of SEs (x) required for connecting the i PEs and j MMs. A third MC is then solved to find the probability that the MIN will provide the necessary communication. The model has been validated through simulation for up to a 256-node configuration, the maximum size available for a commercial MIN-connected multiprocessor.",1990,0, 250,Failure analysis and modeling of a VAXcluster system,"The authors discuss the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and the VAXcluster, they develop reward models to analyze the impact of failures on the system as a whole. The results show that more than 46% of all failures were due to errors in shared resources.
This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts. Approximately 40% of all failures occur in bursts and involve multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 versus 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model. The VAXcluster system availability is evaluated to be 0.993 over 250 days of operation.",1990,0, 251,Practical application and implementation of distributed system-level diagnosis theory,"A DSD (distributed self-diagnosing) project that consists of the implementation of a distributed self-diagnosis algorithm and its application to distributed computer networks is presented. The EVENT-SELF algorithm presented combines the rigor associated with theoretical results with the resource limitations associated with actual systems. Resource limitations identified in real systems include available message capacity for the communication network and limited processor execution speed. The EVENT-SELF algorithm differs from previously published algorithms by adopting an event-driven approach to self-diagnosability. Algorithm messages are reduced to those messages required to indicate changes in system state. Practical issues regarding the CMU-ECE DSD implementation are considered. These issues include the reconfiguration of the testing subnetwork for environments in which processors can be added and removed. One of the goals of this work is to utilize the developed CMU-ECE DSD system as an experimental test-bed environment for distributed applications.",1990,0, 252,On the performance of software testing using multiple versions,"The authors present analytic models of the performance of comparison checking (also called back-to-back testing and automatic testing), and they use these models to investigate its effectiveness. A Markov model is used to analyze the observation time required for a test system to uncover a fault using comparison checking. A basis for evaluation is provided by developing a similar Markov model for the analysis of ideal checking, i.e. using a perfect (though unrealizable) oracle. Also presented is a model of the effect of comparison checking on a version's failure probability as testing proceeds. Again, comparison checking is evaluated against ideal checking. The analyses show that comparison checking is a powerful and effective technique.",1990,0, 253,Adjudicators for diverse-redundant components,"The authors define the adjudication problem, summarize the existing literature on the topic, and investigate the use of probabilistic knowledge about error/faults in the subcomponents of a fault-tolerant component to obtain good adjudication functions. They prove the existence of an optimal adjudication function, which is useful both as an upper bound on the probability of correctly adjudged obtainable output and as a guide for design decisions",1990,0, 254,A comparison of voting strategies for fault-tolerant distributed systems,"The problem of voting is studied for both the exact and inexact cases. Optimal solutions based on explicit computation of condition probabilities are given. The most commonly used strategies, i.e.
majority, median, and plurality are compared quantitatively. The results show that plurality voting is the most powerful of these techniques and is, in fact, optimal for a certain class of probability distributions. An efficient method of implementing a generalized plurality voter when nonfaulty processes can produce differing answers is also given",1990,0, 255,Cyclomatic complexity density and software maintenance productivity,"A study of the relationship between the cyclomatic complexity metric (T. McCabe, 1976) and software maintenance productivity, given that a metric that measures complexity should prove to be a useful predictor of maintenance costs, is reported. The cyclomatic complexity metric is a measure of the maximum number of linearly independent circuits in a program control graph. The current research validates previously raised concerns about the metric on a new data set. However, a simple transformation of the metric is investigated whereby the cyclomatic complexity is divided by the size of the system in source statements, thereby determining a complexity density ratio. This complexity density ratio is demonstrated to be a useful predictor of software maintenance productivity on a small pilot sample of maintenance projects",1991,0, 256,Fault-tolerant round-robin A/D converter system,"A robust A/D converter system that requires much less hardware overhead than traditional modular redundancy approaches is described. A modest amount of oversampling generates information that is exploited to achieve fault tolerance. A generalized likelihood ratio test is used to detect the most likely failure and also to estimate the optimum signal reconstruction. The error detection and correction algorithm reduces to a simple form and requires only a slight amount of hardware overhead. A derivation of the algorithm is presented, and modifications that lead to a realizable system are discussed. The authors then evaluate overall performance through software simulations",1991,0, 257,Designing a controller that works: using formal techniques in robotic systems,"The size and complexity of robot controllers is such that it is impossible to predict their performance by conventional means. Debugging and system maintenance are major problems which have only partial solutions. In conjunction with careful structuring, the authors have used formal mathematical techniques (often known generically as formal methods) in designing the software architecture of a mobile robot controller to gain greater understanding of the system and to validate its expected performance. The impetus for this work is a major mobile robot project to provide sensory control in a factory application. In this article, the authors assess the place of formal methods in the cycle of system development.",1991,0, 258,A high performance processor switch based architecture for fault tolerant computing,"A novel fault tolerant switch concept in the context of fault tolerant network architecture is discussed. This intelligent switch provides a non-blocking transmission path for all of its resources of the network architecture such as processor buses, disk subsystems, and its peripherals. In addition, it incorporates concurrent error detection, time-out mechanisms and a sophisticated error detecting protocol between the switch matrix and the resource management units to detect the hardware errors. Upon detection of the error, several recovery techniques are provided to ensure continuous operation of the system.
Additional features of the network architecture for supporting high performance applications include concurrent channel access and priority channel service. The studies show that the fully connected network which supports both graceful degradation and standby sparing techniques exhibits excellent reliability performance and throughput",1991,0, 259,Toward new techniques to assess the software implementation process,"The author presents the concept of software boundaries and their automated detection. He describes an objective, high-level assessment technology to support process control of software development. The technique is largely independent of underlying design and development methods. An example illustrates an automated system partitioning application. The technique analyzes the end product and the delivered software code. By suitable adjustment of the analysis goals, measures, and criteria, the technique can help evaluate the effectiveness of many software implementation methods. The author discusses the technique's potential for increasing confidence in the fault-tolerating properties of a system",1991,0, 260,Parameter value computation by least square method and evaluation of software availability and reliability at service-operation by the hyper-geometric distribution software reliability growth model (HGDM),"The authors explain precisely the idea of the capture-recapture process for software faults in the context of a proposed testing environment and introduce the least square method into the model to estimate the parameter values of the HGDM. For real observed data collected during the service-operational phase, the authors show the applicability of the HGDM in estimating the degree of unavailability of a software system in operation (service). Furthermore, the estimated probability of discovering zero faults as service-operation proceeds can be taken as a reliability measure",1991,0, 261,The use of a distributed vibration monitoring system for on-line mechanical fault diagnosis,"Modern day paper making machines are high speed, high volume manufacturing units that incorporate advanced computer monitoring and control systems. New Thames Paper company manufactures high quality fine paper on a single machine. Emphasis has been placed on the importance of predictive maintenance within such continuous process industries. The company has installed condition monitoring which requires a measurement to be taken from a machine and used to indicate the current working condition of that machine. The system uses vibration analysis as a tool that can indicate faults at an early stage and thus reduce downtime. The authors outline the condition monitoring system installed at New Thames Mill and the reasons for its selection",1991,0, 262,Monitoring of distributed systems,"Distributed systems offer opportunities for attaining high performance, fault-tolerance, information sharing, resource sharing, etc. But we cannot benefit from these potential advantages without suitable management functions such as performance management, fault management, security management, etc. Underlying all these management functions is the monitoring of the distributed system. Monitoring consists of collecting information from the system and detecting particular events and states using the collected information. These events and states can be symptoms for performance degradations, erroneous functions, suspicious activities, etc. and are subject to further analysis.
Detecting events and states requires a specification language and an efficient detection algorithm. The authors introduce an event/state specification language based on classical temporal logic and a detection algorithm which is a modification of the RETE algorithm for OPS5 rule-based language. They also compare their language with other specification languages",1991,0, 263,A notion of rule-based software quality engineering,"Engineering of software quality is defined by rules for methods, tools, evaluation, measurement and assessment. The product model used requires at least three layers: application concept or requirements specification, data processing concept or system specification, and realization or programs. For a computer evaluation, assessment and certification of a software system one has to examine and assess all its components described by these layers. Quality is decomposed into evaluation factors and quality factors. Evaluation factors define quality factors; metrics for functional and non-functional requirements are the terms used to define evaluation factors. All these are dependent upon quality objectives and goals. Methods and tools for evaluation, measurement and assessment are also imposed by objectives as they are connected to the factors, methods and tools. Each problem domain is represented by a set of rules. Problem domain dependencies are also expressed by rule sets. This might provide the basis to define a framework for learner controlled instruction on software quality engineering",1991,0, 264,A framework for selecting system design metrics,Specification and design have a decisive influence over the quality of software systems. The article is based on a classification of design aspects. Design decisions that determine the design and specification products are derived by a selection process according to the functional specification and a set of non-functional constraints. The techniques of a CASE tool as a constructive approach and those of software metrics as an analytic approach have been combined to support quality management early in the development process,1991,0, 265,Parameter estimation of the hyper-geometric distribution model for real test/debug data,"The hyper-geometric distribution model (HGDM) has been proposed for estimating the number of faults initially resident in a program at the beginning of the test/debug process. However, the parameters of the hyper-geometric distribution necessary for making the estimation were previously determined by the 3-dimensional exhaustive search and therefore, much time was needed to get the numerical result. The authors demonstrate, using real test/debug data of programs, that the least square sum method can be well applied to the estimation of such parameters of the hyper-geometric distribution model. Thus, the time needed for calculating the estimates can be reduced greatly",1991,0, 266,Reliability models for very large software systems in industry,"BNR, the R&D subsidiary of Northern Telecom and Bell Canada, has one of the largest software systems in the world, with code libraries exceeding 8 million source lines of a high level language. This software is used in the high-end digital switching systems that Northern Telecom markets. Software reliability methods are applied to a major subset of this software to determine if the total number of customer-perceived failures and actual software faults can be predicted before or soon after a new release of such a system. 
These predictions are based on pre-customer testing (α and β) and small field trials. Many of the existing reliability models and methods of parameter estimation currently demonstrated in the literature are compared",1991,0, 267,Realistic assumptions for software reliability models,A definition of reliability appropriate for systems containing significant software that includes trustworthiness and is independent of requirements is stated and argued for. The systems addressed encompass the entire product development process as well as both product and its documentation. Cost incurred as a result of faults are shown to be appropriate as a performance measurement for this definition. This and more realistic assumptions are shown to lead to the use of auto-regressive integrated moving average (ARIMA) mathematical models for the modeling of reliability growth,1991,0, 268,EDF: a formalism for describing and reusing software experience,"One approach to achieve high levels of reuse of software experience is to create a database with information on software products, processes, and measurements. The authors present the Extensible Description Formalism (EDF), a language to describe these databases. EDF is a generalization of the faceted index approach to classification. Objects in EDF can be described in terms of different sets of facets and other object descriptions. Classification schemes are easy to extend by adding new attributes or refining existing ones. EDF has been implemented and used to classify a large software library. The authors present the EDF approach to classification; the development of a classification of data structure operations and packages; and a software defect classification scheme used to describe, explain, and predict defects in data structure packages",1991,0, 269,Metrics evaluation of software reliability growth models,"Numerous software reliability growth models have been proposed. The authors present three metrics which have been helpful in assessing the applicability and predictive validity of these models. These metrics are the relative fitting error metric, the short term predictive validity metric and the long term predictive validity metric. The application of these three metrics is illustrated on estimation of field reliability of telecommunication switching systems",1991,0, 270,On the determination of optimum software release time,One of the important applications of software reliability models is the determination of software release time. The author presents some software release policies and discusses the problem of determination of optimum test time. Both reliability requirements and cost models are considered in obtaining specific release policies. It is noted that acceptable failure intensity should be used as a reliability goal and optimum release policy should be based on sequential approach. Some other interesting software release policies are also reviewed,1991,0, 271,Performability evaluation of CSMA/CD and CSMA/DCR protocols under transient fault conditions,"The authors present the results of an evaluation for the CSMA/CD (carrier sense multiple access with collision detection) protocol and a deterministic protocol under workloads anticipated in an industrial environment. Stochastic activity networks are used as the model type, and simulation is used as the solution method. The results show that the preferred resolution scheme depends on the level of workload anticipated and whether transient faults occur. 
It is seen that stochastic activity networks permit the representation of a relatively complex fault model as well as normal protocol operations. It is shown that, when transient faults are considered, the deterministic collision resolution scheme performs better than the nondeterministic scheme",1991,0, 272,Error/failure analysis using event logs from fault tolerant systems,"A methodology for the analysis of automatically generated event logs from fault tolerant systems is presented. The methodology is illustrated using event log data from three Tandem systems. Two are experimental systems, with nonstandard hardware and software components causing accelerated stresses and failures. Errors are identified on the basis of knowledge of the architectural and operational characteristics of the measured systems. The methodology takes a raw event log and reduces the data by event filtering and time-domain clustering. Probability distributions to characterize the error detection and recovery processes are obtained, and the corresponding hazards are calculated. Multivariate statistical techniques (factor analysis and cluster analysis) are used to investigate error and failure dependency among different system components. The dependency analysis is illustrated using processor halt data from one of the measured systems. It is found that the number of errors is small, even though the measurement period is relatively long. This reflects the high dependability of the measured systems.<>",1991,0, 273,An adaptive distributed system-level diagnosis algorithm and its implementation,"An adaptive distributed system-level diagnosis algorithm, called Adaptive DSD, suitable for local area networks, is presented. Adaptive DSD assumes a distributed network in which nodes perform tests of other nodes and determine them to be faulty or fault-free. Test results conform to the PMC model of system-level diagnosis. Tests are issued from each node adaptively and depend on the fault situation of the network. Adaptive DSD is proved correct in that each fault-free node reaches an accurate independent diagnosis of the fault conditions of the remaining nodes. Furthermore, no restriction is placed on the number of faulty nodes. The algorithm can diagnose any fault situation with any number of faulty nodes. Adaptive DSD is shown to be a considerable improvement over previous efforts including being optimal in terms of the total number of tests and messages required. The use of the algorithm in an actual distributed network environment and the experimentation within that environment are described.<>",1991,0,764 274,A new approach to control flow checking without program modification,"An approach to concurrent control flow checking that avoids performance and software compatibility problems while preserving a high error coverage and a low detection latency is proposed. The approach is called watchdog direct processing. Extensions of the basic method, taking into account the characteristics of complex processors, are also considered. The architecture of a watchdog processor based on the proposed method is described. Implementation results are reported for a watchdog designed for the Intel 80386sx microprocessor.<>",1991,0, 275,Yield analysis for a large-area analog X-ray sensor array,"A small-scale CMOS-based radiographic X-ray image sensor array has been developed for nondestructive test and medical imaging. The authors present an analysis of tradeoffs between yield and area of a scaled up large-area X-ray sensor design. 
The X-ray sensor array can tolerate a low level of faults in the individual pixel cells and these faults can be corrected by imaging software. However, global signal line faults cause X-ray sensor failures. This work models X-ray sensor yield in a 12-layer analog CMOS process for three possible overall defect densities, 1.5 defects/cm^2, 1.0 defects/cm^2, and 0.75 defects/cm^2. It is shown that the X-ray sensor is more manufacturable than a charge coupled device (CCD) array of the same area",1991,0, 276,The Stealth distributed scheduler,"The justification, design, and performance of the Stealth distributed scheduler is discussed. The goal of Stealth is to exploit the unused computing capacity of a workstation-based distributed system (WDS) without undermining the predictability in quality of service that a WDS provides to workstation owners. It is shown that the liberal approach taken by the Stealth distributed scheduler is a promising method of exploiting the vast quantity of unused computing capacity typically present in a WDS, while preserving predictability of service for workstation owners",1991,0, 277,11th International Conference on Distributed Computing Systems (Cat. No.91CH2996-7),Presents the title page of the 11th International Conference on Distributed Computing Systems proceedings record.,1991,0, 278,The theory and algorithm of detecting and identifying measurement biases in power system state estimation,"The theory and algorithm of the detection and identification of measurement biases are developed. A fast and efficient recursive measurement biases estimation identification method is proposed. A set of linearized recursive formulae are developed to recursively calculate measurement residual averages and their variance. Using this algorithm, the long list of suspicious data can be avoided and the computational speed can be increased greatly. This adds a new function of monitoring on the operation of measurement system to the real network state analysis application software in power system control center and can improve the quality of the state estimation and the ability to detect and identify the bad data",1991,0, 279,An automated environment for optimizing fault-tolerant systems designs,"A description is given of the computer-aided Markov evaluation (CAME) program, a computer-aided engineering (CAE) tool that automatically generates Markov models directly from a system description, providing an environment for quickly evaluating probabilistic measures of a fault tolerant system's performance. The CAME program provides an automated environment that aids a designer in systematically optimizing a fault tolerant system design and, in the process, gaining insight and intuition about why the design behaves as it does. The overall processes incorporated in the CAME program are discussed with particular emphasis on its automated model reduction capabilities. An example is shown where the CAME program is used to optimize the design of a fault-tolerant, integrated navigation system",1991,0, 280,Cost effective software quality,"Software quality is decomposed into two complementary aspects. The first aspect is software metrics to show how well the code is currently performing, and at what rate it is improving through test or deployment. The second aspect deals with the roots of software quality in the underlying development process. Specific quality initiatives that have been found to have significant payback are discussed.
The beneficial coupling with metrics is demonstrated",1991,0, 281,A neural network approach to machine vision systems for automated industrial inspection,"A knowledge-based machine vision system for industrial web inspection that is applicable to a broad spectrum of different inspection tasks is proposed. The main function of the machine vision system is to detect undesirable defects that can appear on the surface of the material being inspected. A neural network is used within the blackboard framework for a labeling verification step of the high-level recognition module of this system. As an application, this system has been applied to a lumber inspection problem. It has been successfully tested on a number of boards from several different species. A performance comparison of the neural network with the k-nearest neighbor classifier is also presented",1991,0, 282,Reliability evaluation of subtransmission systems,"A brief description of failure modes encountered in the analysis of subtransmission systems is presented. The concepts discussed are illustrated by application to a system which is sufficiently large that practical factors can be realistically modeled and assessed and also sufficiently small that the effect of sensitivity studies can be easily identified. The computer program SUBTREL, which can be used to consider all the factors which affect the reliability of a subtransmission system, is also discussed. The computer program for the reliability evaluation of subtransmission systems is suitable for analyzing a wide range of networks of sizes normally encountered in actual systems. A wide range of sensitivity studies, including adverse weather effects and high-order outages, can be conducted and the results can be used to make objective decisions",1991,0, 283,Standards for navigable databases: A progress report,"SAE's Database Standards Task Group is addressing digital street map databases to support vehicle navigation. The initial work of the Task Group is to develop a descriptive ""truth-in-labeling"" standard to permit database vendors to consistently and accurately describe their products and to permit application developers to match available databases to their needs. The truth-in-labeling standard consists of definitions to identify database entities, metrics to provide scales against which entity quality can measured, and tests to score the entities along metrics. The general classes of entities being considered are nodes (e.g., roadway intersections), links (e.g., roadway segments), and regions (e.g., cities, parks, shopping malls, etc.). Among the issues being considered are: how to facilitate broadcasts of traffic information to vehicles using navigable databases; approaches to dealing with roads with multiple names; roadway classifications; and coordination with similar efforts in other parts of the world.",1991,0, 284,A technique for improving software robustness to failure,"Software failure modes analysis is a technique designed to minimize the impact of software failures by providing controlled, recovery paths that affect service as little as possible. Currently, the approach throughout the software industry is an ad hoc one in an attempt to ensure good recovery from software design errors. The authors have taken a rigorous approach to ensure high recovery coverage from software design errors as well as providing reliability predictions as input to software architecture decisions. 
A description is presented of the author's experiences with these techniques",1991,0, 285,Applying advanced digital simulation techniques in designing fault tolerant systems,"Protocol, a services organization that is providing simulation solutions for the development of digital systems, is discussed. By leveraging the latest in design simulation technology, Protocol has addressed the problems associated with integrating today's most complex systems. The author describes the capabilities developed by Protocol, and how they can be applied to the development of complex fault-tolerant systems. It is noted that the methodologies and techniques developed and applied by Protocol have proven that anomalies in system designs can be located and corrected early in the development process through the use of gate level system simulation (GLSS). Protocol's concurrent hardware and software enabler (CHASE) provides the ability to debug software running on gate-level models of system hardware, thereby providing a software/hardware integration environment prior to fabrication. GLSS and CHASE together provide powerful capabilities needed by fault tolerant system designer",1991,0, 286,Implementing an initial software metrics program,"It is noted that software metrics is one of the tools that software practitioners have available to provide software measurement and management of software projects. Using software metrics for measurement allows an organization to assess software products to ensure they adhere to original performance requirements continuously throughout a development effort. Using software metrics to assist in the software management process allows the managers to assess the development process for adherence to development plans and schedules. Ultimately, the inherent quality of a software product and the efficiency of a development effort can then be identified, quantified, and improved. The author discusses the importance of using metrics, explains how to implement an initial software measurement program within a software development organization, and introduces a software quality metrics framework for further growth in the area of software measurement",1991,0, 287,Characterization of TFT/LCD arrays,"Thin-film-transistor (TFT) array characterization is important to liquid-crystal display (LCD) design, development, failure analysis, and sorting on a manufacturing line. The authors present characterization highlights obtained using a TFT array tester that can detect, accurately locate, and in many cases identify both line faults and pixel faults in a TFT array. It can provide useful performance information on normally operating pixels.<>",1991,0, 288,Performance and degradation of the (p)a-SiC:H/(i)a-Si:H/(n)a-Si:H solar cell: a computer modeling study,"The computer code AMPS (analysis of microelectronic and photonic structures) has been used to study the performance of the (p)a-SiC:H/(i)a-Si:H/(n)a-Si:H heterojunction solar cell. The authors found initially that AMPS predicts for an ideal (p)a-Si:H/(i)a-Si:H/(n)a-Si:H heterojunction an open-circuit voltage Voc value of about 1 V. However, experimental values of Voc are around 0.8 V without (i)a-SiC:H buffer layers at the p/i interface and around 0.85 V with these buffer layers. 
The authors studied the possible origins of these lower experimentally observed values and have found that although higher concentrations of defect states anywhere in a cell can reduce Voc, the impact of these defect states on Voc and fill factor (FF) is different according to their location",1991,0, 289,Experimental evaluation of the cost effectiveness of software reviews,"A new metric for evaluating the cost effectiveness of technical reviews is described. The proposed metric is based on the degree to which testing costs are reduced by technical reviews. The metric can be interpreted as combining two conventional metrics. Using an experimental evaluation of the conventional metrics and the proposed metric for data collected in an industrial environment, the authors show the validity and usefulness of the proposed metric. In particular, they present a method to estimate a value of the proposed metric by using only the values obtained at review phase",1991,0, 290,Quality-time tradeoffs in simulated annealing for VLSI placement,"A model is presented to characterize the relationship between the best solution (incumbent) found by an iterative algorithm (simulated annealing) and the time spent in achieving it. The target application has been chosen to be the placement of cells on a VLSI chip. The model is used to achieve a tradeoff between solution quality and time spent. This gives an idea of the time at which the iterative algorithm should be terminated when the marginal gain in solution quality is smaller than the marginal increase in cost (or time) spent. Nonlinear regression analysis is used to predict the decrease in time with respect to improvement in solution quality. Experimental results on benchmark circuits are presented to show the errors of run-time prediction compared to a static prediction",1991,0, 291,"Performance, effectiveness, and reliability issues in software testing",The author has identified two problems that need to be overcome in order that some of the powerful testing techniques be used in practice: performance and effectiveness. The testing methods referred to are dataflow and mutation testing,1991,0, 292,Fault-tolerant parallel matrix multiplication with one iteration fault detection latency,"A new algorithm, the ID algorithm, is presented which minimizes the fault-detection latency. In the ID algorithm, a fault is detected as soon as the fault occurs instead of at problem termination. For n^2 processors, the fault-latency time of the ID algorithm is 1/n of that of the checksum algorithm with a run-time penalty of O(n log2 n) in an n×n matrix operation. This algorithm has better performance in terms of error coverage and expected run time in large-scale matrix multiplications such as signal and image processing, weather prediction, and finite-element analysis",1991,0, 293,Detection of localised array fault from near field data,"The technique of localized fault diagnosis in an array antenna system is considered. Commonly available software provides transformation of near zone measured data to the far field pattern in the visible region only. It is shown that the reverse transformation of this far-field pattern in the visible region back to the aperture fails to detect the localized fault.
However, in the case of the forward transform, if the far field pattern is computed not only in the visible region but also well into the invisible region, the detection of the localized fault is found to be possible by doing the reverse transformation of these data, which extends well into the invisible region. Since a commonly occurring fault in a passive array is the phase-only fault, the concept has been validated by a simulated example of this type of fault.<>",1991,0, 294,Application of syndrome pattern-matching approach to fault isolation in avionic systems,"The authors examine several attributes of a fault detection, isolation, and reconfiguration (FDIR) approach that is categorized as syndrome pattern matching. This approach is based on the premise that effective fault isolation algorithms must possess the capability of identifying patterns in the results of many fault detection tests, i.e., the fault syndrome. Examples of tests that constitute the syndrome are comparisons of redundant signals, hardware built-in test results and status indication from other systems or equipment. The syndrome pattern-matching method outlined presents a simple approach to implementing efficient and effective FDIR logic. Performance issues discussed include intermittent fault, multiple sequential failures, and unanticipated syndromes",1991,0, 295,The Office of Naval Research initiative on ultradependable multicomputers and electronic systems,"The US Office of Naval Research initiative in ultradependable multicomputers and electronic systems is focused on the development of novel techniques for achieving high dependability in parallel and distributed systems. Some of the approaches being funded through the initiative and their role in the program as a whole are surveyed. The objective of the program is to achieve greater effectiveness and efficiency (than traditional massive redundancy) through the following: increased understanding of physical failure modes and their effects in real systems; increased sensitivity to the structure, semantics, and requirements of real applications; exploitation of the inherent redundancy in parallel systems; and examination of hardware/software tradeoffs. To achieve these objectives, an integrated program has been constructed around four major components: measurement and modeling of real systems; software-based methods such as robust algorithms; efficient hardware-based methods and real-time systems",1991,0, 296,Hardware assisted real-time rollback in the advanced fault-tolerant data processor,"Rollback recovery offers an efficient method of recovering from transient faults and permanent faults when rollback is combined with spare resource reconfiguration. The hardware macro-rollback technique presented has been implemented in the advanced fault-tolerant data processor (AFTDP), which is a high-performance fault-tolerant shared memory multiprocessor. The architecture discussion focuses on the unique problems of achieving both low overhead and fast recovery in high-throughput cached multiprocessors. Macro-rollback recovery from transient faults and hard macro-rollback from permanent faults are examined. In addition, deadline analysis based on a semi-Markov model is presented",1991,0, 297,Fault tolerant avionics display system,"Discusses the fault-tolerant requirements of avionics display systems, and an open architecture which permits implementing these requirements. The fault tolerant requirements include the classes of hardware and software faults. 
Within each class, the requirements are broken down into fault types the system must tolerate. The designers of both hardware and software must develop an integrated approach to implementing fault detection, damage assessment, fault isolation, and recovery for the fault types. Active matrix (AM) liquid crystal displays (LCDs) used with high-performance graphics and video processors provide the capability needed to achieve these fault-tolerant designs",1991,0, 298,An on-line integrated industrial inspection system for the lighting industry,"Presents a set of knowledge-based algorithms suitable for on-line inspection, with particular reference to lamp inspection problems for the lighting industry. The system was developed to identify not only uncut wires, but also many other faults, such as missing solders on connector pads, insufficient soldering, and solder splashes on the backing disc. The paper describes the problem and identifies the types of faults to be detected by a machine vision system; and presents the methods proposed to solve this specific industrial inspection problem. A model system on which the software algorithms are tested and the performances are evaluated is also discussed",1991,0, 299,IEE Colloquium on `Designing Quality into Software Based Systems' (Digest No.151),The following topics were dealt with: software quality review; software timing requirements; the anticipation of software change; controlling quality with quantified attribute check-lists; models for constructive quality assurance; confidence intervals on performance estimates; modelling timing constraints; systems development collaborative approach; automatic documentation for program maintenance; functional complexity analysis; the effect of incremental delivery on software quality and ensuring the quality and maintainability of software elements,1991,0, 300,Confidence intervals on performance estimates,"Performance estimates show how a computer system architecture will respond to the load placed upon it. Typical estimates required are response times to particular events, system throughputs and loading levels of CPUs and LANs. The purpose of performance estimation is to reduce risk. It aims to provide the best possible estimate of the system performance as cheaply as possible, and to highlight design weaknesses. This paper places emphasis on the practical applications of performance estimation, focusing on the approach taken early in the system life-cycle. Therefore, it is treated as a costed process, which makes definite recommendations on a design. To understand the approach, some basic concepts behind system modelling are laid down",1991,0, 301,A case study in traceability,"In 1987, a major engineering project took place whose aim was to develop a high speed electromechanical device which was to be controlled by some 30 microprocessors. The software development followed the YOURDON method supported by the Teamwork analysis and design tool. The project was not an unqualified success. One part delivered later than predicted, but the deliverables were complete and showed a lower defect frequency than previous products. The other part, by contrast, delivered some two years late after several defeaturing exercises. It was suggested that the difference in productivity was partly due to the fact that the YOURDON guidelines had been followed more closely in the successful part. In addition the successful part had employed quite rigorous progress and quality measurement techniques.
When a new project was begun it was desirable to ensure that progress would be tracked, that all parts of the YOURDON models could be traced to particular requirements and that quality assurance could detect nonconformance to the guidelines efficiently and effectively. The paper describes how the project team went about using the features offered by Teamwork to achieve the required improved control over the analysis and design process",1991,0, 302,A comparison of voting algorithms for n-version programming,"One approach to software fault-tolerance is to develop three versions of a program from a common specification, execute these versions in parallel, and vote on the output. If two versions agree, this majority vote is taken as the correct result. Errors in a single version are thus masked out. If there is no majority, a 3-version failure occurs. In earlier experimental work, groups of versions were tested in 3- version programming mode and the resulting average failure probability compared with the average failure probability of the individual versions. This paper describes the use of 5 alternative voting algorithms and compares the resulting failure probabilities against the failure probability for standard 3-version programming. All algorithms produced an improvement over standard 3-version programming, and one algorithm was sometimes able to produce entirely correct values even when none of the three versions has done so",1991,0, 303,Customer expectations for introduction of a large software product [for 5ESS switch],"Illinois Bell has developed a rigorous set of expectations, including formal evaluation criteria, for a new software release in its network elements. The combined introduction experience of Illinois Bell as the customer and AT&T as the vendor in meeting these expectations during the introduction of the 5E6 software release for the 5ESS switch is discussed. Topics outlined include customer requirements for successful retrofit, resolution process for critical problems, customer view of quality, software quality management prior to release, and metrics to assess improvement results over the in-service software release. This close customer-supplier relationship resulted in improved product delivery processes as well as improved end-customer service after delivery",1991,0, 304,Balancing the benchmark opportunities,"It is noted that benchmarking activities can be focused on two areas, internal and external benchmarks. Depending on the focus, different objectives are achieved. Based on an enterprise's reproducible development and delivery process, internal benchmarks are used to drive continuous improvements for quality and productivity as well as predictions of outcomes. Based on industry models or practices, external benchmarks identify competitive standing and improvement opportunities. As complementary activities, the external benchmark is used to identify new process requirements and the internal benchmark is used to focus the improvement efforts. Using benchmarks in the proper context allows an enterprise to make effective decisions on their software development and delivery processes",1991,0, 305,Pseudoduplication of floating-point addition. A method of compiler generated checking of permanent hardware faults,A method of compiler generated checkpoints for the detection of permanent hardware faults is offered. Some of the floating point addition instructions of a program are used as checkpoints. 
The compiler automatically generates a diverse execution on different data paths (pseudo-duplication). Rounding-modes and fault coverage are investigated. The method can be implemented without additional hardware and with a tolerable reduction in performance.<>,1991,0, 306,Motion-compensated video image compression using luminance and chrominance components for motion estimation,"Summary form only given. A variable rate video image compression system using motion-compensated discrete cosine transform (MCDCT) coding is investigated. This paper focuses on the quality improvement achieved by using the luminance component (Y) and both chrominance components (Cr and Cb) for motion estimation. The design of the MCDCT system is based solely on existing technology. The complete system is currently simulated in software. The simulations of the discrete cosine transform processor and the motion estimation processor are based on the technical specifications of existing SGS-Thomson components. The performance of the MCDCT image compression system is evaluated using video images collected from a shuttle mission. The significant error count is shown to be sensitive to the luminance or color distortions in localized areas in an image sequence that are noticeable to viewers, but do not cause significant variations in the SNR measurement",1991,0, 307,Reliability modeling of the MARS system: a case study in the use of different tools and techniques,"Analytical reliability modeling is a promising method for predicting the reliability of different architectural variants and to perform trade-off studies at design time. However, generating a computationally tractable analytic model implies in general an abstraction and idealization of the real system. Construction of such a tractable model is not an exact science, and as such, it depends on the modeler's intuition and experience. This freedom can be used in formulating the same problem by more than one approach. Such a N-version modeling approach increases the confidence in the results. In this paper, we analyze the MARS architecture with the dependability evaluation tools SHARPE and SPNP, employing several different techniques including: hierarchical modeling, stochastic Petri nets, folding of stochastic Petri nets, and state truncation. The authors critically examine these techniques for their practicability in modeling complex fault-tolerant computer architectures",1991,0, 308,An improved numerical algorithm for calculating steady-state solutions of deterministic and stochastic Petri net models,Introduces an algorithm for calculating steady-state solutions of DSPN models. The described method employs the randomization technique enhanced by a stable calculation of Poisson probabilities. A complete re-design and re-implementation of the appropriate components implemented in the version 1.4 of the software package GreatSPN has lead to significant savings in both computation time and memory space. These benefits are illustrated by DSPN models taken from the literature. The author considers DSPN models for an Er/D/1/K queueing system and a fault-tolerant clocking system. 
These examples show that the model solutions are calculated with significantly less computational effort and a better error control by the algorithm described than by the method implemented in the version 1.4 of the software package GreatSPN,1991,0, 309,Connectionist speaker normalization and its applications to speech recognition,"Speaker normalization may have a significant impact on both speaker-adaptive and speaker-independent speech recognition. In this paper, a codeword-dependent neural network (CDNN) is presented for speaker normalization. The network is used as a nonlinear mapping function to transform speech data between two speakers. The mapping function is characterized by two important properties. First, the assembly of mapping functions enhances overall mapping quality. Second, multiple input vectors are used simultaneously in the transformation. This not only makes full use of dynamic information but also alleviates possible errors in the supervision data. Large-vocabulary continuous speech recognition is chosen to study the effect of speaker normalization. Using speaker-dependent semi-continuous hidden Markov models, performance evaluation over 360 testing sentences from new speakers showed that speaker normalization significantly reduced the error rate from 41.9% to 5.0% when only 40 speaker-dependent sentences were used to estimate CDNN parameters",1991,0, 310,From random testing of hardware to statistical testing of software,"Random or statistical testing consists in exercising a system by supplying to it valued inputs which are randomly selected according to a defined probability distribution on the input domain. The specific role one can expect from statistical testing in a software validation process is pointed out. The notion of test quality with respect to the experiment goal allows the testing time to be adjusted to a target test quality. Numerical results illustrate the strengths and limits of statistical testing as a software validation tool, in the present state-of-the-art",1991,0, 311,Software reliability modelling: achievements and limitations,"An examination is made of the direct evaluation of the reliability of a software product from observation of its actual failure process during operation, using reliability growth modeling techniques described. Conclusions drawn from perfect working are examined. An assessment is also made of indirect sources of confidence in dependability. The author argues that one way forward might be to insist that certain safety-critical system should only be built if the safety case can be demonstrated to rely only on assurable levels of design dependability",1991,0, 312,Dependability prediction of a fault-tolerant microcomputer for advanced control,"The architecture of a reliable microcomputer for advanced control is presented. The microcomputer provides fault-tolerance features by means of spare modules and graceful degradation of the performance level. To quantitatively analyze its main features it is necessary to numerically evaluate three dependability measures: reliability, steady-state availability, and performability. The influence of permanent, intermittent, and transient faults and the effect of local and global repair actions, are emphasized. 
Finally, the main capabilities and limitations of the microcomputer are discussed",1991,0, 313,Fault-tolerance in parallel architectures with crosspoint switches,"Based on an existing IBM/370 switch connected parallel processor (SCPP) prototype implementation, conceptual enhancements in the design for continuous availability are described. These enhancements include the user-specifiable two- to three-fold concurrent execution of processes on nonsynchronized processing units (PUs). The output produced by these processes is compared by software or hardware means. The asynchronism of the execution allows the detection of context-dependent software failures in addition to the detection of hardware errors. These design advances encompass novel hardware and software features aimed at achieving continuous system operation as well as superior information integrity at an improved cost/performance ratio, facilitating a proven crosspoint switch intercommunication mechanism of the SCPP",1991,0, 314,Automated portable instruments for quantitative measurements of nuclear materials holdup and in-process inventories,"Summary form only given. Portable gamma-ray spectroscopy instrumentation has been combined with a versatile mechanical system and sophisticated software package to produce an automated instrument for determination of quantities of nuclear materials deposited in process equipment. The mechanical system enables using high-resolution spectroscopy methods in the variety of measurement locations and detector positions required to perform quantitative assays of nuclear materials holdup and in-process inventories. The software automates the electronics setup, data acquisition, and analysis, output of results, and storage of results and spectra, as well as quality and measurement control tests and compensation for gain drift and rate loss. The analysis, which is based on a systematic method of generalizing the geometries of the nuclear material deposits, provides quantitative assay results at the time of the measurement. The software enables portable measurements to be performed in rapid succession with full utilization of readout and storage capabilities.",1991,0, 315,Predicting faults with real-time diagnosis,The authors discuss a strategy for implementing real-time expert systems when the expert system hardware-software combination is too slow at first analysis. A major aspect of the system is its ability to predict faults. The techniques described have been fully implemented on a large-scale real-time expert system for British Steel at Ravenscraig in Scotland. This application is discussed in detail. The authors outline the requirement for a real-time system and how real-time analysis has been provided in spite of the fact that the computer system and expert system are notionally too slow for real-time input. The heart of the real-time capability is a result of planning ahead for various types of data analysis and providing adequate history management so that the analysis is available at the appropriate time. The authors also discuss the innovative strategy for allowing the system to predict faults based on the actual plant performance,1991,0, 316,An animated interface for X-ray laminographic inspection of fine-pitch interconnect,"While the function of solder joint inspection is to assess joint quality, evolving fine-pitch mounting technologies have made some electronic assemblies impossible to inspect using human-oriented visual techniques.
A group of automated inspection tools has recently been developed in response to this need, one of which uses X-ray laminography in conjunction with very elaborate software. The authors present the work that has been performed to provide an easy-to-use animated graphics-oriented, X-ray laminographic solder joint inspection system interface. The interface has been developed to aid in the study of performance-related interconnect inspection",1991,0, 317,A Fault Tolerant Four Wheel Steering System,"The paper describes an automotive fault tolerant control system applied to a 4WS system for high performance cars. The goal is to study the tradeoff among high performance, safety and low cost implementation.",1991,0, 318,Dynamic oscillations predicted by computer studies,"During the latter part of 1988, a study was begun to review the dynamic stability performance of Georgia Power Company's Plant Scherer. The scope of the study was to identify any operating conditions that might contribute to system oscillations and to examine alternative solutions that would control these oscillations. The study was performed in several phases. The authors briefly discuss the study process which utilized two different software packages for the analysis: dynamic stability studies using time-domain software and eigenvalue analysis using frequency-domain software.<>",1991,0, 319,Analyzing error-prone system structure,"Using measures of data interaction called data bindings, the authors quantify ratios of coupling and strength in software systems and use the ratios to identify error-prone system structures. A 148000 source line system from a prediction environment was selected for empirical analysis. Software error data were collected from high-level system design through system testing and from field operation of the system. The authors use a set of five tools to calculate the data bindings automatically and use a clustering technique to determine a hierarchical description of each of the system's 77 subsystems. A nonparametric analysis of variance model is used to characterize subsystems and individual routines that had either many or few errors or high or low error correction effort. The empirical results support the effectiveness of the data bindings clustering approach for localizing error-prone system structure",1991,0, 320,An empirical comparison of software fault tolerance and fault elimination,"The authors compared two major approaches to the improvement of software-software fault elimination and software fault tolerance-by examination of the fault detection (and tolerance, where applicable) of five techniques: run-time assertions, multiversion voting, functional testing augmented by structural testing, code reading by stepwise abstraction, and static data-flow analysis. The focus was on characterizing the sets of faults detected by the techniques and on characterizing the relationships between these sets of faults. Two categories of questions were investigated: (1) comparison between fault elimination and fault tolerance techniques and (2) comparisons among various testing techniques. The results provide information useful for making decisions about the allocation of project resources, show strengths and weaknesses of the techniques studies, and indicate directions for future research",1991,0, 321,A new DSP method for group delay measurement,"After a review of methods available for group delay measurement and their primary errors, a digital signal processing (DSP) method with excellent accuracy is proposed. 
The method is described, and a comparison is made with other methods. An example using the new method is given. When compared with the methods available, the new method does not involve compromises of the frequency intervals, and it eliminates the defects caused by the assumption of linearity of phase-frequency characteristics, the calculation of the phase-frequency characteristics, and the evaluation of the difference quotient, whose measurement accuracy depends only on the hardware and software used for measurement. It has the advantages of high precision and forthright computation",1991,0, 322,The Past Quarter Century and the Next Decade of Video Tape Recording (Invited),"Since the first commercially successful video tape recorder (VTR) was developed in 1956, many VTRs were developed and put on the commercial market. If the picture quality is almost the same, the tape consumption per hour (TCH) of a commercially successful VTR in its sales year will decrease according to a trend line of one tenth per ten years. If the picture quality improves, the trend line will move toward a higher position on a parallel line. We can forecast the future specification of the VTR as a function of TCH from such a trend chart. The miniaturization of the pitch and wavelength is a motivating force of VTR development. In this manner, high utility, high performance, but low cost VTR was developed and continues to improve. The harmonization of hardware and software is the next important problem. In this paper, a short technical history of VTR, the future of VTR predicted from the trend chart, and the technical motivating force of VTR development are discussed.",1991,0, 323,Design And Performance Of A One-half Mv Rep-rate Pulser,"A rep-rate pulser system is under development for use as an Army user facility. This pulser is entering its final stages of testing prior to delivery. This paper presents results which show the pulser's performance demonstrated to date and describes future upgrades now in progress. The pulser system contains a power supply which energizes a resonant charging circuit which charges a PFN to 1 MV. This power supply can provide up to 300 kW in one second bursts to the pulser. The output of the power unit charges the pulser's 10.5 μF primary capacitor bank to 80 kV in 5 ms. This bank is then discharged into the primary of a 1:13.5 iron-core pulse transformer. The transformer's core is reset by the charging current to the 10.2 μF capacitor bank. A pair of low inductance, simultaneously triggered gas switches discharges the capacitors into the primary winding of the transformer. The transformer secondary resonant-charges a 5 Ω PFN to 1 MV in 7 μs. The stored PFN energy is thus 28 kJ. A single self-breaking output switch discharges the PFN into a low inductance 5 Ω load resistor. The pulser section, extending from the primary capacitors to the resistive dummy load is contained within an oil tank. The pulser was tested in single pulse and low rep-rates. The specified 500 kV load voltage was achieved without breakdown or corona. A 10 to 90% risetime of approximately 30 ns was obtained. The pulsewidth of the flat-top, a critical parameter for this pulser, was 450 ns (within an amplitude range ±5% of peak). The pulser and power sections are interfaced to a Macintosh IIcx computer.
This computer accepts commands while operating under the LabVIEW software system which allows the pre-setting of all required pulser settings, including rep-rate, number of shots, burst width and the amplitude and timing of voltages and currents within the power unit. The LabVIEW control program also incorporates fault detection and display routines.",1991,0, 324,Predicting where faults can hide from testing,"Sensitivity analysis, which estimates the probability that a program location can hide a failure-causing fault, is addressed. The concept of sensitivity is discussed, and a fault/failure model that accounts for fault location is presented. Sensitivity analysis requires that every location be analyzed for three properties: the probability of execution occurring, the probability of infection occurring, and the probability of propagation occurring. One type of analysis is required to handle each part of the fault/failure model. Each of these analyses is examined, and the interpretation of the resulting three sets of probability estimates for each location is discussed. The relationship of the approach to testability is considered.",1991,0, 325,"Trends in measurement, estimation, and control (software engineering)","As an expert in quantitative aspects of software management, the author shares his view of where the industry is and where it is going. He identifies a few basic measures that have been used successfully in management and describes the direction that measurement is being driven by the pressure of total quality management.",1991,0, 326,An experimental evaluation of software redundancy as a strategy for improving reliability,"The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is discussed. The effectiveness of multiversion software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on 20 versions of an aerospace application developed and independently validated by 60 programmers from 4 universities. Descriptions of the application and development process are given, together with an analysis of the 20 versions",1991,0, 327,Analyzing partition testing strategies,"Partition testing strategies, which divide a program's input domain into subsets with the tester selecting one or more elements from each subdomain, are analyzed. The conditions that affect the efficiency of partition testing are investigated, and comparisons of the fault detection capabilities of partition testing and random testing are made. The effects of subdomain modifications on partition testing's ability to detect faults are studied",1991,0, 328,A validity measure for fuzzy clustering,"The authors present a fuzzy validity criterion based on a validity function which identifies compact and separate fuzzy c-partitions without assumptions as to the number of substructures inherent in the data. This function depends on the data set, geometric distance measure, distance between cluster centroids and more importantly on the fuzzy partition generated by any fuzzy algorithm used. The function is mathematically justified via its relationship to a well-defined hard clustering validity function, the separation index for which the condition of uniqueness has already been established.
The performance of this validity function compares favorably to that of several others. The application of this validity function to color image segmentation in a computer color vision system for recognition of IC wafer defects which are otherwise impossible to detect using gray-scale image processing is discussed",1991,0, 329,Power system fault detection and state estimation using Kalman filter with hypothesis testing,An algorithm for detecting power system faults and estimating the pre and post-fault steady state values of the voltages and currents is described. The proposed algorithm is based on the Kalman filter and hypothesis testing. It is shown that a power system fault is ideally suited for single sample hypothesis testing. The performance of the proposed technique is examined in connection with the protection of parallel transmission lines. The proposed technique avoids many problems of parallel line protection. The simplicity and the avoidance of complicated software and hardware in addition to the stability of the relay under different switching conditions are some of the features of the proposed technique. Studies performed on a long double-circuit transmission line with and without series compensation show a trip time in all cases of around 5 ms,1991,0, 330,The KAT (knowledge-action-transformation) approach to the modeling and evaluation of reliability and availability growth,"An approach for the modeling and evaluation of reliability and availability of systems using the knowledge of the reliability growth of their components is presented. Detailed models of reliability and availability for single-component systems are derived under much weaker assumption than usually considered. These models, termed knowledge models, enable phenomena to be precisely characterized, and a number of properties to be deduced. Since the knowledge models are too complex to be applied in real life for performing predictions, simplified models for practical purposes (action models) are discussed. The hyperexponential model is presented and applied to field data of software and hardware failures. This model is shown to be comparable to other models as far as reliability of single-component systems is concerned: in addition, it enables estimating and predicting the reliability of multicomponent systems, as well as their availability. The transformation approach enables classical Markov models to be transformed into other Markov models which account for reliability growth. The application of the transformation to multicomponent systems is described",1991,0, 331,Current antenna research at K. U. Leuven,"Four antenna research projects at the Katholieke Universiteit Leuven are described. In the first project, four different methods for the transformation of plane-polar near-field data to far-field antenna characteristics have been compared, and the method with the best performance (accuracy versus computer time) has been implemented on an HP 1000 and on a VAX under VMS. In the second project, a test method has been developed to detect faulty columns in three-dimensional array antennas. The method is based on a reconstruction technique, and has been implemented in an air-surveillance secondary radar system. In the third project, the transmission-line model developed for rectangular microstrip antennas has been implemented and verified for linear and planar arrays. The software package runs on a PC, as well as on VAX workstations. 
In the fourth project, a software package, running on a VAX under VMS, has been developed based on a moment-method approach for the analysis of multiprobe, multipatch configurations in stratified-dielectric media. The procedure developed contains features to improve the convergence, accuracy and efficiency of the moment method.",1991,0, 332,Development of a model for predicting flicker from electric arc furnaces,"Owing to the characteristics of electric-arc phenomena, electric arc furnace loads can result in serious electrical disturbances on a power system. Low level frequency modulation of the supply voltage of less than 0.5% can cause annoying flicker in lamps and invoke public complaints when the frequency lies in the range of 6-10 Hz. There is a need therefore to develop better techniques to predict flicker effects from single and multiple arc furnace loads and the effects of compensation schemes. The authors discuss the nature of arc furnace loads, and describe the instrumentation, field measurements, and signal analysis techniques which were undertaken to develop an arc furnace model. A single-phase furnace model is proposed suitable for use on digital simulation programs such as the EMTP or other appropriate commercial software simulation programs. Representative results based on actual furnace input data are included to show the validity of the model",1992,0, 333,"PROOFS: a fast, memory-efficient sequential circuit fault simulator","The authors describe PROOFS, a fast fault simulator for synchronous sequential circuits. PROOFS achieves high performance by combining all the advantages of differential fault simulation, single fault propagation, and parallel fault simulation, while minimizing their individual disadvantages. The fault simulator minimizes the memory requirements, reduces the number of gate evaluations, and simplifies the complexity of the software implementation. PROOFS requires an average of one fifth the memory required for concurrent fault simulation and runs six to 67 times faster on the ISCAS-89 sequential benchmark circuits",1992,0,192 334,Channels and codes for magnetooptical recording,"The author outlines the technology used in the optical recording channel concentrating on magnetooptical recording and describes some of the major formats, codes, and data detection methods which are being developed. A sophisticated model of the optical recording channel was used as a way to make fair comparisons of these candidates, without skewing the results because of different disk or drive quality, degrees of hardware development, test procedures, etc. The author describes strengths and weaknesses of some of the leading contenders for current and future optical recording channels",1992,0, 335,Fault tolerance aspects of a highly reliable microprocessor-based water turbine governor,"A highly reliable fault tolerant microprocessor-based governor with high performance is being developed. In this system, techniques such as redundancy, fault detection, fault location, fault isolation, and self-recovery have been applied successfully to achieve high reliability. The hardware architecture of this system and its fault tolerance aspects are described. The software structure and the redundancy architecture are presented. The fault detection, the fault location, and the fault isolation techniques applied to this system are studied.
The self-recovery and reconfiguration of the system and the switch-over control based on the fault detection results are discussed",1992,0, 336,Reliability analysis of distributed systems based on a fast reliability algorithm,"The reliability of a distributed processing system (DPS) can be expressed by the analysis of distributed program reliability (DPR) and distributed system reliability (DSR). One of the good approaches to formulate these reliability performance indexes is to generate all disjoint file spanning trees (FSTs) in the DPS graph such that the DPR and DSR can be expressed by the probability that at least one of these FSTs is working. In the paper, a unified algorithm to efficiently generate disjoint FSTs by cutting different links is presented, and the DPR and DSR are computed based on a simple and consistent union operation on the probability space of the FSTs. The DPS reliability related problems are also discussed. For speeding up the reliability evaluation, nodes merged, series, and parallel reduction concepts are incorporated in the algorithm. Based on the comparison of number of subgraphs (or FSTs) generated by the proposed algorithm and by existing evaluation algorithms, it is concluded that the proposed algorithm is much more economic in terms of time and space than the existing algorithms",1992,0, 337,Methodology for validating software metrics,"A comprehensive metrics validation methodology is proposed that has six validity criteria, which support the quality functions assessment, control, and prediction, where quality functions are activities conducted by software organizations for the purpose of achieving project quality goals. Six criteria are defined and illustrated: association, consistency, discriminative power, tracking, predictability, and repeatability. The author shows that nonparametric statistical methods such as contingency tables play an important role in evaluating metrics against the validity criteria. Examples emphasizing the discriminative power validity criterion are presented. A metrics validation process is defined that integrates quality factors, metrics, and quality functions",1992,0, 338,The detection of fault-prone programs,"The use of the statistical technique of discriminant analysis as a tool for the detection of fault-prone programs is explored. A principal-components procedure was employed to reduce simple multicollinear complexity metrics to uncorrelated measures on orthogonal complexity domains. These uncorrelated measures were then used to classify programs into alternate groups, depending on the metric values of the program. The criterion variable for group determination was a quality measure of faults or changes made to the programs. The discriminant analysis was conducted on two distinct data sets from large commercial systems. The basic discriminant model was constructed from deliberately biased data to magnify differences in metric values between the discriminant groups. The technique was successful in classifying programs with a relatively low error rate. While the use of linear regression models has produced models of limited value, this procedure shows great promise for use in the detection of program modules with potential for faults",1992,0, 339,How to assess tools efficiently and quantitatively,"A generic, yet tailorable, five-step method to select CASE tools is presented. 
The guide, developed by the Software Engineering Institute and Westinghouse, includes a tool taxonomy that captures information like the tool name and vendor, release date, what life-cycle phases and methods it supports, key features, and the objects it produces, and includes six categories of questions designed to determine how well a tool does what it was intended to do. The questionnaire comprises 140 questions divided into the categories: ease of use, power, robustness, functionality, ease of insertion, and quality of support.",1992,0, 340,Evaluating and selecting testing tools,"A system using process and forms to give managers and tool evaluators a reliable way to identify the tool that best fits their organization's needs is discussed. It is shown that a tool evaluator must analyze user needs, establish selection criteria (including general criteria, environment-dependent criteria, tool-dependent functional criteria, tool-dependent nonfunctional criteria, and weighting), search for tools, select the tools, and then reevaluate the tools. Forms for recording tool selection criteria, classifying testing tools, profiling tool-to-organization interconnections, creating hardware and software profiles, and creating tool-interconnection profiles are presented. It is argued that, with these forms, evaluators have a reasonably accurate and consistent system for identifying and quantifying user needs, establishing tool-selection criteria, finding available tools, and selecting tools and estimating return on investment.",1992,0, 341,Assessing testing tools in research and education,"An evaluation of three software engineering tools based on their use in research and educational environments is presented. The three testing tools are Mothra, a mutation-testing tool, Asset, a dataflow testing tool, and ATAC, a dataflow testing tool. Asset, ATAC, and Mothra were used in research projects that examined relative and general fault-detection effectiveness of testing methods, how good a test set is after functional testing based on program specification, how reliability estimates from existing models vary with the testing method used, and how improved coverage affects reliability. Students used ATAC and Mothra by treating the tools as artifacts and studying them from the point of view of documentation, coding style, and possible enhancements, solving simple problems given during testing lectures, and conducting experiments that supported ongoing research in software testing and reliability. The strengths, weaknesses, and performances of Asset, Mothra, and ATAC are discussed.",1992,0, 342,Evaluation and comparison of triple and quadruple flight control architectures,"Fault tolerances of a triple flight control architecture are defined and compared to that of a quadruple type, and the impact of worst case failures on the transient response of an aircraft is investigated. Insights in computing fault coverage are discussed. Two coverage models have been used to compute the probability of loss of control (PLOC) and the probability of mission abort (PMA) for candidate architectures. Results indicate that both triple and quadruple architectures can meet the fault tolerance requirements with an acceptable level of transients upon first and second failures.
Triple architectures will require a higher level of fault detection, isolation, and accommodation coverage than quadruple architectures and produce substantially larger transients upon second failure.",1992,0, 343,Virtual checkpoints: architecture and performance,"Checkpoint and rollback recovery is a technique that allows a system to tolerate a failure by periodically saving the entire state and, if an error is detected, rolling back to the prior checkpoint. A technique that embeds the support for checkpoint and rollback recovery directly into the virtual memory translation hardware is presented. The scheme is general enough to be implemented on various scopes of data such as a portion of an address space, a single address space, or multiple address spaces. The technique can provide a high-performance scheme for implementing checkpoint and rollback recovery. The performance of the scheme is analyzed using a trace-driven simulation. The overhead is a function of the interval between checkpoints and becomes very small for intervals greater than 10^6 references. However, the scheme is shown to be feasible for intervals as small as 1000 references under certain conditions",1992,0, 344,Measuring software reliability,"The component concepts of the term 'reliability' are examined to clarify why its measurement is difficult. Two approaches to measuring the reliability of finished code are described. The first, a developer-based view, focuses on software faults; if the developer has grounds for believing that the system is relatively fault-free, then the system is assumed to be reliable. The second, a user-based view more in keeping with the standard IEEE/ANSI definition, emphasizes the functions of the system and how often they fail. The advantages of the failure-based approach are discussed, and various techniques are described. The issues that require further exploration and definition before reliability measurement becomes as straightforward for software as for hardware are identified.",1992,0, 345,Compound-Poisson software reliability model,"The probability density estimation of the number of software failures in the event of clustering or clumping of the software failures is considered. A discrete compound Poisson (CP) prediction model is proposed for the random variable Xrem, which is the remaining number of software failures. The compounding distributions, which are assumed to govern the failure sizes at Poisson arrivals, are respectively taken to be geometric when failures are forgetful and logarithmic-series when failures are contagious. The expected value (μ) of Xrem is calculated as a function of the time-dependent Poisson and compounding distribution based on the failures experienced. Also, the variance/mean parameter for the remaining number of failures, qrem, is best estimated by qpast from the failures already experienced. Then, one obtains the PDF of the remaining number of failures estimated by CP(μ,q). CP is found to be superior to Poisson where clumping of failures exists. Its predictive validity is comparable to the Musa-Okumoto log-Poisson model in certain cases",1992,0, 346,Reliable distributed sorting through the application-oriented fault tolerance paradigm,"A fault-tolerant parallel sorting algorithm developed using the application-oriented fault tolerance paradigm is presented. The algorithm is tolerant of one processor/link failure in an n-cube. The addition of reliability to the sorting algorithm results in a performance penalty.
Asymptotically, the fault-tolerant algorithm is less costly than host sorting. Experimentally it is shown that fault-tolerant sorting quickly becomes more efficient than host sorting when the bitonic sort/merge is considered. The main contribution is the demonstration that the application-oriented fault tolerance paradigm is applicable to problems of a noniterative-convergent nature",1992,0, 347,"Predicting software errors, during development, using nonlinear regression models: a comparative study","Accurately predicting the number of faults in program modules is a major problem in quality control of a large software system. The authors' technique is to fit a nonlinear regression model to the number of faults in a program module (dependent variable) in terms of appropriate software metrics. This model is to be used at the beginning of the test phase of software development. The aim is not to build a definitive model, but to investigate and evaluate the performance of four estimation techniques used to determine the model parameters. Two empirical examples are presented. Results from average relative error (ARE) values suggest that relative least squares (RLS) and minimum relative error (MRE) procedures possess good properties from the standpoint of predictive capability. Moreover, sufficient conditions are given to ensure that these estimation procedures demonstrate strong consistency in parameter estimation for nonlinear models. Whenever the data are approximately normally distributed, least squares may possess superior predictive quality. However, in most practical applications there are important departures from normality: thus RLS and MRE appear to be more robust",1992,0, 348,Producing a very high quality still image from HDMAC transmission,"An experimental system for grabbing high definition television pictures from HDMAC transmission is presented. The hardware part of the system is based on a video sequencer which is used for digitizing, storing, and displaying the HDMAC signal. The HDMAC decoder itself has been implemented using computer software. The system is able to receive and display approximately a 3-second-long sequence of HDTV pictures. The software part of the system employs image enhancement algorithms to produce high quality still images. The interlaced transmission format is converted to progressive and both temporal and spatial noise filtering is applied to improve the image quality in the presence of transmission noise. The algorithms depend on simple motion detection and nonlinear filtering. Examples of the received images and of the performance of the image enhancement algorithms are shown",1992,0, 349,Efficient diagnosis of multiprocessor systems under probabilistic models,"The problem of fault diagnosis in multiprocessor systems is considered under a probabilistic fault model. The focus is on minimizing the number of tests that must be conducted to correctly diagnose the state of every processor in the system with high probability. A diagnosis algorithm that can correctly diagnose these states with probability approaching one in a class of systems performing slightly greater than a linear number of tests is presented. A nearly matching lower bound on the number of tests required to achieve correct diagnosis in arbitrary systems is proved. Lower and upper bounds on the number of tests required for regular systems are presented. A class of regular systems which includes hypercubes is shown to be correctly diagnosable with high probability.
In all cases, the number of tests required under this probabilistic model is shown to be significantly less than under a bounded-size fault set model. These results represent a very great improvement in the performance of system-level diagnosis techniques",1992,0, 350,Traffic routing for multicomputer networks with virtual cut-through capability,"The problem of selecting routes for interprocess communication in a network with virtual cut-through capability while balancing the network load and minimizing the number of times that a message gets buffered is addressed. The approach taken is to formulate the route selection problem as a minimization problem, with a link cost function that depends upon the traffic through the link. The form of this cost function is derived based on the probability of establishing a virtual cut-through route. It is shown that this route selection problem is NP-hard, and an approximate algorithm is developed which tries to incrementally reduce the cost by rerouting traffic. The performance of this algorithm is evaluated for the hypercube and the C-wrapped hexagonal mesh, example networks for which virtual cut-through switching support has been developed",1992,0, 351,Orthogonal defect classification-a concept for in-process measurements,"Orthogonal defect classification (ODC), a concept that enables in-process feedback to software developers by extracting signatures on the development process from defects, is described. The ideas are evolved from an earlier finding that demonstrates the use of semantic information from defects to extract cause-effect relationships in the development process. This finding is leveraged to develop a systematic framework for building measurement and analysis methods. The authors define ODC and discuss the necessary and sufficient conditions required to provide feedback to a developer; illustrate the use of the defect type distribution to measure the progress of a product through a process; illustrate the use of the defect trigger distribution to evaluate the effectiveness and eventually the completeness of verification processes such as inspection or testing; provide sample results from pilot projects using ODC; and open the doors to a wide variety of analysis techniques for providing effective and fast feedback based on the concepts of ODC",1992,0, 352,Predictive modeling techniques of software quality from software measures,"The objective in the construction of models of software quality is to use measures that may be obtained relatively early in the software development life cycle to provide reasonable initial estimates of the quality of an evolving software system. Measures of software quality and software complexity to be used in this modeling process exhibit systematic departures of the normality assumptions of regression modeling. Two new estimation procedures are introduced, and their performances in the modeling of software quality from software complexity in terms of the predictive quality and the quality of fit are compared with those of the more traditional least squares and least absolute value estimation techniques. The two new estimation techniques did produce regression models with better quality of fit and predictive quality when applied to data obtained from two software development projects",1992,0, 353,Projecting software defects from analyzing Ada designs,"Models for projecting software defects from analyses of Ada designs are described.
The research is motivated by the need for technology to analyze designs for their likely effect on software quality. The models predict defect density based on product and process characteristics. Product characteristics are extracted from a static analysis of Ada subsystems, focusing on context coupling, visibility, and the import-export of declarations. Process characteristics provide for effects of reuse level and extent of changes. Multivariate regression analyses were conducted with empirical data from industry/government-developed projects: 16 Ada subsystems totaling 149000 source lines of code. The resulting models explain 63-74% of the variation in defect density of the subsystems. Context coupling emerged as a consistently significant variable in the models",1992,0, 354,Improving the reliability of function point measurement: an empirical study,"One measure of the size and complexity of information systems that is growing in acceptance and adoption is function points, a user-oriented, nonsource line of code metric of the systems development product. Previous research has documented the degree of reliability of function points as a metric. This research extends that work by (a) identifying the major sources of variation through a survey of current practice, and (b) estimating the magnitude of the effect of these sources of variation using detailed case study data from commercial systems. The results of this research show that a relatively small number of factors has the greatest potential for affecting reliability, and recommendations are made for using these results to improve the reliability of function point counting in organizations",1992,0, 355,An entropy-based measure of software complexity,"It is proposed that the complexity of a program is inversely proportional to the average information content of its operators. An empirical probability distribution of the operators occurring in a program is constructed, and the classical entropy calculation is applied. The performance of the resulting metric is assessed in the analysis of two commercial applications totaling well over 130000 lines of code. The results indicate that the new metric does a good job of associating modules with their error spans (averaging number of tokens between error occurrences)",1992,0, 356,Assessing the risk of software failure in a funds transfer application,"Software failure risk, the expected loss resulting from software faults, is a useful measure of software quality that can guide development, maintenance, testing, and operational use of software. The author describes the measurement of software failure risk in a funds transfer system and demonstrates how the external environment and the structure of the software minimize failure risk even when the potential financial exposure is extremely large",1992,0, 357,Submarine pipeline automatic-repair system-an approach to the mission-availability assessment,"The authors present the methodology followed to evaluate the mission availability of a remotely controlled submarine system designed to repair damaged pipelines in deep waters. The goal of the analysis has been to identify the technical and operational solutions with respect mainly to the logistic support system design, to guarantee operational availability and thereby minimize delays. To this purpose, the FMEA and the fault tree analysis technique for reliability assessment of each subsystem have been supplemented by an event tree analysis performed through a computer software package (ADMIRA). 
All sequences of failure and/or success for the whole system have been analyzed during all the operation steps. This analysis has allowed an estimate of the delay for each possible system state. The analysis has shown that the event tree approach is an adequate tool to handle situations where failure occurrences and dependencies among different events have to be taken into account. The analysis has led to improved design, operator skills, and training as well as support system definition",1992,0, 358,Reliability growth for typed defects [software development],"The authors present a reliability growth model for defects that have been categorized into defect types associated with specific stages in the software development process. Modeling the reliability growth of defects for each type separately allows identification of problems in the development process which may otherwise be masked when defects of all types are modeled together. The authors incorporate dependencies, from a failure detection point of view, between defects of two different types into a model for reliability growth. They use typed defect data from three different software projects to validate the model. They find that the defect detection rates are not equal for defects of different types within the project. Since each defect type can be associated with a software development stage, comparing the estimated defect detection rates and the dependency between types provides a basis for feedback on the process",1992,0, 359,Eleventh Annual International Phoenix Conference on Computers and Communications (Cat. No.92CH3129-4),The following topics are dealt with: hypercubes; performance analysis and modeling; advanced architectures; operating systems design; database systems design; object-oriented databases; spread spectrum; modulation; communications theory; connection and interconnection; topology and traffic control; CAD/CAM and image processing; expert system design; real-time computations; reliability and fault tolerance; reconfiguration; software design; programming languages and compilers; satellite communication; networking systems; open systems; and artificial intelligence and applications. Abstracts of individual papers can be found under the relevant classification codes in this or other issues,1992,0, 360,Statistical evaluation of data allocation optimization techniques with application to computer integrated manufacturing,"A statistical approach is used to analyze the performance of data allocation optimization techniques in distributed computer systems. This approach is applied to a computer integrated manufacturing (CIM) system with hierarchical control constraints. The performance of some existing general purpose optimization techniques is compared with heuristics specifically developed for allocating data in a CIM system. Data allocation optimization techniques were actually implemented and performance was evaluated for a 250 node apparel CIM factory. The specific issues addressed include: (1) solution quality in terms of the minimum system wide input/output (I/O) task operating cost, (2) the probability of finding a better solution, (3) the performance of optimization heuristics in terms of their figure-of-merit, and (4) the impact of I/O task activation scenarios on the choice of an optimization technique for solving the CIM data allocation problem.",1992,0, 361,Automatic visual inspection of wood surfaces,"A prototype software system for visual inspection of wood defects has been developed.
The system uses a hierarchical vector connected components (HVCC) segmentation which can be described as a multistage region-growing type of segmentation. The HVCC version used in experiments uses RGB color vector differences and Euclidean metrics. The HVCC segmentation seems to be very suitable for wood surface image segmentation. Geometrical, color and structural features are used in classification. Possible defects are classified using combined tree-kNN classifier and pure kNN-classifier. The system has been tested using plywood boards. Preliminary classification accuracy is 85-90% depending on the type of defect",1992,0, 362,Evaluating and selecting testing tools,"The authors consider how a data collection system that leads a company to successful tool selections must be carefully devised. They discuss the analysis of user needs and the establishment of tool selection criteria. They suggest a number of standards, articles and surveys which will help in the search for tools and provide a procedure for rating them",1992,0,340 363,Assessment of reverse engineering tools: A MECCA approach,"It is a general requirement in the software engineering field that quality and productivity should be taken into consideration. Software development tools can have significant impacts in assuring the quality of a software system and productivity of the development process. In a rapidly evolving engineering field such as software engineering, it is therefore important to select appropriate development tools. This paper discusses the reverse engineering tool as a quality software development tool. An introduction to the reverse engineering tools, their impact on the development environment and their functionality in general are cited before a MECCA-model for assessing specific tools is proposed. The main objective is to introduce the reader to how a reverse engineering tool can be assessed in terms of quality and productivity",1992,0, 364,Quorum-oriented multicast protocols for data replication,"A family of communication protocols, called quorum multicasts, is presented that provides efficient communication services for widely replicated data. Quorum multicasts are similar to ordinary multicasts, which deliver a message to a set of destinations. The protocols extend this model by allowing delivery to a subset of the destinations, selected according to distance or expected data currency. These protocols provide well-defined failure semantics, and can distinguish between communication failure and replica failure with high probability. The authors have evaluated their performance, taking measurements of communication latency and failure in the Internet. A simulation study of quorum multicasts showed that they provide low latency and require few messages. A second study that measured a test application running at several sites confirmed these results",1992,0, 365,Experimental evaluation of certification trails using abstract data type validation,"The authors report on an attempt to assess the performance of algorithms utilizing certification trails on abstract data types. Specifically, they have applied this method to the following problems: heapsort, Huffman tree, shortest path, and skyline. Previous results used certification trails specific to a particular problem and implementation. 
The approach allows certification trails to be localized to data structure modules making the use of this technique transparent to the user of such modules",1992,0, 366,Fault-tolerant concurrent branch and bound algorithms derived from program verification,"One approach for providing fault tolerance is through examining the behavior and properties of the application and deriving executable assertions that detect faults. This paper focuses on transforming the assertions of a verification proof of a program to executable assertions. These executable assertions may be embedded in the program to create a fault-tolerant program. It is also shown how the natural redundancy of the program variables can be used to reduce the number of executable assertions needed. While this approach has been applied to the sequential programming environment, the distributed programming environment presents special challenges. The authors discuss the application of concurrent programming axiomatic proof systems to generate executable assertions in a distributed environment using distributed branch and bound as a model problem",1992,0, 367,A chip solution to hierarchical and boundary-scan compatible board level BIST,"To achieve the full benefit of self test approaches, current self test techniques aimed at chip level must be extended to whole boards and systems. The self test must be hierarchical and compatible to the standardized boundary-scan architecture. A hierarchical boundary-scan architecture is presented together with the necessary controller chip and the synthesis software which make a hierarchical self test of arbitrary depth possible and provide sophisticated diagnosis features in case of failure detection",1992,0, 368,Computer-aided design of avionic diagnostics algorithms,"Describes the application of a computer-aided design tool developed to complement an alternative approach to diagnostics design using the failure signatures or syndromes. The computer-aided design tool can analyze fault-tolerant architectural networks, provide network design guidance, and generate syndrome data for inclusion in the embedded software diagnostic algorithms. The tool accepts as input a graphical description of the system to be analyzed and produces as output a complete failure dependency analysis and the syndrome vectors needed for a table look-up implementation of the diagnostics. The failure syndrome also provides analysis of the theoretical performance potential of the system. Additional benefits are that transient and intermittent failures can be conveniently handled, and the method can be extended to diagnose multiple sequential failure conditions",1992,0, 369,Neural networks for sensor management and diagnostics,"The application of neural network technology to control system input signal management and diagnostics is explored. Control systems for critical plants, where operation must not be interrupted for safety reasons, are often configured with redundant sensing, computing, and actuating elements to provide fault tolerance and ensure the required degree of safety. In one such test case, the validation, selection and diagnosis of redundant sensor signals required 40%-50% of the control system hardware and software, while the control algorithms required less than 20% of these resources. Neural networks are investigated to determine if they can reduce the computational requirement and improve the performance of control system input signal management and diagnostics. 
Four neural networks were trained to perform signal validation and selection of redundant sensors, sensor estimation from the data redundancy among dissimilar sensors, diagnostics, and estimation of unobservable control parameters. The performance of the neural networks is compared against plant models, and the results are discussed",1992,0, 370,Fault detection and isolation in RF subsystems,The authors report on the development of an approach for performing fault detection and isolation in RF subsystems using a highly integrated system-level operation. By integrating diagnostics into system-level operation RF subsystem faults are detected and isolated to the line replaceable module (LRM). This study is conducted in the context of the communication navigation identification (CNI) subsystem and the electronic warfare (EW) subsystem developed by TRW avionics,1992,0, 371,Arithmetic codes for concurrent error detection in artificial neural networks: the case of AN+B codes,"A number of digital implementations of neural networks have been presented in recent literature. Moreover, several authors have dealt with the problem of fault tolerance; whether such aim is achieved by techniques typical of the neural computation (e.g. by repeated learning) or by architecture-specific solutions, the first basic step consists clearly in diagnosing the faulty elements. The present paper suggests adoption of concurrent error detection; the granularity chosen to identify faults is that of the neuron. An approach based on a class of arithmetic codes is suggested; various different solutions are discussed, and their relative performances and costs are evaluated. To check the validity of the approach, its application is examined with reference to multi-layered feed-forward networks",1992,0, 372,Design of an image processing integrated circuit for real time edge detection,"Presents the design of a real time image processing micro-system to detect defects on manufacturing products. The analysis method is based on an edge detection algorithm (differential operators) to select the information related to the structure of the objects present in the image. The edge calculation function has been integrated in a standard cell circuit using a CMOS 1.5 μm process. The ASIC has been implemented and tested in an image processing microsystem with a CCD camera. Results show an improvement of performances (speed, noise, size reduction, system characterization, etc.) in comparison with the first prototypes (software implementation and printed board with standard components) allowing the use of this system in an industrial environment for the real time detection of defects",1992,0, 373,Improving the performance of message-passing applications by multithreading,"Achieving maximum performance in message-passing programs requires that calculation and communication be overlapped. However, the program transformations required to achieve this overlap are error-prone and add significant complexity to the application program. The authors argue that calculation/communication overlap can be achieved easily and consistently by executing multiple threads of control on each processor, and that this approach is practical on message-passing architectures without any special hardware support.
They present timing data for a typical message-passing application, to demonstrate the advantages of the scheme",1992,0, 374,Optimistic message logging for independent checkpointing in message-passing systems,Message-passing systems with a communication protocol transparent to the applications typically require message logging to ensure consistency between checkpoints. A periodic independent checkpointing scheme with optimistic logging to reduce performance degradation during normal execution while keeping the recovery cost acceptable is described. Both time and space overhead for message logging can be reduced by detecting messages that need not be logged. A checkpoint space reclamation algorithm is presented to reclaim all checkpoints which are not useful for any possible future recovery. Communication trace-driven simulation for several hypercube programs is used to evaluate the techniques,1992,0, 375,A framework for process maintenance [software],"The authors present a framework, called the process cycle, which can assist in supporting and controlling process maintenance. The process cycle incorporates engineering management, performance, and improvement of processes by human agents subjected to desirable goals and policy constraints. Process maintenance is supported by incorporating feedback cycles so that processes, goals, and policies can be assessed and improved. In particular the authors address the identification of the reasons why processes change, the overall process change process, and the issue of policy improvement. Furthermore, they assess the applicability of the process cycle framework by relating it to current process maintenance practices. It is pointed out that an implication of using the process cycle for process maintenance is that there is a clear logical separation of concern in the various roles played by people, tools used, activities carried out, goals, and policies specified",1992,0, 376,A replicated monitoring tool,Modeling the reliability of distributed systems requires a good understanding of the reliability of the components. Careful modeling allows highly fault-tolerant distributed applications to be constructed at the least cost. Realistic estimates can be found by measuring the performance of actual systems. An enormous amount of information about system performance can be acquired with no special privileges via the Internet. A distributed monitoring tool called a tattler is described. The system is composed of a group of tattler processes that monitor a set of selected hosts. The tattlers cooperate to provide a fault-tolerant distributed data base of information about the hosts they monitor. They use weak-consistency replication techniques to ensure their own fault-tolerance and the eventual consistency of the data base that they maintain,1992,0, 377,An instrument for concurrent program flow monitoring in microprocessor based systems,"The activity of a microprocessor system can be altered by random failures due to dynamic hardware faults or unforeseen software errors that cannot be detected by common automatic test (ATE) or built-in self-test (BIST) offline systems. The authors describe a method for the detection of a particular class of such faults. This method is based on a monitoring strategy that surveys online the correctness of the program flow in an actual microprocessor system. It is determined whether the microprocessor program counter is correctly updated by following one of the allowable paths of the program flow. 
In program flow, two situations can be considered: sequential flow and deviation from sequentiality. It is possible to subdivide the program into segments in which the instructions are sequentially executed. Such segments are linked to each other by those instructions that cause the deviation from sequentiality. Examples of linking between program segments are given, and the logical structure for the main control activity of the implemented checking system is described",1992,0, 378,Automated reliability prediction?,"With currently available software, it is unrealistic to expect a design engineer to perform an automated reliability prediction during the early design stage. Two major road blocks are: lack of a set of software tools for electrical and thermal stress analyses of an early design, and lack of an industry-wide parts library identifying part type, complexity, and quality levels. Without these tools, a part-count prediction is more appropriate during the early stages of a design. The Naval Avionics Center (NAC) has used a reliability prediction program to generate predictions for the projects listed. The Naval Avionics Center's (NAC) mission is to conduct research, development, engineering, material acquisition, pilot and limited manufacturing, technical evaluation, depot maintenance, and integrated logistics support on assigned airborne electronics (avionics), missile, spaceborne, undersea and surface weapon systems and related equipment. In striving to accomplish this mission, NAC has made a commitment to develop all new designs on their integrated computer-aided engineering environment",1992,0, 379,A methodology for comparing fault tolerant computers,"A general methodology for comparing fault tolerant computers using a hierarchical taxonomy of pertinent issues is described. The uppermost level taxonomy consists of reliability, specialized fault detection, performance, flexibility, and communication issues. Each class is further decomposed into more specific topics. The methodology consists of studying each issue in turn to determine which candidate architecture(s) best support that issue. The results are then combined using a user defined function to rank the architectures. A design space technique is described to aid in drawing the comparisons. The technique is illustrated by considering the fault tolerant capabilities of the JPL hypercube and the Encore Multimax",1992,0, 380,A system for fault diagnosis in electronic circuits using thermal imaging,"In many instances, the electronic state of an electronic circuit card contains insufficient information for correct fault diagnosis. Infrared thermal images, captured during the initial warmup of an electronic circuit card, are employed to enhance the available information and to assist the technician in performing the fault diagnosis. This system has proven beneficial for radar and microwave technologies where probing changes the electronic response of the circuit and signal levels can be extremely small. The development of this system has required the integration of technologies from a diversity of fields including image capture, image compression, image enhancement, neural networks, genetic algorithms, thermodynamics, and automatic test equipment (ATE) software. These technologies and preliminary system performance are discussed.",1992,0, 381,Integrated diagnostic for new French fighter,"An important logistic support function of airborne weapon systems is automatic failure detection and localization.
On the new French fighter aircraft this function is being developed as a whole, taking into account the needs for all maintenance levels. O-level maintenance uses integrated monitoring and testing, which is described. The intermediate level maintenance is performed by a unique test system used for production equipment acceptance and maintenance in the user's operational intermediate workshops. Appropriate data are exchanged between maintenance levels by means of digital media. The main feature has been to apply, to all maintenance levels and as far as possible, the integrated maintenance concept which has been used for automatic diagnostics in Mirage 2000 aircraft taking into account the good results experienced since 1982.",1992,0, 382,Onboard diagnostics of avionics for fault tolerance and maintenance,"Summary form only given. Emerging modular avionics systems will require excellent onboard fault diagnosis, both to support flight critical functions and to provide for cost effective maintenance. The approach to fault isolation discussed makes use of multiple indications of failures, known as failure signatures or syndromes. A computer-aided design tool that can analyze the architecture and generate syndrome data for inclusion in embedded software algorithms has been developed. This tool performs automatic failure dependency analysis using as input a graphical schematic of the system to be analyzed. A further complication arises with the syndrome approach when detection tests fail to operate correctly. A method of mitigating this problem using partial pattern matching techniques on the pattern of test failure indication has also been developed. Other benefits of this approach are outlined.",1992,0, 383,Applying diagnostic techniques to improve software quality,"The author describes the use of diagnostic techniques in the design, testing, and maintenance of software. The goal in applying diagnostic techniques to software is twofold. First, design-for-test techniques can be applied to help ensure that software failures can be diagnosed when they occur. Second, testing software using techniques which are normally applied to the testing of diagnostics can help detect and eliminate software failures before the product is delivered. This means that the diagnostic design techniques are applied to make the software more testable, and that the software is tested more thoroughly using methods which are applied to testing of diagnostic capabilities.",1992,0, 384,Integrated detection and protection schemes for high-impedance faults on distribution systems,"The authors describe two integrated microcomputer-based detection and protection schemes for distribution system faults including high-impedance faults. The main objective of the two schemes is to deal with single-line-to-ground faults of feeders on a distribution substation using delta-delta connection. The first scheme, based on an innovative fault-detection algorithm, identifies a faulty feeder by measuring varied phase angles between phase currents and bus voltages. The second scheme, including several existing detection algorithms, incorporates an expert system for selecting an algorithm suitable for a given application based on their advantages. A simulation software package is built to simulate the implementation of the two schemes and their combination.
Using the simulation package and feeder fault data, simulation tests are made to verify the feasibility of the schemes, the algorithms, and the system structures suggested",1992,0, 385,Inheritance-based object-oriented software metrics,"There are no software metrics based on object-oriented programming languages (OOPLs) developed to help object-oriented software development. A graph-theoretical complexity metric to measure object-oriented software complexity is described. It shows that inheritance has a close relation with the object-oriented software complexity, and reveals that misuse of repeated (multiple) inheritance will increase software complexity and be prone to implicit software errors. An algorithm to support this software metric is presented. Its time complexity is O(n^3)",1992,0, 386,A fail safe node using transputers for railway signalling applications,"A new fail-safe node using transputers for application in railway signaling systems is reported. This fail-safe node provides fault tolerance and safe reaction, which are required for railway signaling systems. The basic scheme of the fail-safe node is discussed, together with the concepts of leadership based on rotation, recovery in the fail-safe node, reconfiguration, and safe shutdown",1992,0, 387,An alternative approach to avionics diagnostics algorithms,A methodology for designing avionics system diagnostics has been developed that uses an automated failure syndrome approach coupled with a partial-pattern matching algorithm to simplify diagnostics design and to provide improved diagnostic performance. The method was validated by performing experiments using a flight control failure simulation and a simulation of the proposed diagnostics. Results demonstrate significantly improved failure isolation performance using diagnostic algorithms of modest size and complexity. The approach described is particularly applicable to emerging highly integrated systems such as modular avionics,1992,0, 388,Test coverage dependent software reliability estimation by the HGD model,"The hyper-geometric distribution model (HGDM) has been presented as a software reliability growth model with the capability to make estimations for various kinds of real observed test-and-debug data. With the HGDM, the discovery and rediscovery of faults during testing and debugging has been discussed. One of the parameters, the `ease of test' function w(i) represents the total number of faults discovered and rediscovered at a test instance. In this paper, the authors firstly show that the ease of test function w(i) can be expressed as a function of the test coverage for the software under test. Test coverage represents a measure of how much of the software has been tested during a test run. Furthermore, the ease of test function w(i) can integrate a user-experience based parameter c that represents the strength of test cases to discover faults. This parameter c allows the integration of information on the goodness of test cases into the estimation process. The application of the HGDM to a set of real observed data clearly shows that the test coverage measure can be integrated directly into the ease of test function w(i) of the model",1992,0, 389,The Schneidewind software reliability model revisited,"A software reliability model based on a nonhomogeneous Poisson process (NHPP) was proposed by N.F. Schneidewind (Sigplan Notices, vol.10, p.337, 1975). Since then, many other NHPP models have been suggested and studied by various authors.
The authors show that several NHPP models can be derived based on the general assumptions made by Schneidewind. Also, they note that in the original paper, there are several interesting approaches worth further consideration. To mention a few, Schneidewind modelled both the software correction process and the software detection process. Methods of weighted least squares were adopted together with the software reliability forecasting based on previous measurements. In this paper, the Schneidewind model is revisited and some further research results are presented",1992,0, 390,A method proposal for early software reliability estimation,"Presents a method proposal for estimation of software reliability before the implementation phase. The method is based upon a formal description technique and on the assumption that it is possible to develop a tool for performing dynamic analysis, i.e. locating semantic faults in the design. The analysis is performed by applying a usage profile as input as well as doing a full analysis, i.e. locating all faults that the tool can find. The tool must provide failure data in terms of time since the last failure was detected. The mapping of the dynamic failures to the failures encountered during statistical usage testing and operation is discussed. The method can be applied either on the software specification or as a step in the development process by applying it on the design descriptions. The proposed method allows for software reliability estimations that can be used both as a quality indicator, and also for planning and controlling resources, development times etc. at an early stage in the development of software systems",1992,0, 391,Control-flow based testing of Prolog programs,"Presents test selection criteria for Prolog programs which are based on control flow. The control flow in Prolog programs is not obvious because of the declarative nature of Prolog. The authors present two types of control flow graphs to represent the hidden control flow of Prolog programs explicitly. A fault model is developed for Prolog programs for guidance on test selection. Test selection criteria are given in terms of the coverage on these control flow graphs. Under the given fault model, the effectiveness of these criteria is analyzed in terms of fault detection capability of the test cases produced with these criteria",1992,0, 392,Analysis of large system black-box test data,"Studies black box testing and verification of large systems. Testing data is collected from several test teams. A flat, integrated database of test, fault, repair, and source file information is built. A new analysis methodology based on the black box test design and white box analysis is proposed. The methodology is intended to support the reduction of testing costs and enhancement of software quality by improving test selection, eliminating test redundancy, and identifying error prone source files. Using example data from AT&T systems, the improved analysis methodology is demonstrated",1992,0, 393,A neural network approach for predicting software development faults,"Accurately predicting the number of faults in program modules is a major problem in the quality control of a large scale software system. In this paper, the use of neural networks as a tool for predicting the number of faults in programs is explored. Software complexity metrics have been shown to be closely related to the distribution of faults in program modules. 
The objective in the construction of models of software quality is to use measures that may be obtained relatively early in the software development life cycle to provide reasonable initial estimates of quality of an evolving software system. Measures of software quality and software complexity to be used in this modeling process exhibit systematic departures from the normality assumptions of regression modeling. This paper introduces a new approach for static reliability modeling and compares its performance in the modeling of software reliability from software complexity in terms of the predictive quality and the quality of fit with more traditional regression modeling techniques. The neural networks did produce models with better quality of fit and predictive quality when applied to one data set obtained from a large commercial system",1992,0, 394,The nature of fault exposure ratio,"The fault exposure ratio K is an important factor that controls the per-fault hazard rate, and hence the effectiveness of software testing. The paper examines the variations of K with fault density, which declines with testing time. Because faults get harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase, suggesting that real testing is more efficient than random testing. Data sets from several different projects are analyzed. Models for the two factors controlling K are suggested, which jointly lead to the logarithmic model",1992,0, 395,Software reliability measurements in N-Version software execution environment,"The author quantitatively examines the effectiveness of the N-Version programming approach. He looks into the details of an academia/industry joint project employing six programming languages, and studies the properties of the resulting program versions. He explains how exhaustive testing was applied to the project, and measures the error probability in different N-Version Software execution configurations. He also applies mutation testing techniques to measure the safety coverage factor for the N-Version software system. Results from this investigation reveal the potential of N-Version software in improving software reliability. Another observation of this research is that the per fault error rate does not remain constant in this computation-intensive project. The error rates associated with each program fault differ from each other dramatically, and they tend to decrease as testing progresses",1992,0, 396,Software quality metrics in space systems,"The French Space Agency uses a special computer environment to collect quality and reliability data on the software it develops, and to analyze and predict the performance potential of that software throughout its life cycle. The authors present the environment, along with the way data is organized and collected during software development and maintenance. They show the impact their work in this field has on complexity metrics and software reliability measurements. They present practical applications and results concerning quite a large number of space projects in a real context",1992,0, 397,Standards and practices for reliable safety-related software systems,"Safety and reliability are of increasing concern as mechanical systems are replaced or upgraded with computer-based software technology as we move into the 21st century. 
The article is the result of an evaluation of domestic and international, industry and government accepted practices and standards relating to software safety and reliability. Standards were evaluated for similarities and variance. There are three areas of interest when discussing the development of safety-related software systems. These are the quality enhancing development processes, safety analysis techniques, and measurement techniques for assessing reliability or safety",1992,0, 398,Providing an empirical basis for optimizing the verification and testing phases of software development,"Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. The authors present an alternative approach for constructing such models that is intended to fulfil specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). The approach to classification is to: measure the software system to be considered; and build multivariate stochastic models for prediction. The authors present experimental results obtained by classifying FORTRAN components into two fault density classes: low and high. They also evaluate the accuracy of the model and the insights it provides into the software process",1992,0, 399,Measuring the field quality of wide-distribution commercial software,"The problem of quantifying the field quality of wide-distribution commercial software is addressed. The authors argue that for this type of software the proper quality metric is an estimate of the number of defects remaining in the code. They observe that the apparent number of defects remaining in this type of software is a function of the number of users as well as the number of actual defects in the code. New releases of commercial software normally consist of some code from prior releases and some new or modified code. The authors continue to discover new defects in the code from prior releases even after a new release is in the field. In fact, the new release code appears to stimulate discovery of defects latent in the `old' code and cause a `next release effect.' Field defect data from several releases of a widely distributed commercial software product are shown to demonstrate this effect",1992,0, 400,Adaptable technique for collecting MWD density logging research data using Windows software,The goal of the Density Project at Develco is to further develop gamma-gamma technology and support a commercial instrument that measures bulk density of Earth formations while drilling. A large database is automatically generated by the data collection system developed in the R&D department. This data collection system also performs several quality assurance functions for the data as they are collected. Commercial software applications are used in lieu of software generated in-house. Real-time analysis can be accomplished with any software application that supports the Dynamic Data Exchange (DDE) features of Windows applications. 
Mathcad was used initially to develop an algorithm for locating photo peaks and subsequently fitting to the peak a Gaussian function,1992,0, 401,Estimating the fault rate function,"Paging activity can be a major factor in determining whether a software workload will run on a given computer system. A program's paging behavior is difficult to predict because it depends not only on the workload processed by the program, but also on the level of storage contention of the processor. A program's fault rate function relates storage allocation to the page fault rate experienced while processing a given workload. Thus, with the workload defined, the fault rate function can be used to see how the program's storage allocation is affected by varying levels of storage contention, represented by varying fault rates. This paper presents a technique to represent program workloads and estimate the fault rate function, and describes how these results can be used in analyzing program performance.",1992,0, 402,OPTIC - A High Performance Image Processor,"The OPTIC is a high performance VLSI device designed for use in general-purpose real-time vision systems. The chip incorporates a fast proprietary (see [1]) shape- and edge-recognition algorithm, which does not require any preprocessing of the grey-level image. This makes the device especially suited to difficult machine vision problems where the image quality varies over time. The full-custom 1.5μm CMOS circuit contains 80,000 transistors and operates at an image sampling rate of 25MHz.",1992,0, 403,The past Quarter-Century and the Next Decade of Videotape Recording,"Since the first commercially successful videotape recorder (VTR) was introduced in 1956, many VTRs have been developed and put on the market. If the picture quality is almost the same, the tape consumption per hour (TCH) of a commercially successful VTR in its sales year will decrease according to a trend line of one-tenth per ten years. In other words, the recording density will be increased by a rule relating to the trend line. If the picture quality is improved, the trend line will move toward a higher TCH position on a parallel line. The future specifications of the VTR as a function of TCH can be predicted from such a trend chart. The miniaturization of the recording pitch and wavelength is a motivating force of VTR development. A high-utility, high-performance, but low-cost VTR was developed and continues to improve. The harmonization of hardware and software is the next important challenge. In this article, a short technical history of the VTR, the future of the VTR predicted from the trend chart, and the technical motivating force of VTR development are discussed.",1992,0, 404,Software Reuse Economics: Cost-benefit Analysis On A Large-scale Ada Project,"The software industry is painfully realizing that a software reuse effort, if not carefully planned and properly carried out, oftentimes becomes an inhibitor rather than a catalyst to software productivity and quality. Despite numerous articles in the areas of domain analysis, component classification, automated component storage/retrieval tools, reuse metrics, etc., only a handful have managed to address the economics of software reuse. In order to be successful, not only must a reuse program be technically sound, it must also be economically worthwhile. After all, reducing costs and increasing quality were the two main factors that drove software reuse into the software mainstream. 
This paper presents a number of reuse economics models used to perform an economic assessment of a reuse effort on a large-scale Ada project: the United States Federal Aviation Administration's Advanced Automation System (FAA/AAS). Reuse economics models, cost/benefit tracking methods, and reuse catalysts/inhibitors are addressed.",1992,0, 405,The software engineering laboratory - an operational software experience factory,"For 15 years, the Software Engineering Laboratory (SEL) has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software and software processes within a production software development environment at the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The SEL comprises three major organizations: NASA/GSFC, Flight Dynamics Division; University of Maryland, Department of Computer Science; and Computer Sciences Corporation, Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents, all of which describe some aspect of the software engineering technology that has been analyzed in the flight dynamics environment at NASA. The studies range from small, controlled experiments (such as analyzing the effectiveness of code reading versus that of functional testing) to large, multiple-project studies (such as assessing the impacts of Ada on a production environment). The organization's driving goal is to improve the software process continually, so that sustained improvement may be observed in the resulting products. This paper discusses the SEL as a functioning example of an operational software experience factory and summarizes the characteristics of and major lessons learned from 15 years of SEL operations.",1992,0, 406,Recent advances in software estimation techniques,"This paper describes some recent developments in the field of software estimation. Factors to be considered in software estimation are discussed, a framework for reconciling those factors is presented, software estimation techniques are categorized, and recent advances are described. The paper concludes with a forecast of likely future trends in software estimation.",1992,0, 407,Getting started on software metrics,"The principles on which the Software Management Metrics system is based are discussed. The system collects metrics at regular intervals and represents current estimates of the work to be done, the work accomplished, the resources used, and the status of products being generated. The lessons learned in the eight years since Software Management Metrics were first imposed on the US Air Force's software contractors are reviewed.",1993,0, 408,Software-reliability growth with a Weibull test-effort: a model and application,"Software reliability measurement during the testing phase is essential for examining the degree of quality or reliability of a developed software system. A software-reliability growth model incorporating the amount of test effort expended during the software testing phase is developed. The time-dependent behavior of test-effort expenditures is described by a Weibull curve. Assuming that the error detection rate to the amount of test effort spent during the testing phase is proportional to the current error content, the model is formulated by a nonhomogeneous Poisson process. Using the model, the method of data analysis for software reliability measurement is developed. 
This model is applied to the prediction of additional test-effort expenditures to achieve the objective number of errors detected by software testing, and the determination of the optimum time to stop software testing for release",1993,0, 409,Simulation analysis of a communication link with statistically multiplexed bursty noise sources,"Variable bit rate (VBR) coding techniques have received great research interest as very promising tools for transmitting bursty multimedia traffic with low bandwidth requirements over a communication link. Statistically multiplexing the multimedia bursty traffic is a very efficient method of maximizing the utilization of the link capacity. The application of computer simulation techniques in analyzing a rate-based access control scheme for multimedia traffic such as voice traffic is discussed. The control scheme regulates the packetized bursty traffic at the user network interface of the link. Using a suitable congestion measure, namely, the multiplexer buffer length, the scheme dynamically controls the arrival rate by switching the coder to a different compression ratio (i.e., changing the coding rate). VBR coding methods can be adaptively adjusted to transmit at a lower rate with very little degradation in the voice quality. Reported results prove that the scheme greatly improves the link performance, in terms of reducing the probability of call blocking and enhancing the statistical multiplexing gain",1993,0, 410,"Performance evaluation of channel-sensitive, stabilized multiple access schemes for land-mobile satellite services using a hybrid simulation tool","A hybrid simulation tool for real-time performance measurements of complete networks is presented. The simulation tool combines the advantages of software and hardware modules. Recorded channel data are included. They allow the reproduction of realistic situations in the laboratory. Synchronization problems and noise are considered. As an example, stabilized slotted ALOHA access protocols were implemented and investigated in land-mobile satellite environments. A slotted ALOHA-based modified channel access strategy for highly unreliable channels is proposed requiring all mobile stations to estimate the link quality before starting a transmission. The different strategies are analyzed by simulation as well as the analytical methods. Transmission parameters are determined and optimized using a combination of analysis and simulation. The results show the power of the simulation tool and the better performance of stabilized slotted ALOHA with link-quality estimation in city channels",1993,0, 411,Evaluating design metrics on large-scale software,"Design-quality metrics that are being developed for predicting the potential quality and complexity of the system being developed are discussed. The goal of the metrics is to provide timely design information that can guide a developer in producing a higher quality product. The metrics were tested on projects developed in a university setting with client partners in industry. To further validate the metrics, they were applied to professionally developed systems to test their predictive qualities. 
The results of applying two design metrics and a composite metric to data from a large-scale industrial project are presented.",1993,0, 412,A novel non-unit protection scheme based on fault generated high frequency noise on transmission lines,"This paper is concerned with outlining a novel technique for the detection of power transmission line fault generated noise by employing an arrangement involving the use of conventionally connected, power line communication signal line traps and CVTs. A specially designed stack tuner (tuned to a certain frequency bandwidth) is connected to the coupling capacitor of the CVT in order to capture the high frequency signals. Digital signal processing is then applied to the captured information to determine whether the fault is inside or outside the protected zone. The paper concludes by presenting results based on extensive simulation studies carried out on a typical 400 kV vertical construction line, using the Electromagnetic Transient Program (EMTP) software. In particular, the performance of the nonunit protection scheme is examined for faults on systems with heavy loading and for high resistance arcing earth faults",1993,0, 413,Performability enhancement of fault-tolerant software,"Model-based performability evaluation is used to assess and improve the effectiveness of fault-tolerant software. The evaluation employs a measure that combines quantifications of performance and dependability in a synergistic manner, thus capturing the interaction between these two important attributes. The specific systems evaluated are a basic realization of N-version programming (NVP) (N =3) along with variants thereof. For each system, its corresponding stochastic process model is constructed in two layers, with performance and dependability submodels residing in the lower layer. The evaluation results reveal the extent to which performance, dependability, and performability of a variant are improved relative to the basic NVP system. More generally, the investigation demonstrates that such evaluations are indeed feasible and useful with regard to enhancing software performability",1993,0, 414,A general framework for developing adaptive fault-tolerant routing algorithms,"It is shown that Cartesian product (CP) graph-based network methods provide a useful framework for the design of reliable parallel computer systems. Given component networks with prespecified connectivity, more complex networks with known connectivity and terminal reliability can be developed. CP networks provide systematic techniques for developing reliable fault-tolerant routing schemes, even for very complex topological structures. 
The authors establish the theoretical foundations that relate the connectivity of a CP network, the connectivity of the component networks, and the number of faulty components; present an adaptive generic algorithm that can perform successful point-to-point routing in the presence of faults; synthesize, using the theoretical results, this adaptive fault-tolerant algorithm from algorithms written for the component networks; prove the correctness of the algorithm; and show that the algorithm ensures following an optimal path, in the presence of many faults, with high probability",1993,0, 415,An analysis of test data selection criteria using the RELAY model of fault detection,"RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which other testing criteria's capabilities can be evaluated. Three test data selection criteria that detect faults in six fault classes are analyzed. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes and points out two major weaknesses of these criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules. Each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. Their second weakness is failure to integrate their proposed rules",1993,0, 416,"Distributed, collaborative software inspection","The Collaborative Software Inspection (CSI) tool, which provides a distributed, structured environment for performing inspections on all software-development products, including specifications, designs, code, and test cases, is described. The inspection environment lets geographically distributed inspection participants meet with people in other cities through workstations at their desks. The current version of all material is accessible online. Inspection products are created online, so secondary data entry to permanent records is not necessary. The inspection information is also available for review and metrics collection. To assess the effectiveness of inspection in the distributive collaborative environment and compare it with face-to-face meetings, a case-study approach with replication logic is presented.",1993,0, 417,Effects of manufacturing errors on the magnetic performance of the SCC dipole magnets,"The authors summarize the work performed to evaluate the effects of manufacturing errors on the field quality of the Superconducting Super Collider dipole magnets. The multipole sensitivities to the conductor displacements were computed. There are two types of magnetic field multipole specifications: systematic (average over the entire ring) and RMS (standard deviation of the multipole distribution). The RMS multipoles induced by random variations in the magnet cross section were predicted using VSA software and the current design tolerances. In addition, the effects of variations in the beam tube, yoke and cryostat dimensions were also analyzed.",1993,0, 418,An empirical study of testing and integration strategies using artificial software systems,"There has been much discussion about the merits of various testing and integration strategies. 
Top-down, bottom-up, big-bang, and sandwich integration strategies are advocated by various authors. Also, some authors insist that modules be unit tested, while others believe that unit testing diverts resources from more effective verification processes. This article addresses the ability of the aforementioned integration strategies to detect defects, and produce reliable systems. It also explores the efficacy of spot unit testing, and compares phased and incremental versions of top-down and bottom-up integration strategies. Relatively large artificial software systems were constructed using a code generator with ten basic module templates. These systems were seeded with known defects and tested using the above testing and integration strategies. A number of experiments were then conducted using a simulator whose validity was established by comparing results against these artificial systems. The defect detection ability and resulting system reliability were measured for each strategy. Results indicated that top-down integration strategies are generally most effective in terms of defect correction. Top-down and big-bang strategies produced the most reliable systems. Results favored neither those strategies that incorporate spot unit testing nor those that do not; also, results favored neither phased nor incremental strategies",1993,0, 419,Provable improvements on branch testing,"This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C1 properly covers C2, then C1 is guaranteed to be better at detecting faults than C2, in the following sense: a test suite selected by independent random selection of one test case from each subdomain induced by C1 is at least as likely to detect a fault as a test suite similarly selected using C2. In contrast, if C1 subsumes but does not properly cover C2, this is not necessarily the case. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques, to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria",1993,0, 420,An optimal graph-construction approach to placing program signatures for signature monitoring,"A new approach produces optimal signature placement for concurrent detection of processor and program-memory errors using signature monitoring. A program control-flow graph, labeled with the overhead for placing a signature on each node and arc, is transformed into an undirected graph. For an order-independent signature function such as an XOR or arithmetic checksum, the undirected graph and a spanning tree algorithm are shown to produce an optimal placement in O(n log β(n, m)) time. Cyclic codes, which are order dependent, are shown to allow significantly lower overhead than order-independent functions. Prior work suggests overhead is unrelated to signature-function type. An O(n) graph-construction algorithm produces an optimal signature placement for cyclic codes. 
Experimental data show that using a cyclic code and horizontal reference signatures, the new approach can reduce average performance overhead to a fraction of a percent for the SPEC89 benchmark suite, more than 9 times lower than the performance overhead of an existing O(n²) placement algorithm",1993,0, 421,A case study of software process improvement during development,We present a case study of the use of a software process improvement method which is based on the analysis of defect data. The first step of the method is the classification of software defects using attributes which relate defects to specific process activities. Such classification captures the semantics of the defects in a fashion which is useful for process correction. The second step utilizes a machine-assisted approach to data exploration which allows a project team to discover such knowledge from defect data as is useful for process correction. We show that such analysis of defect data can readily lead a project team to improve their process during development,1993,0, 422,An evaluation of software design using the DEMETER tool,"The purpose of this study was to investigate the provision of measures suitable for evaluating software designs. These would allow some degree of predictability in estimating the quality of a coded software product. Experiments were conducted to test hypotheses concerning the evaluation of software design metrics related to code complexity. Three medium-sized software projects were used. Metrics from the design stage of these projects were collected by using a software package called DEMETER. This tool was developed specifically to provide the researcher with a usable method of collecting data for calculating design metrics. Data analysis was performed to identify relationships between the measures of design quality and software coded complexity represented by control flow as well as data complexity. The results indicated that reducing the number of interconnections between software units, together with an increase in the relationships of the elements within a module (controlled by the flow of global data), improves the resultant software.",1993,0, 423,"Performance analysis, quality function deployment and structured methods","Quality function deployment (QFD), an approach to synthesizing several elements of system modeling and design into a single unit, is presented. Behavioral, physical, and performance modeling are usually considered as separate aspects of system design without explicit linkages. Structured methodologies have developed linkages between behavioral and physical models before, but have not considered the integration of performance models. QFD integrates performance models with traditional structured models. In this method, performance requirements such as cost, weight, and detection range are partitioned into matrices. Partitioning is done by developing a performance model, preferably quantitative, for each requirement. The parameters of the model become the engineering objectives in a QFD analysis and the models are embedded in a spreadsheet version of the traditional QFD matrices. The performance model and its parameters are used to derive part of the functional model by recognizing that a given performance model implies some structure to the functionality of the system.",1993,0, 424,An examination of fault exposure ratio,"The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate, and hence, the effectiveness of the testing of software. 
The authors examine the variations of K with fault density, which declines with testing time. Because faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially at the end of the test phase. Data sets from several different projects (in USA and Japan) are analyzed. When the two factors, e.g., shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model that is known to have superior predictive capability",1993,0, 425,FINE: A fault injection and monitoring environment for tracing the UNIX system behavior under faults,"The authors present a fault injection and monitoring environment (FINE) as a tool to study fault propagation in the UNIX kernel. FINE injects hardware-induced software errors and software faults into the UNIX kernel and traces the execution flow and key variables of the kernel. FINE consists of a fault injector, a software monitor, a workload generator, a controller, and several analysis utilities. Experiments on SunOS 4.1.2 are conducted by applying FINE to investigate fault propagation and to evaluate the impact of various types of faults. Fault propagation models are built for both hardware and software faults. Transient Markov reward analysis is performed to evaluate the loss of performance due to an injected fault. Experimental results show that memory and software faults usually have a very long latency, while bus and CPU faults tend to crash the system immediately. About half of the detected errors are data faults, which are detected when the system tries to access an unauthorized memory location. Only about 8% of faults propagate to other UNIX subsystems. Markov reward analysis shows that the performance loss incurred by bus faults and CPU faults is much higher than that incurred by software and memory faults. Among software faults, the impact of pointer faults is higher than that of nonpointer faults",1993,0, 426,Parallel processing of fault trees on a locally distributed multiple-processor network,"Large fault trees are evaluated in a distributed fashion by pooling the computing power of several networked LISP processors. Direct evaluation of fault trees of complex systems is computationally intensive and can take a long time when performed on a single processor. An iterative top-down algorithm successively recognizes and reduces known patterns, and decomposes the problem into subtasks at each level of iteration. These subtasks are distributed to multiple machines on the network. Thus, subtree evaluations, which are tackled serially on a single processor, are performed in parallel by the distributed network. The reductions in elapsed computer time afforded by this approach are examined. Questions of optimal resource allocation and of scheme limitations are considered",1993,0, 427,Safety & hazard analysis for software controlled medical devices,"Advanced medical devices are progressively more controlled by software. In many cases software plays a central role in the safety critical devices. Historically, the question of safety has received more attention with respect to hardware than software. However, software can also make a significant contribution to the safeness of a device. In this paper device safety is examined in a system context. 
Since the study of software safety is relatively new, the nature of software failures is examined first. This is followed by touching on the vocabulary of safety analysis. Some methods of analysis, detection, control and containment of hazards are described and some good practices for developing safe medical devices are suggested",1993,0, 428,Identifying and measuring quality in a software requirements specification,Numerous treatises exist that define appropriate qualities that should be exhibited by a well written software requirements specification (SRS). In most cases these are vaguely defined. The paper explores thoroughly the concept of quality in an SRS and defines attributes that contribute to that quality. Techniques for measuring these attributes are suggested,1993,0, 429,Dynamic system complexity,"Both operational environment and source code complexity influence the reliability of software systems. The complex modules of software systems often contain a disproportionate number of faults. However, if, in a given environment, the complex modules are rarely exercised, then few of these faults are likely to become expressed as failures. Different environments will exercise a system's modules differently. A system's dynamic complexity is high in an environment that exercises the system's complex modules with high probability. It is likely that one or more potential scenarios induce inordinately large dynamic complexity. Identifying these scenarios is the first step in assuring that they receive the testing time that they warrant. The paper presents two metrics which evaluate the dynamic complexity of systems and of subsystems in the intended operational environment. The authors analyze two software systems using both dynamic metrics",1993,0, 430,Can we measure software testing effectiveness?,"The paper examines the issues of measuring and comparing the effectiveness of testing criteria. It argues that measurement, in the usual sense, is generally not possible, but that comparison is. In particular, it argues that uniform relations that guarantee that one criterion is better at fault detection than another, according to certain types of well-understood probabilistic measures of effectiveness, are especially valuable",1993,0, 431,Constructing and testing software maintainability assessment models,"Software metrics are used to quantitatively characterize the essential features of software. The paper investigates the use of metrics in assessing software maintainability by presenting and comparing seven software maintainability assessment models. Eight software systems were used for initial construction and calibrating the automated assessment models, and an additional six software systems were used for testing the results. A comparison was made between expert software engineers' subjective assessment of the 14 individual software systems and the maintainability indices calculated by the seven models based on complexity metrics automatically derived from those systems. Initial tests show very high correlations between the automated assessment techniques and the subjective expert evaluations",1993,0, 432,Maintenance metrics for the object oriented paradigm,"Software metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. 
However, software metrics have rarely been studied in the object oriented paradigm. Very few metrics have been proposed to measure object oriented systems, and the proposed ones have not been validated. This research concentrates on several object oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems",1993,0, 433,A framework for evaluation and prediction of metrics program success,"The need for metrics programs as a part of the improvement process for software development and enhancement has been recognized in the literature. Suggestions have been made for the successful implementation of such programs but little empirical research has been conducted to verify these suggestions. This paper looks at the results of three metrics programs in three different software organizations with which the authors have worked, compares the metrics program development process in those organizations with an evaluation framework developed from the literature, and suggests extensions which may be needed to this framework",1993,0, 434,Towards the evaluation of software engineering standards,"The paper presents the results of work undertaken during the first half of a project whose aim is to develop a measurement based approach for assessing software engineering standards. Using a simple model of a standard as a set of requirements the authors found that it is impossible to make an objective assessment of conformance to the majority of requirements. They make recommendations for improving standards. These include the restriction of individual requirements to a specific product, process or resource, with conformance able to be assessed objectively; the restriction of standards to cohesive collections of requirements; a clear statement of benefit. The approach to evaluation of a specific standard has two stages: firstly the rationalisation and decomposition into mini-standards and secondly the (ideal) measurement-based assessment. The initial stages of a major industrial case study being used to assess the approach are described. They show, by the example of the evaluation of a specific mini-standard, that data the company already collects, and which are similar to the type of data collected by other companies, can be used to form the basis of the assessment",1993,0, 435,Nonhomogeneous Poisson process software-debugging models with linear dependence,"Past research in software reliability concentrated on reliability growth of one-version software. This work proposes models for describing the dependency of N-version software. The models are illustrated via a logarithmic Poisson execution-time model by postulating assumptions of dependency among nonhomogeneous Poisson processes (NHPPs) of debugging behavior. Two-version, three-version, and general N-version models are proposed. The redundancy techniques discussed serve as a basis for fault-tolerant software design. The system reliability, related performance measures, and parameter estimation of model parameters when N=2 are presented. Based on the assumption of linear dependency among the NHPPs, two types of models are developed. The analytical models are useful primarily in estimating and monitoring software reliability of fault-tolerant software. 
Without considering dependency of failures, the estimation of reliability would not be conservative",1993,0, 436,"What are we assessing: compliance, adequacy or function?","The article is based on the authors' experience and primarily on experience of assessing safety related programmable electronic systems (PESs). Therefore the thoughts expressed are directed principally towards safety, or lack of it, and PESs. The authors believe that it is possible to identify at least three important components of assessment, addressing three different aspects of what it is that the authors are trying to assess: assessment of compliance (the objective assessment of a system's compliance with a standard or standards); assessment of adequacy (the subjective assessment a system's fitness for purpose); and assessment of function (the validation of a system to assess whether it fulfils its specified functions)",1993,0, 437,A low-power generator-based FIFO using ring pointers and current-mode sensing,"The authors present a submicron full CMOS FIFO (first in, first out) memory generator for ASIC (application-specific integrated circuit) and full-custom applications that combines asynchronous access, shift register pointers, a direct word line comparison technique (DWLC) for flag generation, and current sensing in a regular structured architecture that uses circuit compaction and automated parameteric characterization. It features overread and write protection and retransmit capability at clock rates of 70-100 MHz at 5 V and 40-60 MHz at 3.3 V. The regular structure yields predictable performance and simplifies timing model synthesis for simulators. Special hardware features support full fault coverage with BIST (built-in self-test) algorithms available in hardware or software.<>",1993,0, 438,Achieving software quality through Cleanroom software engineering,"Describes a success story in the use of modern technologies for software quality improvement. The Cleanroom software engineering process for zero-defect software has been successfully applied on development projects in a variety of environments with remarkable quality results. Cleanroom is based on formal, theory-based methods for software specification, design, correctness verification, and statistical quality certification. The authors survey a number of Cleanroom projects and demonstrate achievement of the following objectives: superior quality through Cleanroom software development; successful, cost-effective Cleanroom technology transfer to software development teams; and sharp reduction in effort to maintain and evolve Cleanroom software products",1993,0, 439,Software quality: a market perspective,"The author investigates software quality in a market setting. He points out that imperfections in reputation place downward pressure on a product's price and lower the average quality of the delivered software. Testing, verification, and validation improve quality only when reputation is important to end users, when demand satiation is low, and when the marginal cost of improving quality is low in comparison with the improvement in quality. As firms improve total quality control, testing, verification, and validation become less relevant, until they may actually become counterproductive. Where complex products are produced, high quality may lower marginal costs of software, and this is shown to provide an accelerator effect that enhances other competitive strategies. Warranties can ameliorate the underprovision of quality by improving software reputation. 
But even with warranties, the author shows that software developers will tend to underprovide quality",1993,0, 440,Experimental evidence of sensitivity analysis predicting minimum failure probabilities,"The authors discuss a theoretical statistical technique complementary to black-box testing, called sensitivity analysis. Black-box testing establishes an upper limit on the likely probability of software failure. Software sensitivity analysis establishes a lower limit on the probability of failures that are likely to occur. Together, these estimates can be used to establish confidence that software does not contain faults. Experimental results show that sensitivity analysis predicts a realistic, lower limit on the probability of failure. This limit is lower than can be generally predicted using testing results only. Sensitivity analysis was applied to three versions of NASA's specification for the sensor management of a redundant strapped down inertial measurement unit (RSDIMU). An RSDIMU is a component of a modern inertial navigation system to provide acceleration data that is integrated to determine velocity and position. The programs used in the experiment were originally produced for use in an N-version system for the RSDIMU",1993,0, 441,System for defect detection on material,"A hardware and software solution allowing, in a given experimental environment, defect detection during textile manufacture is defined. The method is based on detecting breaks in the periodicity of the fabric's pattern. If the periodic characteristic of the material is deleted by filtering, the defect is detected more easily",1993,0, 442,Component tolerance and circuit performance: a case study,"The author presents a continuation of work presented at APEC 1992 (see p.28-35), in which a simple circuit was simulated to predict yield in the manufacturing process. The simulation results had some anomalies. This was because the component values were assumed to be uniformly distributed between the tolerance limits. The author updates that assumption, and uses normally distributed component values. Various simulations are run with varying component tolerance and quality levels. From the mean and variance of the simulated results, plots are made that show the specification limits that would have to be set to meet a given yield requirement",1993,0, 443,The application of activity based management to a software engineering environment,"This paper describes a method that supports the use of budgets based on activities and the integration of measurable continuous improvement in the formulation of software development plans. The application of this method fosters the clear definition and documentation of the software development process, outputs, costs and process controls. These definitions enable the evaluation of performance in real-time for quality and cost of outputs, timely deliveries and activity effectiveness instead of waiting for financial results months later. The proper application of this method will result in more accurate estimates processed in days instead of weeks, identification and elimination of non-value added activities, closer correlation between estimates and actual performance and optimum utilization of resources",1993,0, 444,REACT: a synthesis and evaluation tool for fault-tolerant multiprocessor architectures,"A software testbed that performs automated life testing of a variety of multiprocessor architectures through simulated fault injection is discussed. 
It is being developed to meet the need for a generalized simulation tool which can evaluate system reliability and availability metrics while avoiding several of the limitations associated with combinatorial and Markov modeling. Incorporating detailed system, workload, and fault/error models, REACT can be more accurate and easier to use than many dependability prediction tools based on analytical approaches. The authors motivate the development of REACT, describe its features, and explain its use. Example applications for the software are provided, and its limitations are discussed",1993,0, 445,Reliability growth of fielded software,"A reliability growth model is presented that translates quality measures to time-dependent MTBFs (mean times between failures). Software reliability growth stems from the detection and removal of faults that are latent in the code after delivery. The fielded reliability performance is measured over time; the `noise' is smoothed; and a general form for the reliability growth profile is generated. The basic reliability growth profile is enhanced to include the impact of code updates, fault recurrence, and usage. It is concluded that the combination of theoretical analysis and actual field data presented should provide key guidance to make operational reliability decisions and determine the support analysis needs of fielded software",1993,0, 446,Systems reliability and availability prediction and comparison with field data for a mid-range computer,"The author describes reliability and availability prediction based on systems evaluation through Markov modelling and comparison of results with field observations of a sample of mid-range computer systems with mirrored disks. The system uses single level architecture with disk striping. Reliability evaluation revealed that, with mirroring, the disk system and the intermediate FRUs (field replaceable units) between the disk system and the processor will be duplicated. Hence, system failures would be limited to the processor which is not duplicated. The validity of the prediction was checked by field monitoring of the performance of four systems with mirrored disks. Good agreement between the predicted and observed reliability and availability estimates was observed for the system hardware. As a result of disk monitoring, the disk system is totally fault tolerant",1993,0, 447,Faults and unbalance forces in the switched reluctance machine,"The author identifies and analyzes a number of severe fault conditions that can arise in the switched reluctance machine from the electrical and mechanical points of view. It is shown how the currents, torques, and forces may be estimated, and examples are included showing the possibility of large lateral forces on the rotor. The methods used for analysis include finite element analysis, magnetic circuit models, and experiments on a small machine specially modified for the measurement of forces and magnetization characteristics when the rotor is off-centered. The author also describes a computer program (PC-SRD Dynamic) which is used for simulating operation under fault, as well as normal, conditions. The author discusses various electrical configurations of windings and controller circuits, along with methods of fault detection and protective relaying",1993,0,643 448,An automatic finder of field defects in a large A.G. machine,"A series of program modules has been written in C, which can perform various tasks to analyze closed orbit measurements. They have been grouped into a software package which can be used by an operator to find field defects from orbit measurements. The basic algorithms used are well known and simple, based on fitting betatron oscillations. The effort has been put into execution speed and ease of use. New algorithms have been introduced to detect wrong measurements and check the relevance of the kick calculation, which are a decisive step towards automatisation. It is presently possible to localize all relevant dipole field defects in a machine as large as LEP within less than one hour, including the check of the orbit readings",1993,0, 449,PLL subsystem for NSLS booster ring power supplies,"A high-performance digital phase-lock loop subsystem has been designed as part of the upgrade to the magnet power supplies in the booster ring at the National Synchrotron Light Source. The PLL subsystem uses a dedicated floating-point digital signal processor to implement the required filters and the startup, fault-handling, and running logic. The subsystem consists of two loops; the first loop tracks long-term changes in the line frequency, while the second tracks more rapid variations. To achieve the required performance, the order of the loop transfer functions was taken to be five, in contrast to the second- or third-order loops usually used. The use of such high-order loops required design techniques different from those normally used for PLL filter design. The hardware and software elements of the subsystem are described, and the design methodology used for the filters is presented. Performance is described and compared to theoretical predictions",1993,0, 450,MPS VAX monitor and control software architecture,"The new Machine Protection System (MPS) now being tested at the SLAC Linear Collider (SLC) includes monitoring and controlling facilities integrated into the existing VAX control system. The actual machine protection is performed by VME micros which control the beam repetition rate on a pulse-by-pulse basis based on measurements from fault detectors. The VAX is used to control and configure the VME micros, configure custom CAMAC modules providing the fault detector inputs, monitor and report faults and system errors, update the SLC database, and interface with the user. The design goals of the VAX software include a database-driven system to allow configuration changes without code changes, use of a standard TCP/IP-based message service for communication, use of existing SLCNET micros for CAMAC configuration, security and verification features to prevent unauthorized access, error and alarm logging and display updates as quickly as possible, and use of touch panels and X-windows displays for the user interface",1993,0, 451,CEBAF beam viewer imaging software,"This paper discusses the various software used in the analysis of beam viewer images at CEBAF. This software, developed at CEBAF, includes a three-dimensional viewscreen calibration code which takes into account such factors as multiple camera/viewscreen rotations and perspective imaging, and maintaining a calibration database for each unit. Additional software allows single-button beam spot detection, with determination of beam location, width, and quality, in less than three seconds. 
Software has also been implemented to assist in the determination of proper chopper RF control parameters from digitized chopper circles, providing excellent results",1993,0, 452,High resolution measurements of lepton beam transverse distributions with the LEP wire scanners,"A large number of improvements were carried out on the LEP wire-scanners in preparation for the 1992 running period. They include modifications of the monitors mechanics to decrease the vibrations and the heating of the wire by the beam generated electromagnetic fields. Improvements of the detector chain and a software reorganization at the various levels for better noise rejection, improved user interface and "off-line" data analysis capabilities. It is now also possible to acquire the profiles of each of the sixteen circulating bunches, electrons and positrons, during the same sweep. As a consequence of these actions the quality of the collected data is much improved. The results are presented and discussed",1993,0, 453,Total reliability management for telecommunications software,"Total reliability management for telecommunications software is essential in achieving high quality network products and a high level of network integrity. The special characteristics and reliability goals of telecommunications software are reviewed, followed by a description of the components of an effective strategy for comprehensive software reliability management. The programs of fault prevention, fault detection, fault removal, failure detection, failure diagnosis, and failure tolerance are shown to accomplish these goals. In addition, specific and advanced techniques for building these programs are discussed. The relationship of these techniques to the special characteristics of telecommunications software, and their coordination with each other is explained. The total reliability management for telecommunications software requires efforts from both suppliers and service providers",1993,0, 454,Performance trending for management of high-speed packet-switched networks,"Due to technological advances, computer communication networks are becoming increasingly complex. In addition, applications require networks to provide high performance and reliability. Placing such demands on these networks necessitates efficient network management. Many existing systems perform data collection and reporting only, and so operators must use their own expertise to aid the troubleshooting. To provide a dynamic solution, it is necessary to provide a real-time automated control of network operation parameters. The objective of this research is to develop and demonstrate the capability of applying modeling and simulation techniques to maintain the network integrity of a high-speed packet-switched network. In the paper a new technique called performance trending is presented, in which key network parameters are monitored and network failure is predicted. Performance trending is also used for the network performance maintenance by means of analysis and tuning to predict and prevent network failures. A typical packet-switched network is simulated to verify the performance trending for network performance management.
End-to-end delay and bit error rate (BER) are used to illustrate performance trending and their effects on the network performance",1993,0, 455,Highly-constrained neural networks with application to visual inspection of machined parts,"The authors investigate techniques for embedding domain specific spatial invariances into highly constrained neural networks. This information is used to reduce drastically the number of weights which have to be determined during the learning phase, thus allowing application of artificial neural networks to problems characterized by a relatively small number of available examples. As an application of the proposed technique, the problem of optical inspection of machined parts is studied. More specifically, the performance of a network created according to this strategy which accepts images of the parts under inspection at its input and issues at its output a flag which states whether the part is defective, is characterized. The results obtained so far show that such a classifier provides a potentially relevant approach for the quality control of metallic objects since it offers at the same time accuracy and short software development time.",1993,0, 456,CCD camera based automatic dimension measurement and reproduction of 3-D objects,"The paper presents an automatic CCD camera vision based measurement system having capabilities to measure dimensional parameters and reproduce a 3-D object on computer numerical controlled machines (CNC). The operations of image acquisition, digitization, storage and transfer to computer are performed by the vision system. The vision system provides the required coordinates to computer for the movement of machine tools. The CNC machines integrated with the vision system form an automatic object reproduction system. This automation enhances the quality and quantity of production. The associated analysis software employs graphic techniques like edge detection, surface filling and mensuration of regular 3D objects. The linear parameters (i.e. dimensions) of the objects are obtained by area measurement using the surface filling algorithm. The measured dimensions are in the form of coordinates to be input to the CNC part program.",1993,0, 457,A new approach for discriminating between switching and fault generated broadband noise on EHV transmission lines,"A novel technique for the detection of fault generated high frequency noise on transmission lines has been under development within the Power and Energy System Group at Bath University. The technique employs an arrangement involving the use of conventionally connected power line communication signal traps and capacitor voltage transformers (CVTs). This paper examines the performance of the digital fault detection technique in the presence of high frequency signals due to both arcing faults and spurious noise, with a view to improving the reliability and dependability of the technique, particularly in the presence of the latter. The performance is illustrated in relation to high frequency responses attained from a typical 400 kV vertical construction line, under both faults and switching operations on the system; for these purposes, the simulation studies for generating the high frequency phenomenon include the incorporation of very realistic nonlinear arc models.
The well known EMTP software package is used.",1993,0, 458,Comparison between a neural fuzzy system- and a backpropagation-based fault classifiers in a power controller,"A real-time neural fuzzy (NF) power control system is developed and compared with a backpropagation neural network (BNN) system. The objective is to develop computation hardware and software in order to implement the fault classification of a three-phase motor in real-time response. With online training capability, the NF system can be adaptive to the particular characteristics of a particular motor and can be easily modified for the customer's needs in the future. The preprocessing of a BNN-based fault classifier normalizes the magnitude between [-1,1] and transforms the number of samples to 32 for a cycle of waveform. The trained BNN is used to classify faults from the input waveforms. Real-time response is achieved through the use of a parallel processing system and the partition of the computation into parallel processing tasks. Compared with a four-processor BNN system, the NF system requires smaller cost (three processors) and recognizes waveforms faster. Moreover, with the appropriate feature extraction, the NF system can recognize temporally variant spike and chop occurring within a sine waveform",1993,0, 459,Robust FDI systems and H∞-optimization - disturbances and tall fault case,"This paper is dedicated to the design of robust FDI systems by utilizing the H∞-optimization technique. Results are derived based on the optimization of a sensitivity (performance) index. The overall effectiveness of the residual generator is examined in terms of faults vs. uncertainties, with the assumption of a bounded norm for disturbances. The results show that, for a well-conditioned system, a robust residual generator exists for all possible disturbances. This paper discusses the robust design of a tall fault system (i.e. the number of faults is no greater than the number of outputs) with additive disturbances",1993,0, 460,Control reconfiguration in the presence of software failures,"In this paper, we discuss a special approach for software fault tolerance in control applications. A full-function, high-performance, but complex control system is complemented by an error-free implementation of a highly reliable control system of lower functionality. When the correctness of the high-performance controller is in doubt, the reliable control system takes over the execution of the task. An innovative feature of the approach is the disparity between the two control systems, which is used to exploit the relative advantages of the simple/reliable vs. complex/high-performance systems. Another innovative feature is the fault detection mechanism, which is based on measures of performance and of safety of the control system. The example of a ball and beam system is used to illustrate the concepts, and experimental results obtained on a laboratory set-up are presented",1993,0, 461,Automated error detection in multibeam bathymetry data,"For a variety of reasons, multibeam swath sounding systems produce errors that can seriously corrupt navigational charts. To address this problem, the authors have developed two algorithms for identifying subtle outlier errors in a variety of multibeam systems. The first algorithm treats the swath as a sequence of images. The algorithm is based on robust estimation of autoregressive (AR) model parameters. The second algorithm is based on energy minimization techniques.
The data are represented by a weak-membrane or thin-plate model, and a global optimization procedure is used to find a stable surface shape. Both of these algorithms have undergone extensive testing at bathymetric processing centers to assess performance. The algorithms were found to have a probability of detection high enough to be useful and a false-alarm rate that does not significantly degrade the data quality. The resulting software is currently being used both at processing centers and at sea as an aid to bathymetric data processors",1993,0, 462,A parametric method for the evaluation of human-computer interfaces,"The paper proposes a method of interface quality evaluation intended for product selection and based on a checklist. The method is flexible, easily understood and covers all aspects of an interface. Creation of the list rests upon a selection of more than 300 ergonomic criteria drawn from published reports and research articles in the field of interface design and evaluation. The criteria are high-level and relatively independent of the type of interface and technology. Their collection has been organised according to a new synthetic classification scheme which bears upon the functional capabilities and friendliness factors of an interface. Furthermore, the proposed method uses a parametric model to calculate the overall quality of an interface as a function of the values and weights assigned to the criteria and classes. The user can also adjust the model behaviour by modifying its sensitivity parameters. The method of evaluation has been implemented in an electronic calculator",1993,0, 463,Automatic test case generation for Estelle,"An automatic test case generation method for Estelle is proposed. A formal model is introduced to describe the dynamic properties of Estelle specifications so as to verify the difference between the behavior of the specification and the behavior of its models. Based on the difference, an algorithm is presented to produce test cases that can detect such implementation faults. The algorithm can generate test cases not only for single module specifications but also for systems containing multiple modules that run concurrently. In addition, heuristics are suggested to improve the performance of the test case generation process",1993,0, 464,Comparison and optimization of digital read channels using actual data,"Various digital equalization and detection schemes have been proposed for disk drive read channels and simulated as a Lorentzian channel [Moon and Carley, 1990]. With the design of linear recording densities greater than two channel bits per half amplitude pulsewidth (PW50), concerns over nonlinear distortions, such as partial erasure, require channel simulation with techniques other than the linear Lorentzian model. The paper describes the considerations a disk drive designer faces when optimizing the performance of digital read channels with a channel simulator that uses actual data obtained from a disk file environment. This real data channel simulator accomplishes equalization and detection in software using sampled data from a spinstand allowing the flexibility of investigating the performance of different channel schemes without having to build expensive hardware. Sample offtrack data for selected channels at various user (information) densities are shown, along with ontrack Lorentzian simulation. 
Goodness numbers for each channel are presented which approximately minimize bit error rate (BER) in a quick way, providing algorithms for a disk drive to check channel quality during adaptive adjustments",1993,0, 465,A computer hardware configuration expert system: Examination of its software reliability,"Describes the implementation of an expert system called SCAT for a computer hardware configuration task. It assists system engineers in enumerating all the parts necessary to build a computer system and prepares the documents for succeeding tasks. The emphasis is placed on the software reliability metrics and the development management methodology employed in SCAT. To evaluate the software reliability of SCAT, a traditional metric was employed: the number of faults detected at each stage of the test process was counted and compared with the standard value. By analyzing the fault data derived from two versions of SCAT, it was found that the code review for production rules was not as effective as it was for procedural languages. It was also found that there was a possibility that the number of faults embedded in production rules in the coding process was smaller than that in procedural languages",1993,0, 466,An expert system for reviewing commodity substitutions in the consumer price index,"An expert system is being developed to help assure the quality of data in the consumer price index (CPI). The system replicates the reasoning of commodity analysis, determining if a substitute product is comparable to a product priced in the previous month. If comparable, its price can be considered in producing the CPI. The system reasons with expertise directly entered by commodity analysts. To help the commodity analysts serve as their own knowledge engineers, they are provided with a skeletal system to which they add their knowledge of particular product categories. The system was tested on six months of CPI data. Results indicate that commodity analysts can accurately encode their own expertise and use the technology to detect their own oversights. A primary function of the project is to document and refine analyst expertise that is otherwise implicit. This makes the knowledge permanent and represents it in a way that allows it to be applied consistently",1993,0, 467,Structure-based clustering of components for software reuse,"The characterization of the code reuse practices in existing production environments provides fundamental data and lessons for the establishment or improvement of effective reuse-oriented policies, and for the adoption of up-to-date technologies supporting them. The method and results of an experience of metric-aided clustering of software components, aimed at detecting and characterizing implicit reuse of code and reuse potential in a large-scale data processing environment, are presented. Similar functions may in fact be replicated many times, customizing an existing source code component, but this phenomenon may be only partially apparent in the form of explicit reuse. A set of software metrics has been used to create clusters of existing components whose internal structures appear very similar. Functional similarity checks involving human experts were then performed. This was done in the context of a large reuse project, where quantitative software quality indicators are also combined with the feedback collected in pilot groups who know the applications from which the candidate components were extracted.
The potential and limitations of metric support in this field are considered in the discussion of the results obtained",1993,0, 468,Improving the quality of three products through improved testing: A case study,"The keys to efficient testing are to detect problems as early as possible in the testing phase and to try and detect problems before they reach the customer. Three product offerings of a major software company are examined, and recommendations on how the testing and maintenance processes for these products can be improved are given. The rationale behind these recommendations is based on an analysis of the types of errors being reported in each product, when these errors are found, and in which modules they are found",1993,0, 469,A comparative study of predictive models for program changes during system testing and maintenance,"By modeling the relationship between software complexity attributes and software quality attributes, software engineers can take actions early in the development cycle to control the cost of the maintenance phase. The effectiveness of these model-based actions depends heavily on the predictive quality of the model. An enhanced modeling methodology that shows significant improvements in the predictive quality of regression models developed to predict software changes during maintenance is applied here. The methodology reduces software complexity data to domain metrics by applying principal components analysis. It then isolates clusters of similar program modules by applying cluster analysis to these derived domain metrics. Finally, the methodology develops individual regression models for each cluster. These within-cluster models have better predictive quality than a general model fitted to all of the observations",1993,0, 470,A transputers-based data acquisition and processing tool for measurements on power systems,"The authors describe a transputer-based data acquisition and processing system that is used for real time accurate measurements of various quantities which characterize a three-phase system. The measurement system is based on PC-bus data processing boards. The measurements of the quantities of interest are determined starting from the harmonic voltage and current amplitudes and phases obtained by the discrete Fourier transform results. A system architecture is also proposed to detect and record the variations of the power quality. Data relating to the hardware and software structure, and descriptions of the system components and their interactions are presented. The design and operation of the system are reported along with the advantages of using transputers and a PC base system. An example showing the applications of this system is also reported",1993,0, 471,1993 IEEE Instrumentation and Measurement Technology Conference,The following topics are dealt with: microwaves; medical instrumentation; quantization; sensors and transducers; antennas and electromagnetics; metrology and calibration; analog-to-digital converter architecture; system identification; analog-to-digital converter testing; optics and fiber optics; time and frequency analysis; integrated measurement; robotics; digital signal processing; fault diagnosis; neural networks; computer-based measurements and software; material characterization; artificial intelligence (AI) and fuzzy logic; data acquisition; and electronic circuits. 
Abstracts of individual papers can be found under the relevant classification codes in this or other issues,1993,0, 472,Model- and knowledge-based fault detection and diagnosis of gas transmission networks,"This paper describes an expert system for online fault detection and diagnosis of gas transmission networks, combining model- and knowledge-based methods. It consists of a set of hierarchically structured components which include signal processing, Luenberger-type state observation, rule-based knowledge processing as well as an advanced user interface. The diagnosis system was tested with real measurement data from a medium sized gas distribution network. Its real-time capability and effectiveness for basic fault detection purposes was demonstrated by industrial applications",1993,0, 473,Reducing the cost of test pattern generation by information reusing,"A new source of computational saving for test pattern generation, i.e., information reusing, is presented. The proposed technique can make full use of the pattern generation information from the last pattern to derive a set of new tests by means of critical path transitions. By so doing, the fault propagation procedure is no longer required in the next pattern generation process and the line justification procedure is simplified. Experiments using the ISCAS-85 benchmark circuits show that when the technique is used with a deterministic test pattern generation algorithm (DTPG), computational cost is greatly reduced without a substantial increase in test length",1993,0, 474,Real-time distributed program reliability analysis,"Distributed program reliability has been proposed as a reliability index for distributed computing systems to analyze the probability of the successful execution of a program, task, or mission in the system. However, current reliability models proposed for distributed program reliability evaluation do not capture the effects of real-time constraints. We propose an approach to the reliability analysis of distributed programs that addresses real-time constraints. Our approach is based on a model for evaluating transmission time, which allows us to find the time needed to complete execution of the program, task, or mission under evaluation. With information on time-constraints, the corresponding Markov state space can then be defined for reliability computation. To speed up the evaluation process and reduce the size of the Markov state space, several dynamic reliability-preserving reductions are developed. A simple distributed real-time system is used as an example to illustrate the feasibility and uniqueness of the proposed approach",1993,0, 475,STAR: a fault-tolerant system for distributed applications,The paper presents a fault-tolerant manager for distributed applications. This manager provides an efficient recovery of hosts' failures on networks of workstations. An independent checkpointing is used to automatically recover application processes affected by host failures. Domino-effects are avoided by means of message logging and file versions management. STAR provides efficient software failure detection by structuring hosts in a logical ring.
Performance measurements in a real environment show the interest and the limits of our system,1993,0, 476,A Bayesian approach to confidence in system testing,The authors present a purely Bayesian technique for computing the probability that the conclusion reached during a fault isolation is correct given only the results of the (potentially unreliable) tests that have been run and information about the reliability of those tests. This technique is then applied to both off-line and real-time system analysis as a means of determining the potential correctness of a fault isolation conclusion as well as to guarantee (if possible) an arbitrarily selected probability that a conclusion is correct,1993,0, 477,Hardware design-for-test process improvements and resulting testability tool requirements,"Testability engineering has relevance in all program phases, at all levels of a weapon system, and for all levels of diagnostics and maintenance. Engineering tool development for testability, however, lags far behind tools for operational hardware and software development. This paper addresses the fundamental elements of the testability process in the hardware design phase and recommends attributes of tools needed to support a truly concurrent engineering environment",1993,0, 478,Design and implementation of the multilevel test philosophy,"This paper describes the design and implementation of a Test Program Set (TPS) based on a multilevel test philosophy. The multilevel test philosophy not only incorporates the concepts of vertical testability and directed diagnostics, but also explores the test strategy and test encapsulation concepts being defined by the IEEE A Broad Based Environment for Test (ABBET) subcommittee",1993,0, 479,Software architecture review for telecommunications software improvement,"Software architecture reviews (SARs) have been established to help assure that a Bellcore client company (BCC) receives network elements and network switching systems of sufficiently high quality to meet its needs. The motivation for the SAR is discussed along with its methodology, checklists, application and expected benefits. The SAR is designed to assess three aspects of software reliability and quality. The impact of implementing new requirements in existing software architectures is assessed. The robustness of software to faults that occur in the field is described, and the fault prevention methods used before product deployment to reduce the number of software faults are characterized",1993,0, 480,A fault-tolerant dynamic scheduler for distributed hard-real-time systems,"A dynamic run-time scheduler is proposed that enhances the effects of a pre-run-time scheduling algorithm for real-time applications. The tasks are of a periodic nature and each task has two schedulable versions: primary and alternate. The primary produces an accurate result while the alternate produces an approximate result but takes less time and should be scheduled if the primary fails to meet the deadline. The objective of the dynamic scheduler is to maximize the number of primaries that are scheduled. The scheduling algorithm and performance results for different failure rates are also given. The performance results show that, for lower failure probability of the primaries scheduled during pre-run-time, the algorithm succeeds in scheduling a higher number of primaries during run-time",1993,0, 481,VLSI neural network architectures,"VLSI architectures for neural networks are presented. 
Neural networks have wide-ranging applications in classification, control, and optimization. With the need for real-time performance, VLSI neural networks have gained significant attention. Digital, analog, and mixed-mode designs are used for this application. Modular and reconfigurable designs are necessary so that various neural network models can be easily configured",1993,0, 482,Towards a test standard for board and system level mixed-signal interconnects,"This paper describes basic requirements for a standard bus for testing analog interconnects. The viewpoint is that of prospective users. An architecture for such a standard bus is proposed. The basic conception is of a mixed-signal version of boundary-scan and is compatible with, indeed built upon, ANSI/IEEE Std 1149.1. The goals in mind are the detection of faults and the measurement of analog interconnect parameters. Among the desired benefits are test and test data commonality throughout an assembly hierarchy, from manufacturing to field service",1993,0, 483,A comparison of stuck-at fault coverage and IDDQ testing on defect levels,"This paper presents experimental data comparing the effect of functional stuck-at fault testing and IDDQ testing on defect levels. Results obtained for parts tested with varying stuck-at fault coverage are compared with results from IDDQ testing. Empirical data are analyzed against two theoretical fault models to demonstrate that correlation can be obtained. The results indicate that significant benefits can be gained from IDDQ testing, even with non-deterministic test locations and relatively few measurements. Data are also presented on the impact of IDDQ testing on burn-in monitoring and customer field failures of ASIC designs. Finally some practical issues associated with implementing IDDQ tests are discussed",1993,0, 484,Certification trails and software design for testability,"This paper investigates design techniques which may be applied to make program testing easier. We present methods for modifying a program to generate additional data which we refer to as a certification trail. This additional data is designed to allow the program output to be checked more quickly and effectively. Certification trails have heretofore been described primarily from a theoretical perspective. In this paper, we report on a comprehensive attempt to assess experimentally the performance and overall value of the certification trail method. The method has been applied to nine fundamental, well-known algorithms for the following problems: convex hull, sorting, huffman tree, shortest path, closest pair, line segment intersection, longest increasing subsequence, skyline, and voronoi diagram. Run-time performance data for each of these problems is given, and selected problems are described in more detail. Our results indicate that there are many cases in which certification trails allow for significantly faster overall program execution time than a two-version programming approach, and also give further evidence of the breadth of applicability of this method",1993,0, 485,Assessing real-valued transform algorithms for fast convolution in electric power quality calculations,"It is argued that real- versus complex-valued trigonometric transform algorithms may deserve increased awareness in systems analysis in general, and application to nonsinusoidal waveform propagation in electric power systems in particular, due to their favorable convolution property when linear, time-invariant systems are studied. 
The performance of the three real-valued trigonometric transforms is assessed based on timing study results. The importance of instructions other than the multiplicative and additive complexities, namely those instructions involved in data transfer operations (e.g. loads and stores), is pointed out",1993,0, 486,The business case for software reuse,"To remain competitive, software development organizations must reduce cycle time and cost, while at the same time adding function and improving quality. One potential solution lies in software reuse. Because software reuse is not free, we must weigh the potential benefits against the expenditures of time and resources required to identify and integrate reusable software into products. We first introduce software reuse concepts and examine the cost-benefit trade-offs of software reuse investments. We then provide a set of metrics used by IBM to accurately reflect the effort saved by reuse. We define reuse metrics that distinguish the savings and benefits from those already gained through accepted software engineering techniques. When used with the return-on-investment (ROI) model described in this paper, these metrics can effectively establish a sound business justification for reuse and can help assess the success of organizational reuse programs.",1993,0, 487,SONET-based integrated digital loop carrier (IDLC) software reliability modeling,"An overview of the mathematical framework for the nonhomogeneous Poisson process (NHPP) model, and the design and implementation of the software reliability test tool are presented. The operational profile-driven testing in measuring software quality from the user's point of view is presented. It is shown how NHPP can be used to predict software failure behavior and estimate how much testing time is needed to achieve given reliability objectives. It is recommended that building an automated test tool will save time and improve accuracy in failure identification",1993,0, 488,IDDQ test results on a digital CMOS ASIC,"The test program of a digital CMOS ASIC (application-specific integrated circuit) was instrumented for IDDQ current measurement. The design test included a low-fault-coverage functional set of vectors as well as a high-fault-coverage scan set of vectors. Analysis of current distributions of parts failing and passing vector sets provides insight into the potential of static current testing. Many issues remain, but the data suggest that real quality improvements can be obtained from implementing static current testing on a small subset of device vectors",1993,0, 489,Designing reliable software,"In this paper we examine the quantitative software attributes that are known to be related to software faults and subsequently software failures. The object is to develop design guidelines based on these quantitative attributes that will assist the evaluation of alternatives in the software design process. In order for the several design alternatives to provide meaningful input to the design process, the software system must first be characterized in terms of a series of operational profiles of its projected use",1993,0, 490,Enhancing accuracy of software reliability prediction,"The measurement and prediction of software reliability require the use of the software reliability growth models (SRGMs). The predictive quality can be measured by the average end-point projection error.
In this paper, the effects of two orthogonal classes of approaches to improve prediction capability of a SRM have been examined using a large number of data sets. The first approach is preprocessing of data to filter out short term noise. The second is to overcome the bias inherent in the model. The results show that proper application of these two approaches can be more important than the selection of the model",1993,0, 491,Dependability modeling and evaluation of recovery block systems,"The paper presents performance modeling and evaluation of recovery block systems. In order to produce a dependability model for a complete fault tolerant system we consider the interaction between the faults in the alternatives and the faults in the acceptance test. The study is based on finite state continuous time Markov model, and unlike previous works, we carry out the analysis in the time domain. The undetected and total failure probabilities (safety and reliability), as well as the average recovery block execution time expressions are obtained. Derived mathematical relations between failure probabilities (i.e. reliability and safety) and modeling parameters enable us to gain a great deal of quantitative results",1993,0, 492,Software recreate problems estimated to range 10-20 percent: A case study on two operating system products,"Software recreates are necessitated due to inadequate diagnostic capability following a failure. They impact the service process and the perception of availability, but have never been adequately quantified. This paper develops a technique to make the key measurements of: percent recreate, arrival rate and open time, from problem service data without requiring any additional instrumentation. The study is conducted over an 18 month period on two operating system products, that are among the best in the industry for diagnosis and service. The results provide the first insight into the problem and some accurate baselines",1993,0, 493,A hierarchical object-oriented approach to fault tolerance in distributed systems,"We develop an object-oriented framework to support fault tolerance in distributed systems using nested atomic actions. The inherent properties of the object-oriented programming paradigm enhance the error detection and recovery capabilities of the fault tolerance schemes. In our approach, error detection is performed locally within the object, whereas the recovery from errors is accomplished either locally within the object or globally across the active objects. We develop a queue-based backward recovery scheme for the global object restoration which greatly reduces the performance and storage overhead when compared to the existing schemes. We illustrate our approach with the help of prototype implementations",1993,0, 494,Adding capability checks enhances error detection and isolation in object-based systems,"Error detection and error isolation are becoming stringent requirements for many computational problems requiring high reliability in addition to high performance. This paper presents CAPACETI, a technique for utilizing capabilities at the application level in order to achieve dependable operations. The proposed technique is further augmented with executable assertions and other software error detection techniques. The effectiveness of the techniques to detect errors, their contribution to the overall coverage, and their performance overhead were experimentally obtained using fault/error injection. 
Results obtained from these experiments show that high coverage with a low performance overhead can be achieved by selectively combining different error detection techniques",1993,0, 495,Infinite failure models for a finite world: A simulation study of the fault discovery process,"Many software reliability growth models have been published in journals and conference proceedings over the last 20-25 years. Each one has been justified based on theoretical or empirical evidence. Although there are many ways of classifying these models, a particularly interesting classification involves distinguishing models based upon the asymptotic expected number of total failures. These models are termed infinite and finite failure models since each one expects infinite and finite failures respectively as time approaches infinity. As might be expected, theoretic and especially empirical justification for the appropriateness of infinite failure models came after justifications for finite failure models. Infinite failure models were associated with weak fault repair systems or possibly with highly nonuniform usage, although this term is non-specific. We demonstrate through simulations of black-box testing (and/or field usage) that infinite failure models are appropriate in situations where there is perfect and near-perfect repair and where usage is uniform for the vast majority of the system",1993,0, 496,A neural network modeling methodology for the detection of high-risk programs,"The profitability of a software development effort is highly dependent on both timely market entry and the reliability of the released product. To get a highly reliable product to the market on schedule, software engineers must allocate resources appropriately across the development effort. Software quality models based upon data drawn from past projects can identify key risk or problem areas in current similar development efforts. Knowing the high-risk modules in a software design is a key to good design and staffing decisions. A number of researchers have recognized this, and have applied modeling techniques to isolate fault-prone or high-risk program modules early in the development cycle. Discriminant analytic classification models have shown promise in performing this task. We introduce a neural network classification model for identifying high-risk program modules, and we compare the quality of this model with that of a discriminant classification model fitted with the same data. We find that the neural network techniques provide a better management tool in software engineering environments",1993,0, 497,Application transparent fault management in fault tolerant Mach,"A general purpose operating system fault management mechanism, the sentry, has been defined and implemented for the Mach 3.0 microkernel running a UNIX 4.3 BSD server. The value of a mechanism in the operating system domain is usually judged by two criteria: the suitability of the mechanism to support a wide range of policies and the performance cost of the mechanism. Similarly, in fault detection and recovery there are a relatively large number of strategies which can be mapped onto mechanisms and policies for fault tolerance. To highlight the properties of the sentry mechanism for fault management, the suitability and performance of the proposed mechanism are being evaluated for sample fault detection policies and for sample fault recovery policies.
In the fault detection domain use of the mechanism to support assertion type policy is presented and evaluated through an example. Two recovery policies have been chosen and evaluated: checkpoint/restart and checkpoint/restart/journaling.",1993,0, 498,"Faults, symptoms, and software fault tolerance in the Tandem GUARDIAN90 operating system","The authors present a measurement-based study of software failures and recovery in the Tandem GUARDIAN90 operating system using a collection of memory dump analyses of field software failures. They identify the effects of software faults on the processor state and trace the propagation of the effects to other areas of the system. They also evaluate the role of the defensive programming techniques and the software fault tolerance of the process pair mechanism implemented in the Tandem system. Results show that the Tandem system tolerates nearly 82% of reported field software faults, thus demonstrating the effectiveness of the system against software faults. Consistency checks made by the operating system detect 52% of software problems and prevent any error propagation in 31% of software problems. Results also show that 72% of reported field software failures are recurrences of known software faults and 70% of the recurrence groups have identical characteristics.",1993,0, 499,The variation of software survival time for different operational input profiles (or why you can wait a long time for a big bug to fail),"Experimental and theoretical evidence for the existence of contiguous failure regions in the program input space (blob defects) is provided. For real-time systems where successive input values tend to be similar, blob defects can have a major impact on the software survival time because the failure probability is not constant. For example, with a random walk input sequence, the probability of failure decreases as the time from the last failure increases. It is shown that the key factors affecting the survival time are the input trajectory, the rate of change of the input values, and the surface area of the defect (rather than its volume).",1993,0, 500,Simulation of software behavior under hardware faults,"A simulation-based software-model that permits application specific dependability analysis in the early design stages is introduced. The model represents an application program by decomposing it into a graph model consisting of a set of nodes, a set of edges that probabilistically determine the flow from node to node, and a mapping of the nodes to memory. The software model simulates the execution of the program while errors are injected into the program's memory space. The model provides application-dependent parameters such as detection and propagation times and permits evaluation of function on system level error detection and recovery schemes. A case study illustrates the interaction between an application program and two detection schemes. Specifically, Gaussian elimination programs running on a Tandem Integrity S2 system with memory scrubbing are studied. Results from the simulation-based software model are validated with data measured from an actual Tandem Integrity S2 system. Application dependent coverage values obtained with the model are compared with those obtained via traditional schemes that assume uniform or ramp memory access patterns. 
For the authors' program, some coverage values obtained with the traditional approaches were found to be 100% larger than those obtained with the software model.",1993,0, 501,Efficient memory access checking,"A new approach provides efficient concurrent checking of memory accesses using a signature embedded into each instance of a data structure, and using a new LOAD/STORE instruction that reads the data structure signature as a side effect. Software memory-access faults are detectable using this approach, including corrupted pointers, uninitialized pointers, stale pointers, and copy overruns. Hardware memory-access faults are also detectable, including faults in the memory access path and the register file. Instruction scheduling minimizes the cost of the side-effect reads, and signatures are checked with little overhead using the hardware monitor previously proposed for signature monitoring. Benchmark results for the MIPS R3000 executing code scheduled by a modified GNU C Compiler show that an average of 53% of the memory accesses are checked, and that access checking causes an average of less than 5% performance overhead.",1993,0, 502,SACEM: A fault tolerant system for train speed control,"The authors give an overview of the SACEM system which controls the train movements on RER A in Paris, which transports daily one million passengers. The various aspects of the dependability of the system are described, including the techniques aimed at insuring safety (online error detection, software validation). Fault tolerance of the onboard-ground compound system is emphasized.",1993,0, 503,A systematic and comprehensive tool for software reliability modeling and measurement,"Sufficient work has been done to demonstrate that software reliability models can be used to monitor reliability growth over a useful range of software development projects. However, due to the lack of appropriate tools, the application of software reliability models as a means for project management is not as widespread as it might be. The existing software reliability modeling and measurement programs are either difficult for a nonspecialist to use, or short of a systematic and comprehensive capability in the software reliability measurement practice. To address the ease-of-use and the capability issues, the authors have prototyped a software reliability modeling tool called CASRE, a Computer-Aided Software Reliability Estimation tool. Implemented with a systematic and comprehensive procedure as its framework, CASRE will encourage more widespread use of software reliability modeling and measurement as a routine practice for software project management. The authors explore the CASRE tool to demonstrate its functionality and capability.",1993,0, 504,A microprocessor-based measuring unit for high-speed distance protection,"This paper describes a microprocessor-based measuring unit which is suitable for use in high-speed digital distance relays. The unit uses an algorithm that provides accurate estimates of apparent resistance and inductance of the line in relatively short times. The algorithm deals effectively with the presence of nonfundamental frequency components in signals without making assumptions about their compositions. The hardware and software of the unit are described. Test results indicate that the estimates converge to their true values within about 10 ms after fault inception. 
Also, the ability of the unit to provide accurate estimates is not adversely affected by the system frequency drifting from its nominal value and by the distortion of the input signals.",1993,0, 505,TTP - a protocol for fault-tolerant real-time systems,"The Time-Triggered Protocol integrates such services as predictable message transmission, clock synchronization, membership, mode change, and blackout handling. It also supports replicated nodes and replicated communication channels. The authors describe their architectural assumptions, fault hypothesis, and objectives for the TTP protocol. After they elaborate on its rationale, they give a detailed protocol description. They also discuss TTP characteristics and compare its performance with that of other protocols proposed for control applications",1994,0, 506,Exploiting instruction-level parallelism for integrated control-flow monitoring,"Computer architectures are using increased degrees of instruction-level machine parallelism to achieve higher performance, e.g., superpipelined, superscalar and very long instruction word (VLIW) processors. Full utilization of such machine parallelism is difficult to achieve and sustain, resulting in the occurrence of idle resources at run time. This work explores the use of such idle resources for concurrent error detection in processors employing instruction-level machine parallelism. The Multiflow TRACE 14/300 processor, a VLIW machine, is chosen as an experimental vehicle. Experiments indicate that significant idle resources are likely to exist across a wide range of scientific applications for the TRACE 14/300. A methodology is presented for detecting transient control-flow errors, called available resource-driven control-flow monitoring (ARC), whose resource use can be tailored to the existence of idle resources in the processor. Results of applying ARC to the Multiflow TRACE 14/300 processor show that >99% of control-flow errors are detected with negligible performance overhead. These results demonstrate that ARC is highly effective in using the idle resources of a processor to achieve concurrent error detection at a very low cost",1994,0, 507,Designing an agent synthesis system for cross-RPC communication,"Remote procedure call (RPC) is the most popular paradigm used today to build distributed systems and applications. As a consequence, the term "RPC" has grown to include a range of vastly different protocols above the transport layer. A resulting problem is that programs often use different RPC protocols, cannot be interconnected directly, and building a solution for each case in a large heterogeneous environment is prohibitively expensive. We describe the design of a system that can synthesize programs (RPC agents) to accommodate RPC heterogeneities. Because of its synthesis capability, the system also facilitates the design and implementation of new RPC protocols through rapid prototyping. We have built a prototype system to validate the design and to estimate the agent development costs and cross-RPC performance. The evaluation shows that the synthesis approach provides a more general solution than existing approaches do, and with lower software development and maintenance costs, while maintaining reasonable cross-RPC performance",1994,0, 508,Managing code inspection information,"Inspection data is difficult to gather and interpret. At AT&T Bell Laboratories, the authors have defined nine key metrics that software project managers can use to plan, monitor, and improve inspections.
Graphs of these metrics expose problems early and can help managers evaluate the inspection process itself. The nine metrics are: total noncomment lines of source code inspected in thousands (KLOC); average lines of code inspected; average preparation rate; average inspection rate; average effort per KLOC; average effort per fault detected; average faults detected per KLOC; percentage of reinspections; defect-removal efficiency.",1994,0, 509,A comparative study of pattern recognition techniques for quality evaluation of telecommunications software,"The extreme risks of software faults in the telecommunications environment justify the costs of data collection and modeling of software quality. Software quality models based on data drawn from past projects can identify key risk or problem areas in current similar development efforts. Once these problem areas are identified, the project management team can take actions to reduce the risks. Studies of several telecommunications systems have found that only 4-6% of the system modules were complex [LeGall et al. 1990]. Since complex modules are likely to contain a large proportion of a system's faults, the approach of focusing resources on high-risk modules seems especially relevant to telecommunications software development efforts. A number of researchers have recognized this, and have applied modeling techniques to isolate fault-prone or high-risk program modules. A classification model based upon discriminant analytic techniques has shown promise in performing this task. The authors introduce a neural network classification model for identifying high-risk program modules, and compare the quality of this model with that of a discriminant classification model fitted with the same data. They find that the neural network techniques provide a better management tool in software engineering environments. These techniques are simpler, produce more accurate models, and are easier to use",1994,0, 510,Software testing and sequential sampling,"The paper briefly describes sequential sampling and explains how such sampling procedures can be integrated into a software testing strategy to provide benefit. Using first-pass failure rate (a simple software testing metric) as a quality indicator, the authors show theoretically and empirically how sequential sampling plans can be used to establish a mechanism with known statistical confidence/risk which allows for decisions to stop testing when product quality either meets the target or is worse than the expectation. Quality assessment, as well as quality certification roles, can be addressed. When sequential sampling is coupled with other simple statistical techniques, information can also be provided to help identify fault prone areas. Data from releases of the 5ESS Switch project are used in several examples to validate the usefulness of the techniques",1994,0, 511,Cost impact of telecommunications switch software problems on divested Bell operating companies,"Bellcore has developed a model to support the periodic calculation and reporting of customer costs resulting from poor quality in telecommunications switching system software. This model is designed to provide, as a primary output, total switching system software cost of poor quality, along with a breakdown of this total by major cost component.
The primary input to the model is software RQMS data, described in Bellcore's reliability and quality measurements for telecommunications systems, TR-TSY-000929, and reported by switching system suppliers to the divested Bell operating companies (telcos) on systems operated by these companies. Telco and switching system supplier alike are expected to benefit from the ongoing application of the model, through which accumulated maintenance cost information will be made available to support telco switching system management functions and supplier quality improvement efforts",1994,0, 512,When to stop testing for large software systems with changing code,"Developers of large software systems must decide how long software should be tested before releasing it. A common and usually unwarranted assumption is that the code remains frozen during testing. We present a stochastic and economic framework to deal with systems that change as they are tested. The changes can occur because of the delivery of software as it is developed, the way software is tested, the addition of fixes, and so on. Specifically, we report the details of a real time trial of a large software system that had a substantial amount of code added during testing. We describe the methodology, give all of the relevant details, and discuss the results obtained. We pay particular attention to graphical methods that are easy to understand, and that provide effective summaries of the testing process. Some of the plots found useful by the software testers include: the Net Benefit Plot, which gives a running chart of the benefit; the Stopping Plot, which estimates the amount of additional time needed for testing; and diagnostic plots. To encourage other researchers to try out different models, all of the relevant data are provided",1994,0, 513,The derivation and experimental verification of clock synchronization theory,"The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions",1994,0, 514,Compressionless Routing: a framework for adaptive and fault-tolerant routing,"Compressionless Routing (CR) is a new adaptive routing framework which provides a unified framework for efficient deadlock-free adaptive routing and fault-tolerance. CR exploits the tight-coupling between wormhole routers for flow control to detect potential deadlock situations and recover from them. Fault-tolerant Compressionless Routing (FCR) extends Compressionless Routing to support end-to-end fault-tolerant delivery.
Detailed routing algorithms, implementation complexity and performance simulation results for CR and FCR are presented. CR has the following advantages: deadlock-free adaptive routing in torus networks with no virtual channels, simple router designs, order-preserving message transmission, applicability to a wide variety of network topologies, and elimination of the need for buffer allocation messages. FCR has the following advantages: tolerates transient faults while maintaining data integrity (nonstop fault-tolerance), tolerates permanent faults, can be applied to a wide variety of network topologies, and eliminates the need for software buffering and retry for reliability. These advantages of CR and FCR not only simplify hardware support for adaptive routing and fault-tolerance, they also can simplify communication software layers",1994,0,965 515,The Replica Management System: a scheme for flexible and dynamic replication,"The actual gains achieved by replication are a complex function of the number of replicas, the placement of those replicas, the replication protocol, the nature of the transactions performed on the replicas, and the availability and performance characteristics of the machines and networks composing the system. This paper describes the design and implementation of the Replica Management System, which allows a programmer to specify the quality of service required for replica groups in terms of availability and performance. From the quality of service specification, information about the replication protocol to be used, and data about the characteristics of the underlying distributed system, the RMS computes an initial placement and replication level. As machines and communications systems are detected to have failed or recovered, or performance characteristics change, the RMS can be re-invoked to compute an updated mapping of replicas which preserves the desired quality of service. The result is a flexible, dynamic and dependable replication system",1994,0, 516,Integrated maintainability analysis: a practical case study,"Working from preliminary time estimates and software expectations, a system for premature maintainability analysis and prediction for a parallel signal processing system was offered on schedule to the customer. Upon review, a mutual plan was agreed that further data was needed and would be obtained, throughout the fault isolation software development and formal testing. Early fault insertion data provided a new software design direction, physical time studies verified selection of the best tools, formal BIT diagnostic testing isolated some software corrections and provided significant data points along with the formal maintainability demonstration. Using the initial maintainability model as the controlling metric throughout the program, each time a change was observed that could effect one or more of the parameters in the model, a new prediction was calculated. These reiterative calculations, integrating the progression of data, provided a consistent measure of the teams progress and assured a final best estimate of the systems maintainability capability with a level of confidence acceptable to ourselves and our customer",1994,0, 517,Achieving reliability growth on real-time systems,"This paper addresses the principles used to predict and attain reliability growth on real-time systems. System reliability modeling techniques that include software reliability, maintenance effectiveness, and failure recovery are discussed in detail. 
Several software reliability growth models are discussed with emphasis on measured reliability growth of fielded software. The impact of maintenance effectiveness, which is a measure of the maintainer's skill and training levels, is shown. The need to develop and measure the robustness of failure recovery algorithms is emphasized in this paper. All of these factors are combined with the failure and repair characteristics of hardware to create comprehensive reliability growth models for real-time systems. Through the authors' research, they have determined that effective failure recovery algorithms are the key to attaining highly reliable systems. Without them, redundant computer systems that run banking and air traffic control systems will come crashing down with possibly disastrous results. The modeling and measurement techniques discussed in this paper provide the reliability practitioner with the methods to predict and achieve reliability growth resulting from improved software reliability and recovery algorithms. A fault tolerant system's ability to recover from hardware and software failures is gauged by a parameter called coverage. Coverage is the conditional probability of recovery given that a failure has occurred. Because of its huge impact on system reliability, the measurement of coverage is emphasized",1994,0, 518,Fuzzy Weibull for risk analysis,"Reliability and the life of components are frequently prime safety considerations. Extensive qualitative analysis employing probabilistic risk assessment has been widely performed to minimize hazards or accidents. Weibull probability data and information is a vital tool of quantitative risk assessments, but so are qualitative methods such as fault tree analysis. Qualitative aspects of product risk are subjective and contain many uncertainties. Most risk analyses do not have a means of dealing with uncertainty. Fuzzy set theoretical methods deal with supporting reasoning with uncertainty efficiently and conclusively. A unique fuzzy logic method employing Weibulls to represent membership functions for a set of fuzzy values along with “crisp” values has been developed for addressing uncertainties. The paper describes a type of AI software for risk analysis. A complete description is presented with parallels to previous methods. The model discussed is called fuzzy fault tree (FFT). It employs “fuzzy Weibull” membership functions which have been demonstrated in a working prototype. The prototype system which provides maximum utility in minimizing risk uncertainties is programmed in C for Apple Macintosh platforms. The results validate this application of fuzzy logic to qualitative risk assessment modeling, and lend credibility to the validity of the approach. The fuzzy Weibulls used in the modeling process perform quite well. Before developing FFT into an operational system, several calibration trials with a variety of risk assessment problems will be attempted",1994,0, 519,Real-time communication in FDDI-based reconfigurable networks,"We report our ongoing research in real-time communication with FDDI-based reconfigurable networks. The original FDDI architecture was enhanced in order to improve its fault-tolerance capability while a scheduling methodology, including message assignment, bandwidth allocation, and bandwidth management is developed to support real-time communication.
As a result, message deadlines are guaranteed even in the event of network faults",1994,0, 520,Fuzzy methods for multisensor data fusion,"This paper presents a new general framework for information fusion based on fuzzy methods. In the proposed architecture, feature extraction and feature fusion are performed by adopting a class of (possibly multidimensional) membership functions which take care of the possibility and admissibility of the feature itself. Test cases of one-dimensional and image data fusion are presented",1994,0, 521,Optimal design of large software-systems using N-version programming,"Fault tolerant software uses redundancy to improve reliability; but such redundancy requires additional resources and tends to be costly, therefore the redundancy level needs to be optimized. Our optimization models determine the optimal level of redundancy within a software system under the assumption that functionally equivalent software components fail independently. A framework illustrates the tradeoff between the cost of using N-version programming and the improved reliability for a software system. The 2 models deal with: a single task, and multitask software. These software systems consist of several modules where each module performs a subtask and, by sequential execution of modules, a major task is performed. Major assumptions are: 1) several versions of each module, each with an estimated cost and reliability, are available, 2) these module versions fail independently. Optimization models are used to select the optimal set of versions for each module such that the system reliability is maximized and total cost remains within budget",1994,0, 522,Certification of software components,"Reuse is becoming one of the key areas in dealing with the cost and quality of software systems. An important issue is the reliability of the components, hence making certification of software components a critical area. The objective of this article is to try to describe methods that can be used to certify and measure the ability of software components to fulfil the reliability requirements placed on them. A usage modelling technique is presented, which can be used to formulate usage models for components. This technique will make it possible not only to certify the components, but also to certify the system containing the components. The usage model describes the usage from a structural point of view, which is complemented with a profile describing the expected usage in figures. The failure statistics from the usage test form the input of a hypothesis certification model, which makes it possible to certify a specific reliability level with a given degree of confidence. The certification model is the basis for deciding whether the component can be accepted, either for storage as a reusable component or for reuse. It is concluded that the proposed method makes it possible to certify software components, both when developing for and with reuse",1994,0, 523,Compiler assisted fault detection for distributed-memory systems,"Distributed-memory systems provide the most promising performance to cost ratio for multiprocessor computers due to their scalability. However the issues of fault detection and fault tolerance are critical in such systems since the probability of having faulty components increases with the number of processors. We propose a methodology for fault detection through compiler support. 
More specifically, we augment the single-program multiple-data (SPMD) execution model to duplicate selected data items in such a way that during execution, whenever a value of a duplicated data is computed, the owners of the data are tested. The proposed compiler assisted fault detection technique does not require any specialized hardware and allows for a selective choice of redundancy at compile time",1994,0, 524,Checkpointing SPMD applications on transputer networks,"Providing fault-tolerance for parallel/distributed applications is a problem of paramount importance, since the overall failure rate of the system increases with the number of processors, and the failure of just one processor can lead to the complete crash of the program. Checkpointing mechanisms are a good candidate to provide the continuity of the applications in the occurrence of failures. In this paper, we present an experimental study of several variations of checkpointing for SPMD (single process, multiple data) applications. We used a typical benchmark to experimentally assess the overhead, advantages and limitations of each checkpointing scheme",1994,0, 525,An experiment to assess different defect detection methods for software requirements inspections,"Software requirements specifications (SRS) are usually validated by inspections, in which several reviewers read all or part of the specification and search for defects. We hypothesize that different methods for conducting these searches may have significantly different rates of success. Using a controlled experiment, we show that a scenario-based detection method, in which each reviewer executes a specific procedure to discover a particular class of defects has a higher defect detection rate than either ad hoc or checklist methods. We describe the design, execution and analysis of the experiment so others may reproduce it and test our results for different kinds of software developments and different populations of software engineers",1994,0, 526,A programmer performance measure based on programmer state transitions in testing and debugging process,"To organize and manage software development teams, it is important to evaluate the capability of each programmer based on reliable and easily collected data. We present a system which automatically monitors programmer activities, and propose a programmer debugging performance measure based on data monitored by the system. The system automatically categorizes programmer activity in real time into three types (compilation, program execution, and program modification) by monitoring and analyzing key strokes of a programmer. The resulting outputs are the time sequences of monitored activities. The measure we propose is the average length of debugging time per fault, D, estimated from the data sequences monitored by the system. To estimate the debugging time per fault, we introduce a testing and debugging process model. The process model has parameters associated with the average length of a program modification, d, and the probability of a fault being fixed completely by a program modification, r. By taking account of r as well as d, the debugging time per fault can be estimated with high accuracy.
The model parameters, such as d and r, are computed from the monitored data sequences by using a maximum likelihood estimation method",1994,0, 527,Collaborative modeling and visualization: an EPA HPCC initiative,"The transfer of high-performance computing and visualization technologies to state environmental protection agencies is an integral part of the US Environmental Protection Agency's High-Performance Computing and Communications (HPCC) Program. One aspect of this process involves collaborative modeling for air quality efforts associated with the 1990 Clean Air Act. Another is the use of visualization as a diagnostic tool to depict modeling scenarios for various pollution control strategies. A fundamental step in this process involves using the existing Internet and the Nationwide Information Infrastructure (NII) for sharing data and electronic communication. Another step includes the development of visualization tools and user-friendly interfaces for collaborative exploration of environmental data sets.<>",1994,0, 528,Achieving higher SEI levels,"Two years or more can pass between formal SEI (Software Engineering Institute) assessments using the Capability Maturity Model (CMM). An organization seeking to monitor its progress to a higher SEI level needs a method for internally conducting incremental assessments. The author provides one that has proven successful at Motorola. A method was developed for assessing progress to higher SEI levels that lets engineers and managers evaluate an organization's current status relative to the CMM and identify weak areas for immediate attention and improvement. This method serves as an effective means to ensure continuous process improvement as well as grassroots participation and support in achieving higher maturity levels. This progress-assessment process is not intended as a replacement for any formal assessment instruments developed by the SEI, but rather as an internal tool to help organizations prepare for a formal SEI assessment. Although the author provides examples in terms of CMM version 1.1, both the self-evaluation instrument and the progress-assessment process are generic enough for use with any (similar) later version of the SEI CMM by updating the worksheets and charts used.<>",1994,0, 529,Bootstrap: fine-tuning process assessment,"Bootstrap was a project done as part of the European Strategic Program for Research in Information Technology. Its goal was to develop a method for software-process assessment, quantitative measurement, and improvement. In executing that goal, Bootstrap enhanced and refined the Software Engineering Institute's process-assessment method and adapted it to the needs of the European software industry-including nondefense sectors like banking, insurance, and administration. This adaptation provided a method that could be applied to a variety of software-producing units, small to medium software companies or departments that produce software within a large company. Although the Bootstrap project completed in 1993, its attribute-based method for assessing process maturity continues to evolve.
The authors describe the elements of the method, how it can be used to determine readiness for ISO 9001 certification, and how it was applied in two instances to substantially improve processes.<>",1994,0, 530,Software process evolution at the SEL,"The Software Engineering Laboratory of the National Aeronautics and Space Administration's Goddard Space Flight Center has been adapting, analyzing, and evolving software processes for the last 18 years (1976-94). Their approach is based on the Quality Improvement Paradigm, which is used to evaluate process effects on both product and people. The authors explain this approach as it was applied to reduce defects in code. In examining and adapting reading techniques, we go through a systematic process of evaluating the candidate process and refining its implementation through lessons learned from previous experiments and studies. As a result of this continuous, evolutionary process, we determined that we could successfully apply key elements of the cleanroom development method in the SEL environment, especially for projects involving fewer than 50000 lines of code (all references to lines of code refer to developed, not delivered, lines of code). We saw indications of lower error rates, higher productivity, a more complete and consistent set of code comments, and a redistribution of developer effort. Although we have not seen similar reliability and cost gains for larger efforts, we continue to investigate the cleanroom method's effect on them.<>",1994,0, 531,Validating metrics for ensuring Space Shuttle flight software quality,"In this article, we cover the validation of software quality metrics for the Space Shuttle. Experiments with Space Shuttle flight software show that the Boolean OR discriminator function can successfully validate metrics for controlling and predicting quality. Further, we found that statement count and node count are the metrics most closely associated with the discrepancy reports count, and that with data smoothing their critical values can be used as predictors of software quality. We are continuing our research and comparing validated metrics with actual results in an attempt to determine whether the use of additional metrics provides sufficiently greater discriminative power to justify the increased inspection costs.<>",1994,0, 532,A fast method for determining electrical and mechanical qualities of induction motors using a PC-aided detector,"A unique computer-aided detector is presented that determines the mechanical and electrical qualities of single-phase and three-phase induction motors. Measurements of voltage, current, power factor, active power, and reactive power as well as evaluations of centrifugal switch and airgap eccentricity, i.e., misalignment between rotor and stator, of induction motors can be rapidly conducted, and test results are available on the screen within a few seconds. The feasibility and usefulness of this detector are impressively demonstrated through application examples",1994,0, 533,Using metrics to manage software projects,"Five years ago, Bull's Enterprise Servers Operation in Phoenix, Arizona, used a software process that, although understandable, was unpredictable in terms of product quality and delivery schedule. The process generated products with unsatisfactory quality levels and required significant extra effort to avoid major schedule slips. All but the smallest software projects require metrics for effective project management. 
Hence, as part of a program designed to improve the quality, productivity, and predictability of software development projects, the Phoenix operation launched a series of improvements in 1989. One improvement based software project management on additional software measures. Another introduced an inspection program, since inspection data was essential to project management improvements. Project sizes varied from several thousand lines of code (KLOC) to more than 300 KLOC. The improvement projects enhanced quality and productivity. In essence, Bull now has a process that is repeatable and manageable, and that delivers higher quality products at lower cost. We describe the metrics we selected and implemented, illustrating with examples drawn from several development projects.<>",1994,0, 534,Modeling the relationship between source code complexity and maintenance difficulty,"Canonical correlation analysis can be a useful exploratory tool for software engineers who want to understand relationships that are not directly observable and who are interested in understanding influences affecting past development efforts. These influences could also affect current development efforts. In this paper, we restrict our findings to one particular development effort. We do not imply that either the weights or the loadings of the relations generalize to all software development efforts. Such generalization is untenable, since the model omitted many important influences on maintenance difficulty. Much work remains to specify subsets of indicators and development efforts for which the technique becomes useful as a predictive tool. Canonical correlation analysis is explained as a restricted form of soft modeling. We chose this approach not only because the terminology and graphical devices of soft modeling allow straightforward high-level explanations, but also because we are interested in the general method. The general method allows models involving many latent variables having interdependencies. It is intended for modeling complex interdisciplinary systems having many variables and little established theory. Further, it incorporates parameter estimation techniques relying on no distributional assumptions. Future research will focus on developing general soft models of the software development process for both exploratory analysis and prediction of future performance.<>",1994,0, 535,Multiple fault detection in parity checkers,"Parity checkers are widely used in digital systems to detect errors when systems are in operation. Since parity checkers are monitoring circuits, their reliability must be guaranteed by performing a thorough testing. In this work, multiple fault detection of parity checkers is investigated. We have found that all multiple stuck-at faults occurring on a parity tree can be completely detected using test patterns provided by the identity matrix plus zero vector. The identity matrix contains 1's on the main diagonal and 0's elsewhere; while the zero vector contains 0's. The identity matrix vectors can also detect all multiple general bridging faults, if the bridgings result in a wired-AND effect. However, test patterns generated from the identity matrix and binary matrix are required to detect a majority of the multiple bridging faults which yield wired-OR connections. 
Note that the binary matrix contains two 1's at each column of the matrix",1994,0, 536,Modeling and analysis of system dependability using the System Availability Estimator,"This paper reviews the System Availability Estimator (SAVE) modeling program package. SAVE is used to construct and analyze models of computer and communication systems dependability. The SAVE modeling language consists of a few constructs for describing the components in a system, their failure and repair characteristics, the interdependencies between components, and the conditions on the individual components for the system to be considered available. SAVE parses an input file and creates a Markov chain model. For small models numerical solution methods can be used, but for larger models (the state space grows exponentially with the number of components in the model), fast simulation techniques using importance sampling have to be used. We provide software demonstrations using both these techniques.<>",1994,0, 537,PARSA: a parallel program software development tool,"This paper introduces the PARSA (PARallel program Scheduling and Assessment) parallel software development tool to address the efficient partitioning and scheduling of parallel programs on multiprocessor systems. The PARSA environment consists of a user-friendly (visual), interactive, compile-time environment for partitioning, scheduling, and performance evaluation/tuning of parallel programs on different parallel computer architectures",1994,0, 538,MONSET-a prototype software development estimating tool,"The development of large-scale computer software has traditionally been a difficult cost estimation problem. Software has been developed for more than thirty years and it is reasonable to expect that the experience gained in this time, would make software development effort predictions more reliable. One way by which an organisation can benefit from past projects is to measure, track and control each project and use the collected results to assist future project estimation. This paper describes a hybrid model for software effort prediction and its validation against available data on some large software projects. A prototype software development estimation system (MONSET-Monash Software Estimating Tool) based on the proposed model is described. The system aims to provide guidance for project managers during the software development process",1994,0, 539,A stochastic reward net model for dependability analysis of real-time computing systems,"Dependability assessment plays an important role in the design and validation of fault-tolerant real-time computer systems. Dependability models provide measures such as reliability, safety and mean time to failure as functions of the component failure rates and fault/error coverage probabilities. In this paper we present a decomposition technique that accounts for both the hardware and software architectural characteristics of the modelled systems. Stochastic reward nets are employed as a unique modeling framework. Dependability of a railroad control computer, which relies on software techniques for fault/error handling, is analysed as an application example",1994,0, 540,Software quality measurement based on fault-detection data,"We develop a methodology to measure the quality levels of a number of releases of a software product in its evolution process. The proposed quality measurement plan is based on the faults detected in field operation of the software.
We describe how fault discovery data can be analyzed and reported in a framework very similar to that of the QMP (quality measurement plan) proposed by B. Hoadley (1986). The proposed procedure is especially useful in situations where one has only very little data from the latest release. We present details of implementation of solutions to a class of models on the distribution of fault detection times. The conditions under which the families: exponential, Weibull, or Pareto distributions might be appropriate for fault detection times are discussed. In a variety of typical data sets that we investigated one of these families was found to provide a good fit for the data. The proposed methodology is illustrated with an example involving three releases of a software product, where the fault detection times are exponentially distributed. Another example for a situation where the exponential fit is not good enough is also considered",1994,0, 541,On measurement of operational security [software reliability],"Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of `the ability of the system to resist attack'. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit `more secure behaviour' in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of `operational security' similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf. rate of occurrence of failures in reliability), or the probability that a specified `mission' can be accomplished without a security breach (cf. reliability function). This new approach is based on the analogy between system failure and security breach, but it raises several issues which invite empirical investigation. We briefly describe a pilot experiment that we have conducted to judge the feasibility of collecting data to examine these issues",1994,0, 542,"Testability, failure rates, detectability, trustability and reliability","Discusses the relationship between several statistical measures of program dependability, including failure rates and testability. This is done by describing these concepts within the framework of a confidence-based measure called trustability. Suppose that M is a testing method, F is a class of faults and P is a class of programs. Suppose that the probability of a fault from F causing a failure is at least D when a program p∈P is tested according to M, if in fact p contains a fault of type F. Then D is called the detectability of M with respect to F and P. If we test a program using a method with detectability D, and see no faults, then we can conclude with risk at most 1-D that the program has no faults, i.e. we can have confidence at least C=D that the program is fault-free for the associated fault class F. 
If we have confidence at least C that a program has no faults, then we say that the program has trustability C with respect to F. More refined measures of trustability can be defined which also take fault class frequencies into account. Testability is defined to be the probability of finding a fault in a program p, if p contains a fault. The probability that a program will fail when it is tested over its operational distribution is called its failure rate. Trustability is confidence in the absence of faults and reliability is the probability of a program operating without failure. Trustability and reliability coincide if the class of faults for which we have a certain level of trustability is the class of common case faults",1994,0, 543,What is software reliability?,"Reliability refers to statistical measures an engineer uses to quantify imperfection in practice. Often we speak imprecisely of an object having “high reliability”, but technically, unless the object cannot fail at all, its reliability is arbitrarily close to zero for a long enough period of operation. This is merely an expression of the truism that an imperfect object must eventually fail. At first sight, it seems that software should have a sensible reliability, as other engineered objects do. But the application of the usual mathematics is not justified. Reliability theory applies to random (as opposed to systematic) variations in a population of similar objects, whereas software defects are all design flaws, not at all random, in a unique object. The traditional cause of failure is a random process of wear and tear, while software is forever as good (or as bad!) as new. However, software defects can be thought of as lurking in wait for the user requests that excite them, like a minefield through which the user must walk",1994,0, 544,"Testability, testing, and critical software assessment","Although the phrases “critical system” and “critical software” encompass different degrees of “criticality” based on the user and application, I consider critical software to be that which performs a task whose success is necessary to avoid a loss of property or life. Software testability is a software characteristic that refers to the ease with which some formal or informal testing criteria can be satisfied. There are varying metrics that can be applied to this measurement. Software validation generally refers to the process of showing that software is computing an expected function. Software testing is able to judge the quality of the code produced. Software testability, on the other hand, is not able to do so, because it has no information concerning whether the code is producing correct or incorrect results. It is only able to predict the likelihood of incorrect results occurring if a fault or faults exist in the code. Software testability is a validation technique, but in a different definition of the term “validation” that the IEEE Standard Glossary of Software Engineering Terminology allows for. Software testability is assessing behavioral characteristics that are not related to whether the code is producing correct output",1994,0, 545,Input preprocessor for creating MCNP models for use in developing large calculational databases,"With the rapid increase in workstation performance in the last decade, computational power is no longer the limiting factor for using Monte Carlo modeling to design and characterize nuclear well-logging tools.
Instead, the development of tool models, modification of tool models, and the interpretation of computational results limit the throughput of MCNP modelers. A new software program called MCPREP has been developed that greatly increases the productivity of nuclear modelers in the development of large calculational databases. In addition, the use of the preprocessor can improve quality control of the modeling effort, can allow novice modelers to run complex and realistic problems, and can decrease the post-processing of the MCNP output files",1994,0, 546,Speech coding: a tutorial review,"The past decade has witnessed substantial progress towards the application of low-rate speech coders to civilian and military communications as well as computer-related voice applications. Central to this progress has been the development of new speech coders capable of producing high-quality speech at low data rates. Most of these coders incorporate mechanisms to: represent the spectral properties of speech, provide for speech waveform matching, and “optimize” the coder's performance for the human ear. A number of these coders have already been adopted in national and international cellular telephony standards. The objective of this paper is to provide a tutorial overview of speech coding methodologies with emphasis on those algorithms that are part of the recent low-rate standards for cellular communications. Although the emphasis is on the new low-rate coders, we attempt to provide a comprehensive survey by covering some of the traditional methodologies as well. We feel that this approach will not only point out key references but will also provide valuable background to the beginner. The paper starts with a historical perspective and continues with a brief discussion on the speech properties and performance measures. We then proceed with descriptions of waveform coders, sinusoidal transform coders, linear predictive vocoders, and analysis-by-synthesis linear predictive coders. Finally, we present concluding remarks followed by a discussion of opportunities for future research",1994,0, 547,A PC-based system to assist researchers in electric power distribution systems analysis,"A PC-based system has been developed as an aid to research in power engineering. This system was developed using off-the-shelf hardware cards, and a commercially available software package that provides a high-level programming “shell” for hardware interfacing/control and graphic user interface development. The system consists of a data acquisition unit and a waveform generator. The data acquisition unit can monitor and record actual field measurements, whereas the waveform generator is able to recreate analog versions of stored data in a laboratory environment. The system has been used to test a prototype high-impedance fault detection unit which is under development. The system can also be used as an aid in classroom instruction. The design and development of the system reflects a simple cost-effective way for students to gain valuable practical experience in developing hardware/software-based systems within a reasonable length of time",1994,0, 548,A Markov chain model for statistical software testing,"Statistical testing of software establishes a basis for statistical inference about a software system's expected field quality. This paper describes a method for statistical testing based on a Markov chain model of software usage. The significance of the Markov chain is twofold.
First, it allows test input sequences to be generated from multiple probability distributions, making it more general than many existing techniques. Analytical results associated with Markov chains facilitate informative analysis of the sequences before they are generated, indicating how the test is likely to unfold. Second, the test input sequences generated from the chain and applied to the software are themselves a stochastic model and are used to create a second Markov chain to encapsulate the history of the test, including any observed failure information. The influence of the failures is assessed through analytical computations on this chain. We also derive a stopping criterion for the testing process based on a comparison of the sequence generating properties of the two chains",1994,0, 549,Fault-tolerant routing in mesh architectures,"It is important for a distributed computing system to be able to route messages around whatever faulty links or nodes may be present. We present a fault-tolerant routing algorithm that assures the delivery of every message as long as there is a path between its source and destination. The algorithm works on many common mesh architectures such as the torus and hexagonal mesh. The proposed scheme can also detect the nonexistence of a path between a pair of nodes in a finite amount of time. Moreover, the scheme requires each node in the system to know only the state (faulty or not) of each of its own links. The performance of the routing scheme is simulated for both square and hexagonal meshes while varying the physical distribution of faulty components. It is shown that a shortest path between the source and destination of each message is taken with a high probability, and, if a path exists, it is usually found very quickly",1994,0, 550,High-resolution estimation and on-line monitoring of natural frequency of aircraft structure specimen under random vibration,"The growth of structural damage reduces the stiffness which results in a decrease in the natural frequency of a specimen (coupon) under random vibration. However, the identification of the natural frequency is not a trivial problem because the coupons are shaken in a random fashion in which many closely-spaced frequencies are involved. The signal may have other redundant frequencies involved which are close to the natural frequency. The simplest way to identify the frequencies of multiple sinusoids is the Fourier transform spectral analysis. However, when the frequencies of interest are closely spaced, the Fourier transform spectrum analysis fails to resolve these frequencies. In this paper these facts are shown for a real vibration signal. A high-resolution frequency estimation approach, such as the forward-backward linear prediction (FBLP) method, is adopted here to identify all the prominent frequencies in a certain bandwidth. From the pth order FBLP model, we estimate p positive frequencies. Among these p frequencies, the one that will decrease as the structure degenerates is the natural frequency and is monitored in an on-line mode. An estimation of natural frequency and its prediction is made regularly (every one second) based on all available estimated natural frequencies once the structure enters the fatigue zone. The system is useful for on-line health monitoring of any structure under random vibration. We have applied this system to monitor several aluminum coupons of aircraft structure.
The tests were performed at the Wright Research Lab and the software monitoring system was developed at Tennessee State University",1994,0, 551,A tool for automatically gathering object-oriented metrics,"Measuring software and the software development process are essential for organizations attempting to improve their software processes. For organizations using the object-oriented development approach, traditional metrics and metric gathering tools meant for procedural and structured software development are typically not meaningful. Chidamber and Kemerer proposed a suite of metrics for object-oriented design based upon measurement theory and guided by the insights of experienced object-oriented software developers. This paper presents the implementation of a tool to gather a subset of these proposed metrics for an object oriented software development project. Software metrics can be gathered at many levels of granularity; this tool provides measurements at two levels: class-level and system-level. Class-level metrics measure the complexity of individual classes contained in the system. These metrics can be very useful for predicting defect rates and estimating cost and schedule of future development projects. Examples of class-level metrics are depth of inheritance tree and class coupling. System-level metrics deal with a collection of classes that comprise an object-oriented system. Number of class clusters and number of class hierarchies are two examples of system-level metrics. The tool was written in C++ and currently only supports C++ software measurement. The authors plan to add the capability to measure software written in other object-oriented languages in the near future",1994,0, 552,An innovative tool for designing fault tolerant cockpit display symbology,"This research focuses on the design and development of a software package to aid display designers in creating fault tolerant fonts and symbology for monochrome dot-matrix displays. Since dot-matrix displays are subject to non-catastrophic failures [rows, columns, and individual picture elements], display designers find it necessary to address hardware reliability as a key design element when avoidance of operator reading errors is mission critical. This paper addresses row and column failure modes. Building redundancy into the design of font characters and symbology can provide additional protection from reading errors. The software package developed for the design of fault tolerant fonts, referred to herein as FontTool, operates on an IBM PC or compatible hardware platform within a Microsoft DOS environment. FontTool can simulate row or column dot-matrix display failures and “predict” likely human reading errors. Based on limited testing, FontTool reading error “predictions” were found to be consistent with actual human performance reading error data about 86% of the time. FontTool uses Euclidean distance between 2-D Fourier transformed representations of dot-matrix characters as a metric for predicting character “similarity”. Although this metric has been applied previously, FontTool is a major advance in aiding display designers to build more fault tolerant cockpit display symbology",1994,0, 553,A knowledge based approach to fault detection and diagnosis in industrial processes: a case study,"The article proposes a scheme for on-line fault detection and diagnosis using an expert system. This approach is being tested at a beet-sugar factory in Spain.
A great deal of critical situations may arise in the normal operation of such a process. They are now managed by plant operators and cover both cases: faulty equipment diagnosis, which requires module substitution, as well as the detection of major improper control actions that may cause plant problems in the long run. This prototype not only comprises aspects related with fault detection and diagnosis but also is responsible for supervisory tasks to improve the whole process performance. The paper includes a description of the system architecture, a detailed presentation of the fault detection and diagnosis modules, and some evaluation of the results obtained by the system operation at the factory",1994,0, 554,Complexity metrics for rule-based expert systems,"The increasing application of rule-based expert systems has led to the urgent need to quantitatively measure their quality, especially the maintainability which is harder and more expensive than that of conventional software because of the dynamic and evolutionary features of rules. One of the main factors that affect the maintainability of rule-based expert systems is their complexity; but so far little effort has been devoted to measure it. The paper investigates several complexity metrics for rule-based expert systems, and presents some evaluation methods based on statistical testing, analysis and comparison to assess the validity of these metrics. 71 rule-based expert systems are collected as test data from different application areas. The results reveal that a properly defined complexity metric, like our proposed RC, can be used as an effective means to measure the complexity of rule-based expert systems",1994,0, 555,Improving code churn predictions during the system test and maintenance phases,"We show how to improve the prediction of gross change using neural networks. We select a multiple regression quality model from the principal components of software complexity metrics collected from a large commercial software system at the beginning of the testing phase. Our measure of quality is based on gross change, and is collected at the end of the maintenance phase. This quality measure is attractive for study as it is both objective and easily obtained directly from the source code. Then, we train a neural network with the complete set of principal components. Comparisons of the two models, gathered from eight related software systems, shows that the neural network offers much improved predictive quality over the multiple regression model",1994,0, 556,A resynchronization method for real-time supervision,"Real-time supervision is one technique used to improve the perceived reliability of software systems. A real-time supervisor observes the inputs and outputs of the system and reports failures that occur. The approach presented in this paper uses the specification of external behavior of the system to detect failures. Failures are reported in real-time. In addition, the approach permits the assessment of the erroneous states of the system. Following a failure, the supervisor makes an assumption of the (system) erroneous state. Consequences of the same fault are not reported repeatedly. The supervisor accommodates the nondeterminism permissible under some specification formalism. The formalism considered in this paper is the CCITT SDL",1994,0, 557,On available bandwidth in FDDI-based reconfigurable networks,"Networks used in mission-critical applications must be highly fault-tolerant. 
The authors propose an FDDI-based reconfigurable network (FBRN) that can survive multiple faults. An FBRN consists of multiple FDDI trunk rings and has the ability to reconfigure itself in the face of extensive damage. They consider three classes of FBRNs depending on their degree of reconfigurability: N-FBRN (nonreconfigurable), P-FBRN (partially reconfigurable) and F-FBRN (fully reconfigurable). They analyze the performance of FBRNs in terms of their available bandwidth in the presence of faults. In mission-critical systems, it is not enough to ensure that the network is connected in spite of faults; the traffic carrying capacity of the network must be sufficient to guarantee the required quality of service. They present a probabilistic analysis of the available bandwidth of FBRNs given the number of faults. They find that a fully reconfigurable FBRN can provide a high available bandwidth even in the presence of a large number of faults. A partially reconfigurable FBRN is found to be an excellent compromise between high reliability and ease of implementation",1994,0, 558,"Comments on ""Analysis of self-stabilizing clock synchronization by means of stochastic Petri nets""","We point out some errors in the combinatorial approach for the steady-state analysis of a deterministic and stochastic Petri net (DSPN) presented by M. Lu, D. Zhang, and T. Murata (1990). However, the methodology for analyzing the self-stability of fault-tolerant clock synchronisation (FCS) systems introduced in that paper is applicable for FCS systems with many clocking modules, even if the combinatorial approach is not valid. This is due to the progress in improving the efficiency of the DSPN solution algorithm made in recent years. We show that the explicit computation of the steady-state solution of the DSPN can be performed with reasonable computational effort on a modern workstation by the software package DSPNexpress.<>",1994,0, 559,A class of recursive interconnection networks: architectural characteristics and hardware cost,"We propose and analyze a new class of interconnection networks, RCC, for interconnecting the processors of a general purpose parallel computer system. The RCC is constructed incrementally through the recursive application of a complete graph compound. The RCC integrate positive features of both complete networks and a given basic network. This paper presents the principles of constructing RCC and analyzes its architectural characteristics, its message routing capability, and its hardware cost. A specific instance of this class, RCC-CUBE, is shown to have desirable network properties such as small diameter, small degree, high density, high bandwidth, and high fault tolerance and is shown that they compare favorably to those of the hypercube, the 2-D, mesh, and the folded hypercube. Moreover, RCC-CUBE is shown to emulate the hypercube in constant time under any message distribution. The hardware cost and physical time performance are estimated under some packaging constraints for RCC-CUBE and compared to those of the hypercube, the folded hypercube, and the 2-D mesh demonstrating an overall cost effectiveness for RCC-CUBE. 
Hence, the RCC-CUBE appears to be a good candidate for next generation massively parallel computer systems",1994,0, 560,Applying various learning curves to hyper-geometric distribution software reliability growth model,"The hyper-geometric distribution software reliability growth model (HGDM) has been shown to be able to estimate the number of faults initially resident in a program at the beginning of the test-and-debug phase. A key factor of the HGDM is the “sensitivity factor”, which represents the number of faults discovered and rediscovered at the application of a test instance. The learning curve incorporated in the sensitivity factor is generally assumed to be linear in the literature. However, this assumption is apparently not realistic in many applications. We propose two new sensitivity factors based on the exponential learning curve and the S-shaped learning curve, respectively. Furthermore, the growth curves of the cumulative number of discovered faults for the HGDM with the proposed learning curves are investigated. Extensive experiments have been performed based on two real test/debug data sets, and the results show that the HGDM with the proposed learning curves estimates the number of initial faults better than previous approaches",1994,0, 561,An exploratory analysis of system test data,"We conduct an exploratory data analysis of a specific system test data set that includes concomitant variables measuring aspects of test effort as well as the failure count and time variables considered in standard software reliability analyses. We consider a family of Poisson process models for which failure rates may depend on time as well as on the test effort variables. We analyze failure rates computed from this test data using least squares parameter estimation and model selection via the predicted residual sum of squares (PRESS)",1994,0, 562,Empirical studies of predicate-based software testing,"We report the results of three empirical studies of fault detection and stability performance of the predicate-based BOR (Boolean Operator) testing strategy. BOR testing is used to develop test cases based on formal software specification, or based on the implementation code. We evaluated the BOR strategy with respect to some other strategies by using Boolean expressions and actual software. We applied it to software specification cause-effect graphs of a safety-related real-time control system, and to a set of N-version programs. We found that BOR testing is very effective at detecting faults in predicates, and that BOR-based approach has consistently better fault detection performance than branch testing, thorough (but informal) functional testing, simple state-based testing, and random testing. Our results indicate that BOR test selection strategy is practical and effective for detection of faulty predicates and is suitable for generation of safety-sensitive test-cases",1994,0, 563,On the impact of software product dissimilarity on software quality models,"The current software market favors software development organizations that apply software quality models. Software engineers fit quality models to data collected from past projects. Predictions from these models provide guidance in setting schedules and allocating resources for new and ongoing development projects. To improve model stability and predictive quality, engineers select models from the orthogonal linear combinations produced using principal components analysis.
However, recent research revealed that the principal components underlying source code measures are not necessarily stable across software products. Thus, the principal components underlying the product used to fit a regression model can vary from the principal components underlying the product for which we desire predictions. We investigate the impact of this principal components instability on the predictive quality of regression models. To achieve this, we apply an analytical technique for accessing the aptness of a given model to a particular application",1994,0, 564,Temporal complexity and software faults,"Software developers use complexity metrics to predict development costs before embarking on a project and to estimate the likelihood of faults once the system is built. Traditional measures, however, were designed principally for sequential programs, providing little insight into the added complexity of concurrent systems or increased demands of real-time systems. For the purpose of predicting cost and effort of development, the CoCoMo model considers factors such as real-time and other performance requirements; for fault prediction, however, most complexity metrics are silent on concurrency. An outline for developing a measure of what we term temporal complexity, including significant and encouraging results of preliminary validation, is presented. 13 standard measures of software complexity are shown to define only two distinct domains of variance in module characteristics. Two new domains of variance are uncovered through 6 out of 10 proposed measures of temporal complexity. The new domains are shown to have predictive value in the modeling of software faults",1994,0, 565,Software trustability,"A measure of software dependability called trustability is described. A program p has trustability T if we are at least T confident that p is free of faults. Trustability measurement depends on detectability. The detectability of a method is the conditional probability that it will detect faults. The trustability model characterizes the kind of information needed to justify a given level of trustability. When the required information is available, the trustability approach can be used to determine strategies in which methods are combined for maximum effectiveness. It can be used to determine the minimum amount of resources needed to guarantee a required degree of trustability, and the maximum trustability that is achievable with a given amount of resources",1994,0, 566,Putting assertions in their place,"Assertions that are placed at each statement in a program can automatically monitor the internal computations of a program execution. However, the advantages of universal assertions come at a cost. A program with such extensive internal instrumentation will be slower than the same program without the instrumentation. Some of the assertions may be redundant. The task of instrumenting the code with correct assertions at each location is burdensome, and there is no guarantee that the assertions themselves will be correct. We advocate a middle ground between no assertions at all (the most common practice) and the theoretical ideal of assertions at every location. Our compromise is to place assertions only at locations where traditional testing is unlikely to uncover software faults. 
One type of testability measurement, sensitivity analysis, identifies locations where testing is unlikely to be effective",1994,0, 567,Connecting test coverage to software dependability,"It is widely felt that software quality in the form of reliability or “trustworthiness” can be demonstrated by the successful completion of testing that “covers” the software. However, this intuition has little experimental or theoretical support. The paper considers why the intuition is so powerful and yet misleading. Formal definitions of software “dependability” are suggested, along with new approaches for measuring this analogy of trustworthiness",1994,0, 568,Modelling an imperfect debugging phenomenon with testing effort,"A software reliability growth model (SRGM) based on non-homogeneous Poisson processes (NHPP) is developed. The model describes the relationship between the calendar time, the testing effort consumption and the error removal process under an imperfect debugging environment. The role of learning (gaining experience) with the progress of the testing phase is taken into consideration by assuming that the imperfect debugging probability is dependent on the current software error content. The model has the in-built flexibility of representing a wide range of growth curves. The model can be used to plan the amount of testing effort required to achieve a pre-determined target in terms of the number of errors removed in a given span of time",1994,0, 569,The relationship between test coverage and reliability,"Models the relationship between testing effort, coverage and reliability, and presents a logarithmic model that relates testing effort to test coverage: statement (or block) coverage, branch (or decision) coverage, computation use (c-use) coverage, or predicate use (p-use) coverage. The model is based on the hypothesis that the enumerables (like branches or blocks) for any coverage measure have different detectability, just like the individual defects. This model allows us to relate a test coverage measure directly to the defect coverage. Data sets for programs with real defects are used to validate the model. The results are consistent with the known inclusion relationships among block, branch and p-use coverage measures. We show how the defect density controls the time-to-next-failure. The model can eliminate variables like the test application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested",1994,0, 570,A generalized software reliability process simulation technique and tool,"The paper describes the structure and rationale of the generalized software reliability process and a set of simulation techniques that may be applied for the purpose of software reliability modeling. These techniques establish a convenient means for studying a realistic, end-to-end software life cycle that includes intricate subprocess interdependencies, multiple defect categories, many factors of influence, and schedule and resource dependencies, subject to only a few fundamental assumptions, such as the independence of causes of failures. The goals of this research are dual: first, to generate data for truly satisfying the simplified assumptions of various existing models for the purpose of studying their comparative merits, and second, to enable these models to extend their merits to a less idealized, more realistic reliability life cycle.
This simulation technique has been applied to data from a spacecraft project at the Jet Propulsion Laboratory; results indicate that the simulation technique potentially may lead to more accurate tracking and more timely prediction of software reliability than obtainable from analytic modeling techniques",1994,0, 571,A case study to investigate sensitivity of reliability estimates to errors in operational profile,"We report a case study to investigate the effect of errors in an operational profile on reliability estimates. A previously reported tool named TERSE was used in this study to generate random flow graphs representing programs, model errors in operational profile, and compute reliability estimates. Four models for reliability estimation were considered: the Musa-Okumoto model, the Goel-Okumoto model, coverage enhanced Musa-Okumoto model, and coverage enhanced Goel-Okumoto model. It was found that the error in reliability estimates from these models grows nonlinearly with errors in operational profile. Results from this case study lend credit to the argument that further research is necessary in development of more robust models for reliability estimation",1994,0, 572,Reliability growth modelling of software for process control systems,"Reliability growth models are the most widely known and used in practical applications. Nevertheless, the performances of these models limit their applicability to the control and assessment of the attainment of very low reliability levels. One of the reasons for their insufficient predictive quality is the scarcity of information they use for reliability estimation. They do not take into account information that could be easily collected from code observation, such as the code complexity, or during testing such as the coverage obtained. The paper analyzes the problems encountered using reliability growth models for the evaluation of software used in a measurement equipment control system",1994,0, 573,Sensitivity of field failure intensity to operational profile errors,"Sensitivity of field failure intensity estimates to operational profile occurrence probability errors is investigated. This is an important issue in software reliability engineering, because these estimates enter into many development decisions. Sensitivity was computed for 59,200 sets of conditions, spread over a wide range. For 99.4% of these points, the failure intensity was very robust with respect to occurrence probability errors, the error in failure intensity being more than a factor of 5 smaller than the occurrence probability error. Thus projects do not have to spend extra effort to obtain high precision in measuring the operational profile, nor do they need to worry about moderate changes in system usage with time",1994,0, 574,Some effects of fault recovery order on software reliability models,"Since traditional approaches to software reliability modeling allow the user to formulate predictions using data from one realization of the debugging process, it is necessary to understand the influence of the fault recovery order on predictive performance. We introduce an experimental methodology using a data structure called the debugging graph and use it to analyze the effects of various fault recovery orders on the predictive accuracy of four well-known software reliability algorithms. Further we note fault interactions and their potential effects on the predictive process. 
Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interactions",1994,0, 575,Assessing the dynamic strength of software systems using interference analysis,"The concept of dynamic strength is closely related to reliability: the probability that a software system does not encounter a latent fault during execution. Dynamic strength is assessed by analyzing the interference between the execution profile, a probability density for system size, and the composite static strength distributions. Composite static strength is the sum of the relative software complexity metrics of each of the system's software modules. Composite static strength is a density function with size as the variate. Latent faults occur in the region of interference when the execution distribution exceeds a critical level of the strength distribution. An important relationship between the execution profile and strength distributions is characterized by the return period function",1994,0, 576,Bounding worst-case instruction cache performance,"The use of caches poses a difficult tradeoff for architects of real-time systems. While caches provide significant performance advantages, they have also been viewed as inherently unpredictable, since the behavior of a cache reference depends upon the history of the previous references. The use of caches is only suitable for real-time systems if a reasonably tight bound on the performance of programs using cache memory can be predicted. This paper describes an approach for bounding the worst-case instruction cache performance of large code segments. First, a new method called static cache simulation is used to analyze a program's control flow to statically categorize the caching behavior of each instruction. A timing analyzer, which uses the categorization information, then estimates the worst-case instruction cache performance for each loop and function in the program",1994,0, 577,Simulation based metrics prediction in software engineering,"The software industry is increasingly working towards the goal of software reuse. The use of metrics to assess software modules has long been advocated by the US Army. A combination of these two trends should make it possible for a software developer to predict the metrics of the composite software system given the metrics of the component software modules. It should also enable the developer to make an appropriate choice among many possible modules available in the software repository. The paper presents an approach to solve the above two problems. The metrics suite used is an extension of the US Army STEP metrics, the approach is simulation based, and its theoretical validation is provided from approximation theory",1994,0, 578,Measuring program structure with inter-module metrics,"A good structure is an important quality aspect of a program. Well-structured, modular programs are less costly to maintain than unstructured monolithic ones. Quantitatively assessing structure and modularity of programs can be useful to help ensure that a program is well-structured by indicating a potential need for restructuring of poorly structured code. However, previous structure metrics do not sufficiently account for the modular features used in modern, object-oriented programming languages. We propose four novel measures to assess the modular structure of a software project. Our measures are based on the principle of vocabulary hiding and measure a form of cohesion. 
A metrics prototype tool has been implemented for the Modula-3 programming language. Informal tests suggest that they are indeed useful to assess the quality of the program structure",1994,0, 579,A software project management system supporting the cooperation between managers and workers-design and implementation,"The authors aim at constructing an integrated software project management system with an object database. This paper describes a process model for software project management that forms the basis of the system. This paper also describes a framework for inducing the sequence of operations that the project manager must perform: issuing instructions to workers in accordance with a plan based on the process model, progress evaluation based on progress reports sent from the workers, and analysis of the impacts of any problems detected. In order to provide these facilities, we analyse the actions triggered per unit of activity from the viewpoint of project management. Based on this analysis, this paper proposes a finite state machine model involving the actions and the states changed by the actions. When an action is triggered, the system checks whether the action is effective or not, and provides the necessary information to the appropriate persons by referring to the relationships specified in the process model. This paper also describes prototype system based on the framework",1994,0, 580,Are the principal components of software complexity data stable across software products?,"The current software market is not suitable for organizations that place competitive bids, set schedules, or control projects without regard to past performance. Software quality models based upon data collected from past projects can help engineers to estimate costs of future development efforts, and to control ongoing efforts. Application of principal components analysis can improve the stability and predictive quality of software quality models. However, models based upon principal components are only appropriate for application to products having similar principal components. We apply a statistical technique for quantifying the similarity of principal components. We find that distinct but similar products developed by the same organization can share similar principal components, and that distinct products developed by distinct organizations will likely have dissimilar principal components",1994,0, 581,Measurement-driven quality improvement in the MVS/ESA operating system,"Achieving gains in software quality requires both the use of software metrics and the desire to make measurement-driven process improvements. This paper describes the experiences, changes, and kinds of metrics used to drive quantifiable results in a very large and complex software system. Developers on the IBM Multiple Virtual Storage (MVS) operating system track, respond and initiate better ways of controlling the software development process through the use of metric-based defect models, orthogonal defect classification (ODC), and object-oriented techniques. Constant attention to progress with respect to the measurements and working toward goals based on the metrics show that measurement-driven process improvement can lead to noticeably better products",1994,0, 582,Improving the performance of DSM systems via compiler involvement,"Distributed shared memory (DSM) systems provide an illusion of shared memory on distributed memory systems such as workstation networks and some parallel computers such as the Cray T3D and Convex SPP-1. 
On these systems, users can write programs using a shared memory style of programming instead of message passing which is tedious and error prone. Our experience with one such system TreadMarks, has shown that a large class of applications does not perform well on these systems. TreadMarks is a software distributed shared memory system designed by Rice University researchers to run on networks of workstations and massively parallel computers. Due to the distributed nature of the memory system, shared memory synchronization primitives such as locks and barriers often cause significant amounts of communication. We have provided a set of powerful primitives that will alleviate the problems with locks and barriers on such systems. We have designed two sets of primitives the first set maintains coherence and is easy to use by a programmer, the second set does not maintain coherence and is best used by a compiler. These primitives require that the underlying DSM be enhanced. We have implemented some of our primitives on the TreadMarks DSM system and obtained reasonable performance improvements on application kernels from molecular dynamics and numerical analysis. Furthermore, we have identified key compiler optimizations that use the noncoherent primitives to reduce synchronization overhead and improve the performance of DSM systems",1994,0, 583,The personal process in software engineering,"The personal software process (PSP) provides software engineers a way to improve the quality, predictability, and productivity of their work. It is designed to address the improvement needs of individual engineers and small software organizations. A graduate level PSP course has been taught at six universities and the PSP is being introduced by three industrial software organizations. The PSP provides a defined sequence of process improvement steps coupled with performance feedback at each step. This helps engineers to understand the quality of their work and to appreciate the effectiveness of the methods they use. Early experience with the PSP shows that average test defect rate improvements of ten times and average productivity improvements of 25% or more are typical",1994,0, 584,Mapping tasks into fault tolerant manipulators,"The application of robots in critical missions in hazardous environments requires the development of reliable or fault tolerant manipulators. In this paper we define fault tolerance as the ability to continue the performance of a task after immobilization of a joint due to failure. Initially, no joint limits are considered, in which case we prove the existence of fault tolerant manipulators and develop an analysis tool to determine the fault tolerant work space. We also derive design templates for spatial fault tolerant manipulators. When joint limits are introduced, analytic solutions become infeasible but instead a numerical design procedure can be used, as is illustrated through an example",1994,0, 585,Fault detection using topological case based modelimg and its application to chiller performance deterioration,"We apply a new modeling technique, Topological Case Based Modeling (TCBM), to fault diagnosis. In this paper we propose a new model which represents a continuous input/output relation using a set of numerical data, and it is possible to describe nonlinear systems. We employ the idea of case based reasoning (CBR). CBR infers a new case from stored cases and these relation.[1] It is important to define the similarity, that is a relation among the cases. 
We propose to define the similarity as the neighbourhood in input space corresponding to output accuracy and its accuracy is arbitrarily set before constructing the model. We name this technique Topological Case Based Modeling. In addition, we describe that TCBM has several advantages over other models. Finally we also show the example of TCBM to detect the deterioration for a chiller system",1994,0, 586,Application of artificial neural networks to parameter estimation of dynamical systems,Neural network (NN) based estimators of dynamical system parameters are introduced and compared to the least-squares-error estimators. Equations are derived to discuss the NN estimator existence and to express its covariance matrix. The results are illustrated using a numerical example of a 3-parameter system represented by multiexponential model,1994,0, 587,Efficient test sequence generation for localization of multiple faults in communication protocols,"Conformance test for communication protocols is indispensable for the production of reliable communications software. A lot of conformance test techniques have been developed. However, most of them can only decide whether an implemented protocol conforms to its specification. That is, the exact locations of faults are not determined by them. This paper presents some conditions that enable to find location of multiple faults, and then proposes a test sequence generation technique under such conditions. The characteristics of this technique are to generate test sequences based on protocol specifications and interim test results, and to find locations of multiple faults in protocol implementations. Although the length of the test sequence generated by the proposed technique is a little longer than the one generated by the previous one, the class to which the proposed technique can be applied is larger than that to which the previous one can be applied",1994,0, 588,On a unified framework for the evaluation of distributed quorum attainment protocols,"Quorum attainment protocols are an important part of many mutual exclusion algorithms. Assessing the performance of such protocols in terms of number of messages, as is usually done, may be less significant than being able to compute the delay in attaining the quorum. Some protocols achieve higher reliability at the expense of increased message cost or delay. A unified analytical model which takes into account the network delay and its effect on the time needed to obtain a quorum is presented. A combined performability metric, which takes into account both availability and delay, is defined, and expressions to calculate its value are derived for two different reliable quorum attainment protocols: D. Agrawal and A. El Abbadi's (1991) and Majority Consensus algorithms (R.H. Thomas, 1979). Expressions for the primary site approach are also given as upper bound on performability and lower bound on delay. A parallel version of the Agrawal and El Abbadi protocol is introduced and evaluated. This new algorithm is shown to exhibit lower delay at the expense of a negligible increase in the number of messages exchanged. Numerical results derived from the model are discussed",1994,0, 589,On the relationship between partition and random testing,"Weyuker and Jeng (ibid., vol. SE-17, pp. 703-711, July 1991) have investigated the conditions that affect the performance of partition testing and have compared analytically the fault-detecting ability of partition testing and random testing. 
This paper extends and generalizes some of their results. We give more general ways of characterizing the worst case for partition testing, along with a precise characterization of when this worst case is as good as random testing. We also find that partition testing is guaranteed to perform at least as well as random testing so long as the number of test cases selected is in proportion to the size of the subdomains",1994,0, 590,Class hierarchy based metric for object-oriented design,"Object-oriented technology, including object-oriented analysis (OOA), object-oriented design (OOD), and object-oriented programming (OOP), is a new promising approach to developing software systems to reduce software costs and to increase software extensibility, flexibility, and reusability. Software metrics are widely used to measure software complexity and assure software correctness. This paper proposes a metric to measure object-oriented software. Also, an important factor called URIs is conducted to build the metric. This approach describes a graph-theoretical method for measuring the complexity of the class hierarchy. The proposed metric shows that inheritance has a close relation with object-oriented software complexity and reveals that misuse of repeated (multiple) inheritance will increase software complexity and be prone to implicit software errors",1994,0, 591,A neural network based perceptual audio coder,"The implementations and performance results of a neural network based perceptual audio coder are reported. The coder uses the configuration of the ISO/MPEG audio layer II coder with the perceptual analysis block replaced by a 2 layer network trained to estimate the masking thresholds required for the bit allocation. The 2 layer network is trained by a back propagation algorithm using the energies in the subbands as inputs and the masking thresholds obtained from psychoacoustic model II of the ISO/MPEG audio coder as the reference outputs. The result is a coder which performs favourably in quality against the ISO/MPEG audio layer 2 coder at bit rates of 256 kbit/s and 192 kbit/s stereo. Performance at a bit rate of 128 kbit/s stereo was, however, found to be poorer",1994,0, 592,Multi-layer perceptron ensembles for increased performance and fault-tolerance in pattern recognition tasks,"Multilayer perceptrons (MLPs) have proven to be an effective way to solve classification tasks. A major concern in their use is the difficulty to define the proper network for a specific application, due to the sensitivity to the initial conditions and to overfitting and underfitting problems which limit their generalization capability. Moreover, time and hardware constraints may seriously reduce the degrees of freedom in the search for a single optimal network. A very promising way to partially overcome such drawbacks is the use of MLP ensembles: averaging and voting techniques are largely used in classical statistical pattern recognition and can be fruitfully applied to MLP classifiers. This work summarizes our experience in this field. A real-world OCR task is used as a test case to compare different models",1994,0, 593,Improving neural network predictions of software quality using principal components analysis,"The application of statistical modeling techniques has been an intensely pursued area of research in the field of software engineering. The goal has been to model software quality and use that information to better understand the software development process.
Neural network modeling methods have been applied to this field. The results reported indicate that neural network models have better predictive quality than some statistical models when predicting reliability and the number of faults. In this paper, we will explore the application of principal components analysis to neural network modeling as a way of improving the predictive quality of neural network quality models. We trained two neural nets with data collected from a large commercial software system, one with raw data, and one with principal components. Then, we compare the predictive quality of the two competing neural net models",1994,0, 594,"Internal/external test harmonization, verticality and performance metrics-a systems approach",On-line BIT (built in test) and off-line TPS (test program sets) both suffer from a history of chronic performance deficiencies. Many of these deficiencies can be remedied by more closely coordinating concurrent BIT/TPS development and by applying the same quantitative test performance metrics to both areas under a vertical test design approach,1994,0, 595,The role of test in total quality management,"The quality of a delivered product or system is ultimately limited by the quality of its design and manufacturing process. Defects that escape the design, manufacturing, and test processes are delivered to customers. Process Potential Index (Cp) and Process Capability Index (Cpk) are now widely used to express design and manufacturing quality. Design quality may be estimated from product models, if they exist, or measured using parametric test data. The difficulty in using test data is not so much in the calculations, but in the recording, storing, and selection of the data to be used in the calculations, This paper defines the metrics of quality estimation and describes an implementation of a system for estimating defect levels and process capabilities. The implementation includes the definition of a standardized parametric data exchange language for test data and an analytical system to standardize product quality measurement",1994,0, 596,Creating a diagnostic engineering tool set,"The development of supportability techniques and tools which keep pace with mission equipment technology is essential to minimize long term maintenance costs. The DoD's solution is the use of an integrated diagnostic (ID) process. An integral part of MDA's plan to implement ID is the use of automated tools to effectively assess diagnostic capabilities during the design process. The development and implementation of a procedure to effectively diagnose an item of mission equipment involves several analyses performed during various phases of the equipment development. A mixture of automated tools and manual methods are currently used to perform these analyses. The effort required to perform these diagnostic analyses can be significantly reduced and the quality of the analyses can be significantly improved by developing an automated method of implementing those portions of the analysis which are currently performed manually. 
Additional improvements can be realized by integrating those automated tools, utilized separately to perform each analysis, into a unified tool set operating on a common set of design data",1994,0, 597,Built-in diagnostics for advanced power management,"The Army's Diagnostic Analysis and Repair Tool Set (DARTS) is an advanced software product used to perform automated fault diagnostics that results in reduced logistics costs, decreased downtime and enhanced mission performance. DARTS enabled automated, knowledge based fault diagnostics to be embedded in the Advanced Modular Power Control System (AMPCS). AMPCS is an integrated hardware and software product for aerospace power management. DARTS was used in a concurrent engineering design environment as a computer aided engineering tool to optimize the fault detection and fault isolation characteristics of the AMPCS prototype design. Project results indicate a new method for linking the diagnostic knowledge base captured during design with the hardware under development to achieve automated fault isolation throughout the hardware life cycle. A successful demonstration of real-time fault diagnostics was conducted using the AMPCS prototype unit and the DARTS software in a hardware-in-the-loop laboratory system. This project reduced to practice the concept of automated, knowledge based diagnostics for electronic systems",1994,0, 598,Performance evaluation of admission policies in ATM based embedded real-time systems,"We study the effect of the output link scheduling discipline of an ATM switch on the ability of an ATM LAN to admit real-time connections. Three output link scheduling policies are studied: first come first served (FCFS), round robin (RR), and packet-by-packet generalized processor sharing (PGPS). We derive connection admission criteria for the three scheduling policies. To evaluate the performance of the three scheduling policies, we introduce the metric of admission probability. The admission probability gives the probability that a randomly chosen set of real-time connections will be admitted into the network. The admission probability allows system designers to study the performance of different scheduling policies over a wide range of network loads. We observe that the performance of the three scheduling policies is sensitive to message deadlines. When the deadlines are small, PGPS outperforms both RR and FCFS, and RR outperforms FCFS. When the deadlines are large, all three scheduling policies perform the same. We also note that although PGPS is better than RR and FCFS most of the time, its improved performance is achieved at the cost of high implementation complexity and run time overheads",1994,0, 599,Conversion of scanned documents to the open document architecture,"The paper presents a system for the conversion of scanned documents into the open document architecture. Unlike previous work in this field the authors use a combination of evidence sources to achieve greater robustness to document defects and noise introduced in the scanning process. Furthermore, they use optical character recognition in conjunction with other forms of image analysis as a means of detecting document structure. This enables enhanced document feature extraction and improved performance. 
They demonstrate the performance of the system on a specific class of input document",1994,0, 600,Assessing the fuzziness of general system characteristics in estimating software size,"Function point analysis (FPA) is a widely used method for estimating software size. However, there are subjective elements involved in the process of function point assessment. The single estimate given by FPA does not show the confidence interval. Therefore, there is no way to assess the confidence level of the estimate. The paper introduces a fuzzified function point analysis (FFPA) model to help software size estimators to express their judgment and use fuzzy B spline membership function (BMF) to derive their assessment values. Through a case study for an inhouse software development department, the paper presents the experiences of using FFPA to estimate the software size and compares it with the conventional FPA",1994,0, 601,The automatic generation of march tests,"Many memory tests have been designed in the past, one class of tests which has been proven to be very efficient in terms of fault coverage as well as test time, is the class of march tests. Designing march tests is a tedious, manual task. This paper presents a method which can, given a set of fault models, automatically generate the required march tests. It has been implemented in the programming language C and shown to be effective",1994,0, 602,A transputer-based waveform synthesizer for protection relay test and parameter estimation applications,"Power system faults can give rise to highly complex transient voltage and current waveforms, particularly during the periods immediately following the faults. Modern microprocessor-based protection relays can in principle respond intelligently to such input signals. This gives rise to the need for test equipment with the capability to evaluate protection relay performance for representative waveform data recorded for actual field conditions, or obtained alternatively by simulation using software packages such as the Electromagnetic Transients Program (EMTP). This paper describes a computer-based, multiphase, arbitrary waveform synthesizer for protection relay test applications. The system distinguishes itself from other such arrangements in that it features a highly programmable and versatile transputer-based digital to analog converter facility developed primarily with the view to obtain optimized performance for protection test applications",1994,0, 603,An experimental comparison of software methodologies for image based quality control,"In this paper we study the cost structure of a software system for the visual classification of industrial parts. We investigate two different development methodologies and compare them using a suitable cost function. The first approach aims to the development of a piece of software fully tailored to the specific problem, while the second one exploits the capabilities of adaptive signal processing techniques to reduce the software production test",1994,0, 604,Prediction of image performance and real-time quality improvement for the SIR-C/X-SAR SRL-1 mission,"For the SIR-C/X-SAR (Space Radar Lab-1) mission, launched the 9th of April 94, the expected image performance for selected sites was evaluated before the mission in terms of S/N ratio, radiometric and spatial resolution and ambiguity ratio related to the best suitable radar settings according to the nominal orbit. 
The software tool, named Performance Estimator (PE), which was used to calculate the parameters for such an image quality prediction is explained in detail. This tool was also used during the mission to be able to recalculate the radar settings such as PRF, Data Window Position (DWP), etc. According to any changes in the timeline to be sure that the sensor was always correctly pointing the desired scenario. Such changes can be caused either by not acceptable deviations of the shuttle from the planned attitude and orbit, or by the insertion of new experiments required by the scientist according to instantaneous appearing special conditions or effects e.g. volcanic eruptions or can be forced by reacting to contingencies. During the mission a further software tool was used to improve the image quality in real-time. This tool, called Performance Analyzer (PA), is also explained in detail. The purpose of this tool is to analyze the quality of the high rate data (off-line) and to assess the quality of the echo profile obtained with the telemetry data (on-Line) in order to allow, if necessary, the Performance engineer to optimize real-time the echo profile",1994,0, 605,Software risk management: requirements-based risk metrics,"There are several problems associated with current approaches to requirements-based risk metrics for software risk management. For example, the approaches are qualitative and limit risk considerations to probability of occurrence and severity of impact. They also do not take into account the relative size of the requirements category of which they are a part, i.e. the development effort needed to implement that requirements category. Finally, quantitative factors are not used. The advanced integrated requirements engineering system (AIRES) provides automated support for the quantitative identification and extraction of requirements-based software risk metrics throughout the requirements engineering life cycle. The AIRES risk assessment framework and techniques for the integrated application of both semantic and syntactic rules provide for effective, efficient, and comprehensive identification of requirements at risk in large complex multiple segment systems. A review of a portion of the requirements for a Navy Gun Mount System is included to provide an example of the use of AIRES to support the development of requirements-based risk metrics",1994,0, 606,Quantitative evaluation of expert systems,"The rapidly increasing application of expert systems has lead to the need to assure their quality, especially in the critical areas where a fault may be serious. Metrics can provide a quantitative basis for such assurance activities. In this paper, we investigate several formal metrics that are designed for measuring the three important characteristics of expert systems: the size, search space, and complexity. The applications of these metrics to assess or predict the quality of expert systems are also addressed. In addition, a comprehensive evaluation and comparison of the metrics is conducted with the aim to assess their validity when applying them to quantify expert systems",1994,0, 607,Phase-only sidelobe sector nulling for a tapered array with failed elements,"The application of the phase-only sidelobe sector nulling method to re-optimize the pattern of a tapered array after the detection of element failures has been developed under a Hughes RF computer aided design project. 
This fault correction technique uses a nonlinear computer optimization code developed by Pierre and Lowe (1975) to reshape the pattern of a partially failed array by readjusting the phase weights of the remaining good elements. The sidelobe sector nulling can be utilized to suppress interference and sea clutter from sea surface or distant airborne jammers for a shipboard radar. However, if some elements of the array fail, then the nulling region will degrade significantly. By using the computer code a new set of phase weights can be found to compensate for the lost elements and maintain the sidelobe sector nulling performance.",1994,0, 608,Systematic approach to selecting H∞ weighting functions for DC servos,"This paper proposes an experimental solution for H∞ weighting functions selection by exploiting an experimental planning method which is widely used in quality control. Conducting matrix experiments via orthogonal array, several parameters in H∞ weighting functions can be determined efficiently so that the resulting controller is able to satisfy all design specifications. Of great importance, the proposed experimental method provides an additional robust performance property to conventional H∞ control which can only prove robust stability and nominal performance. Velocity controller design of DC servomotors is demonstrated to show the remarkable robustness and superior servo performance provided by experimental H∞ design. A hardware environment is constructed to correlate the theoretical prediction and the measurement data. All specifications of robust stability and performance are validated by the experimental results",1994,0, 609,Error Recovery in Parallel Systems of Pipelined Processors with Caches,"This paper examines the problem of recovering from processor transient faults in pipelined multiprocessor systems. A pipelined machine allows out of order instruction execution and branch prediction to increase performance, thus a precise computation state may not be available. We propose a modified scheme to implement the precise computation state in a pipelined machine. The goal of this research is to implement checkpointing and rollback for error recovery in a pipelined system based on the technique for achieving a precise computation state. Detailed analysis has been performed to demonstrate the effectiveness of this method.",1994,0, 610,Temporal resolution scalable video coding,"Scalable video coding is important in a number of applications where video needs to be decoded and displayed at a variety of spatial and temporal resolution scales or when resilience to errors is necessary. In this paper, we investigate a temporal-domain approach for resolution scalable video coding. This approach was originally proposed to and since then has been included in the Moving Picture Experts Group (MPEG-2) standard. Temporal scalability, although not limited to, is particularly useful for applications such as HDTV as well as for high quality telecommunication applications that may require high frame rate (e.g., 60 Hz) progressive video. Furthermore, it allows flexibility in use of progressive or interlaced video formats for coding even though the source video may be progressive with high frame rate. Our technique builds on the motion-compensated DCT framework of nonscalable (single layer) MPEG-2 video coding and adds motion compensated inter-layer coding.
In our simulations we focus on the 2-layer temporally scalable coding and compare the performance of using progressive video format for both layers to that when interlaced video format is used for both layers; all interim video formats used are derived from original high temporal resolution progressive video. In each case we compare the performance of two inter-layer prediction configurations and evaluate their prediction efficiency",1994,0, 611,Nonseparable orthogonal linear phase perfect reconstruction filter banks and their application to image compression,"We have designed a 4-channel 10×10 nonseparable orthogonal linear phase perfect reconstruction filter bank (PRFB), hereafter NSEP, and examined its performance in image compression applications. It is shown that, in addition to the orthogonality property, NSEP offers an excellent compression performance. Because of their limited spatial support, the nonseparable filters can be implemented at a reasonable computational cost. The orthogonality property also allows the use of image compression algorithms which exploit orthogonality for bit allocation purposes. We demonstrate that the NSEP filters have superior spatial localization properties when compared to filters arising in a 2-D separable PRFB generated from a 1-D Daubechies FB of length 10. This superior spatial localization results in less visible ringing distortion in compressed images. In addition, the linear phase property of NSEP distributes the ripples located at image edges evenly on both sides of each edge, thus reducing further their visibility. Because of these advantages, NSEP is a viable and attractive candidate for high quality, low bit rate image compression applications",1994,0, 612,From the software process to software quality: BOOTSTRAP and ISO 9000,"The aim of the BOOTSTRAP project was to develop a method for software process assessment, quantitative measurement and improvement. BOOTSTRAP enhanced and refined the SEI method for software process assessment and adapted it to the needs of software producing units (SPUs) in the European software industry. BOOTSTRAP assesses both the SPU and the projects of the SPU with respect to policies and guidelines and their implementation. The BOOTSTRAP maturity level algorithm determines quantitative process quality profiles for individual process quality attributes. This quality profile serves as a basis for making decisions about process improvements and allows one to determine the degree of satisfaction of ISO 9000",1994,0, 613,Information fusion method in fault detection and diagnosis,"In order to improve the reliability of fault detection and diagnosis (FDD) for dynamic system, it is important to make full use of the information from system knowledge and system measurements. This paper presents a scheme of information fusion in FDD. In the proposed scheme, the local fusion adopts multiple FDD strategies aiming at one measurement of the system, then the results of these strategies are fused. The global fusion will fuse the results of every local fusion",1994,0, 614,Quantization table design for JPEG compression of angiocardiographic images,"The lossy JPEG standard may be used for high performance image compression. As implemented in presently available hard- and software in most cases the so-called luminance quantization table is applied for gray level images, which may be scaled by a quality factor.
The questions arise of which quality factor is optimal and whether it is possible and worthwhile to specify quantization tables for the particular characteristics of angiocardiograms. Two new quantization tables are derived from the transfer function of the angiocardiographic system, which is a worst-case approach with respect to preserving sharp edges. To assess the performance, evaluations based on Hosaka-plots are developed. These diagrams compare the different errors introduced by lossy JPEG compression objectively.",1994,0, 615,A fault tolerant pipelined adaptive filter,"A fault tolerant pipelined architecture for high sampling rate adaptive filters is presented. The architecture, which is based on the computational requirements of delayed LMS and adaptive lattice filters, offers robust performance in the presence of single hardware faults, and software faults resulting from numerical instability. The reliability of the proposed system is analyzed and compared to the existing implementation strategies, and methods for fault detection, fault location, and recovery via hardware reconfiguration are discussed. Finally, simulation results illustrating recovery from processor fault are presented",1994,0, 616,A software metrics program,"Software metrics play a central role in software management and technology, as well as in support of general business goals. Metrics are utilized at all levels of a corporation. This paper focuses on software metrics at the multi-divisional level. It views metrics as an integral part of software process. The ideas presented are based on experiences and lessons learned with the metrics and process program within GTE Government Systems Corporation (GSC). The software metrics program at GTE GSC is described, including metrics for software estimation, tracking, and control of software project progress and quality. Trends in software metrics are identified",1994,0, 617,The role of measurement in software engineering,"The software engineering community has been slow to adapt measurement into industrial practice. This article reviews some of the probable causes of this situation, suggests a general approach to metrification, and identifies the most common industrial applications of measurement. These basic applications include project estimation, project tracking, product quality control, product design improvement, process quality control, and process improvement",1994,0, 618,CAMAC standard high-speed precise spectrometer,"The paper describes the CAMAC Standard Precise High-Speed Spectrometer for X-ray and gamma spectrometry designed for applications in high-speed spectroscopy systems with a requirement to obtain high spectroscopy performance. The spectrometer contains the controlled HV (high voltage) source, spectroscopy amplifier (later-amplifier), ADC (analog to digital converter) with buffer memory, CAMAC crate controller, and hardware and software for PC. In this paper only the most important units that determine the quality and throughput performance of the spectrometer are considered. The spectrometer has the following main features: maximum registration speed: no less than 150,000 pps; maximum input count rate: 1,000,000 pps; integral nonlinearity: no more than 0.07%; differential nonlinearity: no more than 0.5%; shaping: symmetrical quasi-Gaussian; pile-up rejector dead time: 100 ns; ADC dead time: 1.7 μs.
Spectrometric data are processed and displayed on PC",1994,0, 619,Characterization inconsistencies in CdTe and CZT gamma-ray detectors,"In the past few years, significant developments in cadmium telluride (CdTe) and cadmium zinc telluride (CZT) semiconductor materials have taken place with respect to both quality and yield. Many of the more recent developments have occurred in the area of CZT crystal growth. This has resulted in an explosion of interest in the use of these materials in ambient temperature gamma-ray detectors. Most, if not all, of the manufacturers of CdTe and CZT have acquired government funding to continue research in development and applications, indicating the importance of these improvements in material quality. We have examined many detectors, along with the accompanying manufacturer's data, and it has become apparent that a clear standard does not exist by which each manufacturer characterizes the performance of their material. The result has been a wide variety of performance claims that have no basis for comparison and normally cannot be readily reproduced. In this paper we will first support our observations and then propose a standard that all manufacturers and users of these materials may use for characterization",1994,0, 620,A tool for assessing the quality of distributed software designs,"This paper reports a software tool for computing McCabe's cyclomatic complexity measure of a distributed system design specified in LOTOS, without presenting the underlying theory and algorithms in detail. Such a measure can be used to assess the quality of the design and determine the maximum number of independent cycles in an all-path test coverage. The tool accepts as an input a LOTOS specification in either textual or graphical form. It provides two modes for computation, the first one based on an actual transformation of the entire LOTOS specification to a Petri-net and the second one based on a set of formulas for computing the cyclomatic values constructively",1994,0, 621,Evaluating HACMP/6000: a clustering solution for high availability distributed systems,The requirements for highly available computer systems have evolved over time. In a commercial environment end-users and consumers do not tolerate systems that are unavailable when needed. This paper describes and evaluates the High Availability Cluster Multi-Processing (HACMP/6000) product. HACMP/6000 is a clustering solution for high availability distributed systems. Our approach in showing the effectiveness of HACMP/6000 is both qualitative and quantitative. We use availability analysis as a means of our quantitative analysis. Availability analysis is undertaken using a state-of-the-art availability analysis tool-System Availability Estimator (SAVE). Results obtained from the analysis are used for comparison with a non-HACMP/6000 system,1994,0, 622,Improving reliability of large software systems through software design and renovation,"Software renovation is a method to improve existing software and to incorporate new features and system functions without developing entirely new software. The paper proposes a method to increase software reliability and to determine software components that need to be renovated. The method considers the changed code in base and non-base software, probability of a software error, and sets of static and dynamic connections in large software applications. 
The software is renovated if the probability of error exceeds a threshold or the sets of static and dynamic connections are changed and the error in code exceeds a threshold. A number of experiments with changing model parameters are used to show the usefulness of the method. The study is of interest in telecommunications",1994,0, 623,"Personal `progress functions' in the software process","Individual developers can expect improvement in software productivity as a consequence of (i) a growing stock of knowledge and experience gained by repeatedly doing the same task (first-order learning) or (ii) due to technological and training programs supported by the organization (second-order learning). Organizations have used this type of progress behavior in making managerial decisions regarding cost estimation and budgeting, production and staff scheduling, product pricing, etc. Such progress is studied in productivity, product-quality and personal skills, in an experiment involving a sample of 12 software developers, who complete one project every week for ten weeks. Second-order training is provided to the subjects through Humphrey's Personal Software Process. A modified GQM method for measurement is used to execute the research method",1994,0, 624,Real-time supervision of software systems using the belief method,"With the critical role played by software systems in the operation of telecommunications networks, the ability to detect and report software failures has become of great importance. The paper presents the failure detection approach of real-time supervision based on the belief method. Real-time supervision allows failures in a software system to be detected in real-time based only on boundary signals and a specification of its external behaviour. This feature is of particular benefit when software is purchased from other vendors and there is no direct access to source code. An implementation of real-time supervision has been shown to be capable of failure detection in a small telephone exchange",1994,0, 625,A discrete software reliability growth model with testing effort,"We propose a discrete software reliability growth model with testing effort. The behaviour of the testing effort is described by a discrete Rayleigh curve. Assuming that the discrete failure intensity to the amount of current testing effort is proportional to the remaining error content, we formulate the model as a non-homogeneous Poisson process. Parameters of the model are estimated. We then discuss a release policy based on cost and failure intensity criteria. Numerical results are also presented",1994,0, 626,Investigating coverage-reliability relationship and sensitivity of reliability to errors in the operational profile,"The focus of the work is an investigation into the correlation between “true” reliability of a software system and the white box testing measures such as block coverage, c-uses and p-uses coverage. We believe that software reliability and testing measures, especially white box testing, are inherently related. Results from experiments are presented to support this belief. We also demonstrate that the estimated reliability is sensitive to the operational profile defined for the software and hence errors in the operational profile may lead to incorrect reliability estimates",1994,0, 627,A software to aid reliability estimation,"Estimating the reliability of software is becoming increasingly important. There are a large number of models available for estimating reliability.
Most of these models are computationally complex and require statistical inferencing to estimate the parameters. In addition, techniques for evaluating the results obtained from a given model themselves are computationally expensive. We describe a software that can apply a model of the choice of the tester (from a set of models) for estimating the reliability of a given software, and also provide data for evaluating the predictions of the model",1994,0, 628,A software reliability model in the embedded system,"We propose a software reliability model for estimating, measuring, and controlling software reliability of embedded systems, and a software test stopping equation for determining software testing time. It is not easy to correct errors occurring in embedded systems on site",1994,0, 629,Reliability growth model for object oriented software system,Object oriented design techniques are the most powerful way of developing software in the foreseen future. The techniques help in reducing the cost of development and increase the reliability of the software. We develop a software reliability growth model for an object oriented software system. The model is validated by simulating the error data. The goodness of fit and predictive validity is also tested,1994,0, 630,Testing CMOS logic gates for: realistic shorts,"It is assumed that tests generated using the single stuck-at fault model will implicitly detect the vast majority of fault-causing defects within logic elements. This may not be the case. In this paper we characterize the possible shorts in the combinational cells in a standard cell library. The characterization includes errors on the cell outputs, errors on the cell inputs, and excessive quiescent current. The characterization provides input vectors to stimulate these errors. After characterizing the faults that occur due to possible electrical shorts, we compare the coverage of the logic faults using a single stuck-at test set and tests developed specifically to detect these shorts. We discuss the effectiveness of IDDQ testing for these faults",1994,0, 631,Backplane test bus selection criteria,"The author discusses the architecture of the IEEE 1149.1 Standard, its applications and selection criteria. Although the evaluation and selection of a multi-drop addressable 1149.1 architecture are completed, the hardware has been designed such that alternative architectures could be implemented, if warranted, without a complete redesign of the payload digital electronics. The primary advantage of this architecture is the natural boundary scan test support at all levels of integration. Closed loop chains exist at each integration level and can be exercised to isolate 1149.1-detectable defects that occur at that level prior to any disassembly of the product",1994,0, 632,Estimating the intrinsic difficulty of a recognition problem,"Describes an experiment in estimating the Bayes error of an image classification problem: a difficult, practically important, two-class character recognition problem. The Bayes error gives the “intrinsic difficultyâ€?of the problem since it is the minimum error achievable by any classification method. Since for many realistically complex problems, deriving this analytically appears to be hopeless, the authors approach the task empirically. The authors proceed first by expressing the problem precisely in terms of ideal prototype images and an image defect model, and then by carrying out the estimation on pseudorandomly simulated data. 
Arriving at sharp estimates seems inevitably to require both large sample sizes-in the authors' trial, over a million images-and careful statistical extrapolation. The study of the data reveals many interesting statistics, which allow the prediction of the worst-case time/space requirements for any given classifier performance, expressed as a combination of error and reject rates",1994,0, 633,Non-linear circuit effects on analog VLSI neural network implementations,"We present an analog VLSI neural network for texture analysis; in particular we show that the filtering block, which is the most critical block of the architecture for precision of computation, can be implemented using simple and compact analog circuits, without significant loss in classification performance. Through an accurate analysis of the circuits it is possible to model the real circuit characteristics in the software simulation environment; the weights calculated in the learning phase (which is performed off-line using the adaptive simulated annealing algorithm), can be properly coded into analog circuit variables in order to implement the correct operation of the network",1994,0, 634,Achieving customer satisfaction through robust design,"A winning product has to consistently outperform its rivals by providing value in today's highly competitive markets. At the heart of consistent value is robust design - the process of ensuring that a product will perform well despite variations in raw materials, components, manufacturing, and operating conditions. It consists of steps for identifying key parameters that affect the customer, analyzing to predict and improve robustness, and measuring to confirm performance. As it exists in AT&T's Transmission Systems Business Unit (TSBU) for hardware development, robust design is a real, effective process. It should be used to ensure design robustness for all designs, especially those intended for multi-use platforms.",1994,0, 635,Simulation statistical software: an introspective appraisal,"Simulation experiments are sampling experiments by their very nature. Statistical issues dominate all aspects of a well-designed simulation study-model validation, selection of input distributions and associated parameters, experiment design frameworks, output analysis methodologies, model sensitivity, and forecasting are examples of some of the issues which must be dealt with by simulation experimenters. There are many factors which complicate analyses, such as multivariate input distributions, serially correlated model inputs and outputs, multiple performance measures, and non-linear system response, to name a few. The purpose of this panel is to discuss any and all issues related to software tools available for dealing with these and other problems. I asked five experts from within the simulation community to share their opinions and insights on the availability and quality of software to meet the statistical needs of the simulation community. The position statements provided by them are intended to serve as a springboard for a more extensive exchange of ideas during the discussion at the conference.",1994,0, 636,Automatic Data Layout for High Performance Fortran,"High Performance Fortran (HPF) is rapidly gaining acceptance as a language for parallel programming. The goal of HPF is to provide a simple yet efficient machine independent parallel programming model. Besides the algorithm selection, the data layout choice is the key intellectual step in writing an efficient HPF program.
The developers of HPF did not believe that data layouts can be determined automatically in all cases. Therefore, HPF requires the user to specify the data layout. It is the task of the HPF compiler to generate efficient code for the user supplied data layout. The choice of a good data layout depends on the HPF compiler used, the target architecture, the problem size, and the number of available processors. Allowing remapping of arrays at specific points in the program makes the selection of an efficient data layout even harder. Although finding an efficient data layout fully automatically may not be possible in all cases, HPF users will need support during the data layout selection process. In particular, this support is necessary if the user is not familiar with the characteristics of the target HPF compiler and target architecture, or even with HPF itself. Therefore, tools for automatic data layout and performance estimation will be crucial if the HPF is to find general acceptance in the scientific community. This paper discusses a framework for automatic data layout for use in a data layout assistant tool for a data-parallel language such as HPF. The envisioned tool can be used to generate a first data layout for a sequential Fortran program without data layout statements, or to extend a partially specified data layout in an HPF program to a totally specified data layout. Since the data layout assistant is not embedded in a compiler and will run only a few times during the tuning process of an application program, the framework can use techniques that may be too computationally expensive to be included in a compiler. A prototype data layout assistant tool based on our framework has been implemented as part of the D system currently under development at Rice University. The paper reports preliminary experimental results. The results indicate that the framework is efficient and generates data layouts of high quality.",1995,0, 637,Determining software schedules,"Improving software productivity, shortening schedules or time to market, and improving quality are prominent topics in software journals in both contributed articles and advertising copy. Unfortunately, most of these articles and advertisements have dealt with software schedules, productivity, or quality factors in abstract terms. Now we can measure these factors with reasonable accuracy and collect empirical data on both average and best-in-class results. We are particularly interested in the wide performance gaps between laggards, average enterprises, and industry leaders, as well as differences among the various software domains. The function-point metric lets us establish a meaningful database of software performance levels. A simple algorithm raises function points to a total to obtain a useful first-order schedule estimate",1995,0, 638,RTSPC: a software utility for real-time SPC and tool data analysis,"Competition in the semiconductor industry is forcing manufacturers to continuously improve the capability of their equipment. The analysis of real-time sensor data from semiconductor manufacturing equipment presents the opportunity to reduce the cost of ownership of the equipment. Previous work by the authors showed that time series filtering in combination with multivariate analysis techniques can be utilized to perform statistical process control, and thereby generate real-time alarms in the case of equipment malfunction. A more robust version of this fault detection algorithm is presented.
The algorithm is implemented through RTSPC, a software utility which collects real-time sensor data from the equipment and generates real-time alarms. Examples of alarm generation using RTSPC on a plasma etcher are presented",1995,0, 639,SEI capability maturity model's impact on contractors,"The Software Engineering Institute's Capability Maturity Model measures software development companies' ability to produce quality software within budget and on schedule. The CMM rates software development companies and government agencies at one of five maturity levels. Two key features are assessments and evaluations. Assessments represent a company's or government agency's voluntary, internal efforts to identify its current CMM maturity level and areas needing improvement. Evaluations represent an independent assessor's examination of a company's maturity level. Industry responses to the model vary. Some companies have reported that savings significantly offset the costs of achieving and maintaining a maturity level. Others have attacked the CMM on many fronts, such as its failure to incorporate Total Quality Management principles. Nevertheless, either enthusiastically or reluctantly companies are attempting to achieve higher CMM levels. The authors present both favorable and unfavorable reactions to the model as well as insight gained through discussions with various companies. They conclude that since no approach that enforces improvements will be universally acceptable in all aspects to all concerned, the CMM, on balance, can be considered a very successful model, particularly when combined with TQM principles. There may be uncertainty as to whether the costs of attaining and maintaining a CMM level will be recouped through reduced software production costs and more efficient software engineering practices (as published studies report). But with continued strong DoD sponsorship, there will likely be an increasing number of companies that base their software process improvement efforts on the CMM",1995,0, 640,Fault-tolerant features in the HaL memory management unit,"This paper describes fault-tolerant and error detection features in HaL's memory management unit (MMU). The proposed fault-tolerant features allow recovery from transient errors in the MMU. It is shown that these features were natural choices considering the architectural and implementation constraints in the MMU's design environment. Three concurrent error detection and correction methods employed in address translation and coherence tables in the MMU are described. Virtually-indexed and virtually-tagged cache architecture is exploited to provide an almost fault-secure hardware coherence mechanism in the MMU, with very small performance overhead (less than 0.01% in the instruction throughput). Low overhead linear polynomial codes have been chosen in these designs to minimize both the hardware and software instrumentation impact",1995,0, 641,Evaluating FTRE's for dependability measures in fault tolerant systems,"In order to analyze dependability measures in a fault tolerant system, we generally consider a nonstate space or a state space type model. A fault tree with repeated events (FTRE's) presents an important strategy for the nonstate space model. The paper deals with a conservative assessment to complex fault tree models, henceforth called CRAFT, to obtain an approximate analysis of the FTRE's. It is a noncutset, direct, bottom-up approach.
It uses failure probability or failure rate as input and determines a bound on the probability of occurrence of the TOP event. CRAFT generalizes the concept of a cutting heuristic that obtains the signal probabilities for testability measurement in logic circuits. The method is efficient and solves coherent and noncoherent FTRE's having AND, OR, XOR, and NOT gates. In addition, CRAFT considers M/N priority AND, and two types of functional dependency, namely OR and AND types. Examples such as the Cm* architecture and a fault-tolerant software based on recovery block concept are used to illustrate the approach. The paper also provides a comparison with approaches such as SHARPE, HARP, and FTC",1995,0, 642,Test and measurement,"The author discusses the latest developments in test and measurement technology including: IDDQ testing, its benefits and disadvantages; pay-per-use testers in which a button is inserted in a fully configured system and programmed with test credits which must be purchased in advance; digital oscilloscopes; and Hewlett Packard's entry into the VXIplug and play Systems Alliance",1995,0, 643,Faults and unbalance forces in the switched reluctance machine,"The paper identifies and analyzes a number of severe fault conditions that can occur in the switched reluctance machine, from the electrical and mechanical points of view. It is shown how the currents, torques, and forces may be estimated, and examples are included showing the possibility of large lateral forces on the rotor. The methods used for analysis include finite-element analysis, magnetic circuit models, and experiments on a small machine specially modified for the measurement of forces and magnetization characteristics when the rotor is off-center. Also described is a computer program (PC-SRD dynamic) which is used for simulating operation under fault conditions as well as normal conditions. The paper discusses various electrical configurations of windings and controller circuits, along with methods of fault detection and protective relaying. The paper attempts to cover several analytical and experimental aspects as well as methods of detection and protection",1995,0, 644,Optimal test distributions for software failure cost estimation,"We generalize the input domain based software reliability measures by E.C. Nelson (1973) and by S.N. Weiss and E.J. Weyuker (1988), introducing expected failure costs under the operational distribution as a measure for software unreliability. This approach incorporates in the reliability concept a distinction between different degrees of failure severity. It is shown how to estimate the proposed quantity by means of random testing, using the Importance Sampling technique from Rare Event Simulation. A test input distribution that yields an unbiased estimator with minimum variance is determined. The practical application of the presented method is outlined, and a detailed numerical example is given",1995,0, 645,Short-circuit simulations help quantify wheeling flow,"New England transmission system owners use the results from short-circuit simulations to quantify the transmission wheeling services they provide to deliver power from the Connecticut, Maine, and Vermont Yankee Nuclear Power Plants. The wheeling service of a transmitting utility, defined in terms of MW-miles, is determined with the use of fault currents to reflect the flow of MW entitlements over the New England transmission system. 
The use of commercially available software to automate simulations, perform the MW-miles calculation, and tabulate the results significantly reduces the time and computational effort",1995,0, 646,Industrial perspective on static analysis,"Static analysis within industrial applications provides a means of gaining higher assurance for critical software. This survey notes several problems, such as the lack of adequate standards, difficulty in assessing benefits, validation of the model used and acceptance by regulatory bodies. It concludes by outlining potential solutions and future directions.",1995,0, 647,Predicting behavior of induction motors during service faults and interruptions,"A neural network-based identification for induction motor speed is proposed. The backpropagation neural network technique is used to provide real-time adaptive estimation of the motor speed. The validity and effectiveness of the proposed estimator as well as its sensitivity to parameter variation are verified by digital simulations. The proposed identification performs well under vector control and therefore can lead to an improvement in the performance of speed sensorless drives. The new approach is presented in a way that will contribute to a better understanding of neural network applications to motion control",1995,0, 648,Trends in reliability and test strategies,"Software fails because it contains faults. If we could be certain that our attempts to fix a fault had been effective and that we had introduced no new faults, we would know that we had improved the software's reliability. Unfortunately, no single testing method can be trusted to give accurate results in every circumstance. There are at least two areas of uncertainty in testing. First, we cannot predict when a failure will occur as a result of choosing an input that cannot currently be processed correctly. Second, we do not know what effect fixing a fault will have on the software's reliability. Most of today's reliability models consider the software system to be a black box, using indirect measures of failure by making strong assumptions about the test cases. White-box testing measures, which are direct measures, are generally not used in software-reliability models. The assumption is that the more the program is covered, the more likely that the software is reliable. Hence we must develop new testing strategies. For example, we do not understand very well the notion of stress as it applies to software; what effect the dependence between versions of the same application has on reliability; the relationship between software reliability and white-box testing measures; and how to achieve the ultra-high reliability levels obtained through formal verification and fault-tolerance methods. These are just some of the areas for future research into improved testing strategies",1995,0, 649,Expert system for analysis of electric power system harmonics,"The proliferation of power electronics devices that cause harmonic voltages and currents and the widespread use of harmonics-sensitive electronic equipment are creating a near-epidemic of “power quality” issues that are being addressed by both users and suppliers of electric energy. Numerous types, styles, and capabilities exist in the field of instrumentation and diagnostic equipment for the detection, recording, and analysis of the distorted waveform. This article reports on one effort to employ artificial intelligence (AI) in the form of a Harmonics Analysis Expert System (HAES).
The resulting PC-based software program has captured the knowledge of expert power system engineers in a “rule base” which can be applied to information about the topology, power signature, operating practices, symptoms, equipment, type and ratings, and chronology of system development to diagnose the power system for harmonics-caused problems. The goal is to accomplish the analysis with a less experienced engineer, thereby multiplying the available nationwide pool of expertise. Additionally, repeated use of the system does result in impartation of knowledge from the system to the user. The process of applying this AI approach involves: (1) training the user, (2) measurements (optional), (3) diagnosis, and (4) solution of the power system for harmonics disturbances to acceptable power quality. The emphasis of this paper is on the diagnostic module (AI program) with a usage example presented",1995,0, 650,A system approach to reliability and life-cycle cost of process safety-systems,"An analytic method, PDS, allows the designer to assess the cost effectiveness of computer-based process safety-systems based on a quantification of reliability and life-cycle cost. Using PDS in early system design, configurations and operating philosophies can be identified in which the reliability of field devices and logic control units is balanced from a safety and an economic point of view. When quantifying reliability, the effects are included of fault-tolerant and fault-removal techniques, and of failures due to environmental stresses and failures initiated by humans during engineering and operation. A failure taxonomy allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. The main benefit of this taxonomy is the direct relationship between failure cause and the means used to improve safety-system performance",1995,0, 651,Comparing detection methods for software requirements inspections: a replicated experiment,"Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3×24 partial factorial, randomized experimental design. Forty eight graduate students in computer science participated in the experiment. They were assembled into sixteen, three-person teams. Each team inspected two SRS using some combination of Ad Hoc, Checklist or Scenario methods. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual, but never reported at the collection meeting (meeting loss rate).
The experimental results are that (1) the Scenario method had a higher fault detection rate than either Ad Hoc or Checklist methods, (2) Scenario reviewers were more effective at detecting the faults their scenarios are designed to uncover, and were no less effective at detecting other faults than both Ad Hoc or Checklist reviewers, (3) Checklist reviewers were no more effective than Ad Hoc reviewers, and (4) Collection meetings produced no net improvement in the fault detection rate-meeting gains were offset by meeting losses",1995,0, 652,Are we developers liars or just fools [software managers],"Things were good in the old days. Very few people understood software, so those of us who were cognoscenti could get away with murder. We could explain away cost and schedule overruns by blaming “complexity.” Performance problems were the fault of emerging technology and if management or customers got angry, what could they do? Although nearly everything that defined “the old days” has changed, the author argues that many of the old foibles remain. According to the author, software managers and the people who work for them risk being viewed as liars or fools, a perception that can only harm the software community",1995,0, 653,Signatures and software find high impedance faults,The detection of high-impedance faults on electrical distribution systems has been one of the most persistent and difficult problems facing the electric utility industry. High-impedance faults result from the unwanted contact of a primary circuit conductor with objects or surfaces that limit the current to levels below the detection thresholds of conventional protection devices. The author describes how recent advances in digital technology have enabled a practical solution for the detection of a high percentage of these previously undetectable faults,1995,0, 654,Distribution engineering tool features a flexible framework,"The Electric Power Research Institute (EPRI) has been involved in developing a software package designed to meet the needs of distribution engineers. The result of these efforts is a workstation software package (DEWorkstation) that provides an open architecture environment with integrated data and applications. Analysis, design, and operation modules access database data and exchange data through common functions. The open architecture allows external application modules to be added to the workstation. External measurements such as voltage, current, and temperature may be imported and displayed on the circuit schematic. These imported variables are made available to any application program running within the workstation. This distribution engineering tool is designed to meet the analysis, planning, design, and operation needs of distribution engineering through its available application modules. Modules perform the following types of analyses: power flow, load estimation, line impedance calculation, fault analysis, capacitor placement, phase balancing, and more. External data such as customer information system data and distribution transformer data may be imported. The workstation is designed to provide utilities with a platform suitable for distribution automation evaluations",1995,0, 655,Performability evaluation: where it is and what lies ahead,"The concept of performability emerged from a need to assess a system's ability to perform when performance degrades as a consequence of faults.
After almost 20 years of effort concerning its theory, techniques, and applications, performability evaluation is currently well understood by the many people responsible for its development. On the other hand, the utility of combined performance-dependability measures has yet to be appreciably recognized by the designers of contemporary computer systems. Following a review of what performability means, we discuss its present state with respect to both scientific and engineering contributions. In view of current practice and the potential design applicability of performability evaluation, we then point to some advances that are called for if this potential is indeed to be realized",1995,0, 656,An analysis of client/server outage data,"This paper examines client/server outage data and presents a list of outage causes extracted from the data. The outage causes include hardware, software, operations, and environmental failures, as well as outages due to planned reconfigurations. The study spans all client, server, and network devices in a typical client/server environment. The outage data is used to predict availability in a typical client/server environment and to evaluate various fault-tolerant architectures. The results are stated in terms of user outage minutes and have been validated by comparison with other outage surveys and data in the literature. The major results from the outage data study are: Each client in a client/server environment experiences an average of 12000 minutes (200 hours) annual downtime; Server outages, especially server software failures, are the dominant cause of client/server unavailability; Fault-tolerance in a client/server environment can reduce user outage minutes by nearly an order of magnitude",1995,0, 657,Assessing the effects of communication faults on parallel applications,"This paper addresses the problem of injection of faults in the communication system of disjoint memory parallel computers and presents fault injection results showing that 5% to 30% of the faults injected in the communication subsystem of a commercial parallel computer caused undetected errors that lead the application to generate erroneous results. All these cases correspond to situations in which it would be virtually impossible to detect that the benchmark output was erroneous, as the size of the results file was plausible and no system errors had been detected. This emphasizes the need for fault tolerant techniques in parallel systems in order to achieve confidence in the application results. This is especially true in massively parallel computers, as the probability of occurring faults increase with the number of processing nodes. Moreover, in disjoint memory computers, which is the most popular and scalable parallel architecture, the communication subsystem plays an important role, and is also very prone to errors. CSFI (Communication Software Fault Injector) is a versatile tool to inject communication faults in parallel computers. Faults injected with CSFI directly emulate communication faults and spurious messages generated by non fail-silent nodes by software, allowing the evaluation of the impact of faults in parallel systems and the assessment of fault tolerant techniques. The use of CSFI is nearly transparent to the target application as it only requires minor adaptations. 
Deterministic faults of different nature can be injected without user intervention and fault injection results are collected automatically by CSFI",1995,0, 658,On integrating error detection into a fault diagnosis algorithm for massively parallel computers,"Scalable fault diagnosis is necessary for constructing fault tolerance mechanisms in large massively parallel multiprocessor systems. The diagnosis algorithm must operate efficiently even if the system consists of several thousand processors. We introduce an event-driven, distributed system-level diagnosis algorithm. It uses a small number of messages and is based on a general diagnosis model without the limitation of the number of simultaneously existing faults (an important requirement for massively parallel computers). The algorithm integrates both error detection techniques like 〈I'm alive〉 messages, and built-in hardware mechanisms. The structure of the implemented algorithm is presented and the essential program modules are described. The paper also discusses the use of test results generated by error detection mechanisms for fault localization. Measurement results illustrate the effect of the diagnosis algorithm, in particular the error detection mechanism by 〈I'm alive〉 messages, on the application performance",1995,0, 659,Design and analysis of efficient fault-detecting network membership protocols,"Network membership protocols determine the present nodes and links in a computer network and, therefore, contribute to making the huge amount of general distributed systems and their inherent redundancy available for fault tolerance. Two different protocols, GNL and GLV, are described in detail which solve the problem for a very general set of assumptions not covered by existing solutions of related research areas like network exploration, system level diagnosis and group membership: neither the topology nor a superset of the nodes are known in advance. Nodes do not require any special initial knowledge except a unique relative signature (Leu, 1994) for authentication. Faults may affect any number of nodes and can cause arbitrary behaviour except for breaking cryptographic fault detection mechanisms like digital signatures. The protocols are compared in terms of execution time and global communication costs based on simulation experiments on randomly generated topologies. It is shown that the GLV protocol dominates the GNL protocol clearly for nearly all practically relevant cases",1995,0, 660,Dependability models for iterative software considering correlation between successive inputs,"We consider the dependability of programs of an iterative nature. The dependability of software structures is usually analysed using models that are strongly limited in their realism by the assumptions made to obtain mathematically tractable models and by the lack of experimental data. The assumption of independence between the outcomes of successive executions, which is often false, may lead to significant deviations from the real behaviour of the program under analysis. In this work we present a model in which dependencies among input values of successive iterations are taken into account in studying the dependability of iterative software. We consider also the possibility that repeated, nonfatal failures may together cause mission failure.
We evaluate the effects of these different hypotheses on (1) the probability of completing a fixed-duration mission, and (2) a performability measure",1995,0, 661,A high integrity monitor for signalling interference prevention at 50 Hz,"The design of a high integrity monitor for the prevention of signalling interference is described. It belongs to the family known as Interference Current Monitor Units, or ICMUs, and has to be extremely reliable since a fault would otherwise compromise railway safety; a calculated value of Mean Time Between Wrong Side Failure (MTBWSF) of five thousand million hours was specified and achieved. High levels of out of band currents had to be rejected, in particular up to 2 kA DC. The basic design was kept as simple as possible with only one frequency band being detected. A hardware only solution, i.e. no software, was adopted and triplication of the circuit was used to achieve the MTBWSF specification",1995,0, 662,On efficiently tolerating general failures in autonomous decentralized multiserver systems,We consider a multiserver system consisting of a set of servers that provide some service to a set of clients by accessing some shared objects. The goal is to provide reliable service in spite of client or server failures such that the overhead during normal operating periods is low. We consider a relatively general fault model where a faulty processor can write spurious data for a period of time before it is detected and removed from the system. We first develop a solution in an autonomous physical world of clients and servers. The basic approach is to divide the servers into groups such that each server has some limitations which prevent it from arbitrarily damaging the system. This solution is then mapped to a distributed system of processor and memory units followed by an assessment of its performance,1995,0, 663,Shape classification of flaw indications in three-dimensional ultrasonic images,"The rapid evolution of computing hardware technology now allows sophisticated software techniques to be employed which will aid the NDT data interpreter in the process of defect detection and classification. The paper describes an investigation into the area of three-dimensional ultrasonic image evaluation and, more specifically, the problem of characterising the shape of suspect flaw regions. A backpropagation neural network is used as the classifier for a series of four three-dimensional feature extraction methods which are individually assessed on two particular recognition problems. The optimum technique was determined for inclusion in an evaluation environment called the NDT Workbench, which has been designed for the processing of real data. Two acquired ultrasonic data sets are assessed using the best-performing classification method",1995,0, 664,Asynchronous circuit synthesis with Boolean satisfiability,"Asynchronous circuits are widely used in many real time applications such as digital communication and computer systems. The design of complex asynchronous circuits is a difficult and error-prone task. An adequate synthesis method will significantly simplify the design and reduce errors. In this paper, we present a general and efficient partitioning approach to the synthesis of asynchronous circuits from general Signal Transition Graph (STG) specifications. The method partitions a large signal transition graph into smaller and manageable subgraphs which significantly reduces the complexity of asynchronous circuit synthesis.
Experimental results of our partitioning approach with a large number of practical industrial asynchronous circuit benchmarks are presented. They show that, compared to the existing asynchronous circuit synthesis techniques, this partitioning approach achieves many orders of magnitude of performance improvements in terms of computing time, in addition to the reduced circuit implementation area. This lends itself well to practical asynchronous circuit synthesis from general STG specifications",1995,0, 665,Noise cancellation by a whole-cortex SQUID MEG system,"We report on the noise cancellation performance of several whole-cortex MEG systems operated under diverse noise conditions, ranging from unshielded environments to moderately shielded rooms. The noise cancellation is performed by means of spatial filtering using high order gradiometers (2nd or 3rd), which can be formed optionally either by SQUID electronics firmware or by software. The spatial dependence of the gradiometer responses was measured in an unshielded environment and compared with simulations to yield an estimate of the system common mode magnitudes relative to field, 1st, and 2nd gradients. High order gradiometers were also formed when the system was operated in moderately shielded rooms. The combination of the spatial filtering and room shielding resulted in very high combined noise rejections approaching that of high quality shielded rooms. Examples of noise cancellation under various conditions are shown.",1995,0, 666,Complexity measure evaluation and selection,"A formal model of program complexity developed earlier by the authors is used to derive evaluation criteria for program complexity measures. This is then used to determine which measures are appropriate within a particular application domain. A set of rules for determining feasible measures for a particular application domain are given, and an evaluation model for choosing among alternative feasible measures is presented. This model is used to select measures from the classification trees produced by the empirically guided software development environment of R.W. Selby and A.A. Porter, and early experiments show it to be an effective process",1995,0, 667,Maximum likelihood voting for fault-tolerant software with finite output-space,"When the output space of a multiversion software is finite, several software versions can give identical but incorrect outputs. This paper proposes a maximum likelihood voting (MLV) strategy for multiversion software with finite output-space under the assumption of failure independence. To estimate the correct result, MLV uses the reliability of each software version and determines the most likely correct result. In addition, two enhancements are made to MLV: (1) imposition of a requirement α* that the most likely correct output must have probability of at least α*; and (2) the voter can estimate when it has received one or more outputs from the software versions. If the probability that the estimated result is correct and is at least α*, then it immediately gives this estimated output. Since the voter need not wait for all the outputs before it can estimate, the required mean execution time can be reduced. The numerical results show that these MLV strategies have better performance than consensus voting and majority voting, especially when the variation of software version reliability is large.
Enhancement #2 can appreciably reduce the mean execution time, especially when the software versions have larger execution time standard-deviation",1995,0, 668,Character recognition without segmentation,"A segmentation-free approach to OCR is presented as part of a knowledge-based word interpretation model. It is based on the recognition of subgraphs homeomorphic to previously defined prototypes of characters. Gaps are identified as potential parts of characters by implementing a variant of the notion of relative neighborhood used in computational perception. Each subgraph of strokes that matches a previously defined character prototype is recognized anywhere in the word even if it corresponds to a broken character or to a character touching another one. The characters are detected in the order defined by the matching quality. Each subgraph that is recognized is introduced as a node in a directed net that compiles different alternatives of interpretation of the features in the feature graph. A path in the net represents a consistent succession of characters. A final search for the optimal path under certain criteria gives the best interpretation of the word features. Broken characters are recognized by looking for gaps between features that may be interpreted as part of a character. Touching characters are recognized because the matching allows nonmatched adjacent strokes. The recognition results for over 24,000 printed numeral characters belonging to a USPS database and on some hand-printed words confirmed the method's high robustness level",1995,0, 669,Making process improvement personal,"The personal software process (PSP) is a structured set of process descriptions, measurements and methods that can help engineers improve their personal performance. The PSP provides the forms, scripts and standards that help them estimate and plan their work. It shows them how to define processes and how to measure their quality and productivity. The PSP acknowledges that everyone is unique and that a method that suits one engineer may not suit another. The PSP helps each engineer measure and track his own work so that he can find the method best for him",1995,0, 670,Intelligent alarm method by fuzzy measure and its application to plant abnormality prediction,"Traditional methods for diagnosis/prediction of plant abnormality rely on the acquisition of fragmentary knowledge or rules from experts and the application of the state vectors of plant to portions of the knowledge or rules thus obtained to derive a conclusion. Therefore as the complexity of the plant to be evaluated increases, the degree of uncertainty of knowledge obtained from experts increases and thus the accuracy of evaluation decreases. To solve this problem, this paper proposes an intelligent alarm method which uses a fuzzy integral model based on fuzzy measure that can provide an accurate model of human subjective evaluation mechanism. The intelligent alarm method is unique in that it can quantify the uncertainty of the information to be evaluated and of human being who evaluates. To achieve these objectives, this paper provides a simulation of the expert's overall plant evaluation by using an “interpretation of plant state based on expert knowledge” and an “overall judgement of plant state which amplifies the essence of the state interpretation information”.
The method was applied to the prediction of abnormality for a drying process of a chemical plant to demonstrate that abnormality prediction by a simulation method is possible, that even for a plant without abnormality the method can provide estimate of the potential of abnormality, and that the method is useful in alleviating the burden of the monitoring and checking work for plant",1995,0, 671,The automatic generation of load test suites and the assessment of the resulting software,"Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain provided operational profile data can either be collected or estimated. These algorithms have been used successfully to perform load testing for several real industrial software systems. Experience generating test suites for five such systems is presented. Early experience with the algorithms indicate that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases. A domain-based reliability measure is applied to systems after the load testing algorithms have been used to generate test data. Data are presented for the same five industrial telecommunications systems in order to track the reliability as a function of the degree of system degradation experienced",1995,0, 672,Localizing multiple faults in a protocol implementation,"
",1995,0, 673,Evaluation of software dependability based on stability test data,"The paper discusses a measurement-based approach to dependability evaluation of fault-tolerant, real-time software systems based on failure data collected from stability tests of an air traffic control system under development. Several dependability analysis techniques are illustrated with the data: parameter estimation, availability modeling of software from the task level, applications of the parameter estimation and model evaluation in assessing availability, identifying key problem areas, and predicting required test duration for achieving desired levels of availability and quantification of relationships between software size, the number of faults, and failure rate for a software unit. Although most discussion is focused on a typical subsystem, Sector Suite, the discussed methodology is applicable to other subsystems and the system. The study demonstrates a promising approach to measuring and assessing software availability during the development phase, which has been increasingly demanded by the project management of developing large, critical systems.",1995,0, 674,On-line error monitoring for several data structures,"We present several examples of programs which efficiently monitor the answers from queries performed on data structures to determine if any errors are present. Our paper includes the first efficient on-line error monitor for a data structure designed to perform nearest neighbor queries. Applications of nearest neighbor queries are extensive and include learning, categorization, speech processing, and data compression. Our paper also discusses on-line error monitors for priority queues and splittable priority queues. On-line error monitors immediately detect if an error is present in the answer to a query. An error monitor which is not on-line may delay the time of detection until a later query is being processed which may allow the error to propagate or may cause irreversible state changes. On-line monitors can allow a more rapid and accurate response to an error.",1995,0, 675,Implicit signature checking,"Proposes a control flow checking method that assigns unique initial signatures to each basic block in a program by using the block's start address. Using this strategy, implicit signature checking points are obtained at the beginning of each basic block, which results in a short error detection latency (2-5 instructions). Justifying signatures are embedded at each branch instruction, and a watchdog timer is used to detect the absence of a signature checking point. The method does not require the building of a program flow graph and it handles jumps to destinations that are not fixed at compile/link-time, e.g. subroutine calls using function pointers in the C language. This paper includes a generalized description of the control flow checking method, as well as a description and evaluation of an implementation of the method.",1995,0, 676,Fault-tolerance for off-the-shelf applications and hardware,"The concept of middleware provides a transparent way to augment and change the characteristics of a service provider as seen from a client. Fault tolerant policies are ideal candidates for middleware implementation. We have defined and implemented operating system based middleware support that provides the power and flexibility needed by diverse fault tolerant policies.
This mechanism, called the sentry, has been built into the UNIX 4.3 BSD operating system server running on a Mach 3.0 kernel. To demonstrate the effectiveness of the mechanism, several policies have been implemented using sentries, including checkpointing and journaling. The implementation shows that complex fault tolerant policies can be efficiently and transparently implemented as middleware. Performance overhead of input journaling is less than 5% and application suspension during the checkpoint is typically under 10 seconds in length. A standard hard disk is used to store journal and checkpoint information with dedicated storage requirements of less than 20 MB.",1995,0, 677,The multi-element mercuric iodide detector array with computer controlled miniaturized electronics for EXAFS,"Construction of a 100-element HgI2 detector array, with miniaturized electronics, and software developed for synchrotron applications in the 5 keV to 35 keV region has been completed. Recently, extended X-ray absorption fine structure (EXAFS) data on dilute (~1 mM) metallo-protein samples were obtained with up to seventy-five elements of the system installed. The data quality obtained is excellent and shows that the detector is quite competitive as compared to commercially available systems. The system represents the largest detector array ever developed for high resolution, high count rate X-ray synchrotron applications. It also represents the first development and demonstration of high-density miniaturized spectroscopy electronics with this high level of performance. Lastly, the integration of the whole system into an automated computer-controlled environment represents a major advancement in the user interface for XAS measurements. These experiments clearly demonstrate that the HgI2 system, with the miniaturized electronics and associated computer control, functions well. In addition it shows that the new system provides superior ease of use and functionality, and that data quality is as good as or better than with state-of-the-art cryogenically cooled Ge systems",1995,0, 678,Using a neural network to predict test case effectiveness,"Test cases based on command language or program language descriptions have been generated automatically for at least two decades. More recently, domain based testing (DBT) was proposed as an alternative method. Automated test data generation decreases test generation time and cost, but we must evaluate its effectiveness. We report on an experiment with a neural network as a classifier to learn about the system under test and to predict the fault exposure capability of newly generated test cases. The network is trained on test case metric input data and fault severity level output parameters. Results show that a neural net can be an effective approach to test case effectiveness prediction. The neural net formalizes and objectively evaluates some of the testing folklore and rules-of-thumb that are system specific and often require many years of testing experience",1995,0, 679,SEL's software process improvement program,"We select candidates for process change on the basis of quantified Software Engineering Laboratory (SEL) experiences and clearly defined goals for the software. After we select the changes, we provide training and formulate experiment plans. We then apply the new process to one or more production projects and take detailed measurements. We assess process success by comparing these measures with the continually evolving baseline.
Based upon the results of the analysis, we adopt, discard, or revise the process",1995,0, 680,Development of computer-based measurements and their application to PD pattern analysis,"After referring briefly to the earlier developments of computer-based PD measurements in the 1970s, the paper outlines how the introduction of new technology has resulted in the design and production of a number of digital systems for use in test laboratories and on site. In order to avoid overgeneralization, the basis of the hardware required for detection and recording of the PD pulses is described by reference to a particular system, it being understood that many detailed variations are possible depending on the application. The paper describes a number of parameters that can be calculated by software and used to characterize the PD behavior. These include the IEC-270 integrated quantities, statistical moments and other fingerprints for recognizing the PD patterns within the power frequency cycle. Examples are given of the patterns obtained on equipment and samples. These are necessarily selective and are intended to indicate how the techniques may be applied. It is concluded that these new methods will become valuable in procedures for assessing the condition of electrical insulation, in particular faults or defects, but must be used in conjunction with existing techniques until improved confidence is achieved in interpreting results, especially in respect of the various phase-resolved distributions and calibration procedures",1995,0, 681,Automated X-ray inspection of chip-to-chip interconnections of Si-on-Si MCM's,"The silicon-on-silicon (Si-on-Si) multichip module (MCM) is one of the newest system integration technologies that has the potential for high functional integration, enhanced performance, and cost effectiveness. Aside from functional tests, X-ray inspection is required to ensure good quality chip-to-chip interconnections during the fabrication process. This paper presents an automated inspection system that is capable of detecting defects such as “swollen” solders, misaligned solders, missing solders, solder robbing, and solder bridging in a semi-finished MCM. The semi-finished MCM is in wafer form and has completed the following operations: stencil printing, device placing, and solder reflowing. The defect detection methodology is detailed. Over a test set of 54 sample images of wafer tiles, 100% inspection accuracy was obtained. This system has the potential to automate the manual visual inspection operation which is tedious, slow, and error-prone",1995,0, 682,The impact of software enhancement on software reliability,"This paper exploits the `relationship between functional-enhancement (FE) activity and the distribution of software defects' to develop a discriminant model that identifies high-risk program modules. `FE activity' and `defect data captured during the FE of a commercial programming language processing utility' serve to fit and test the predictive quality of this model. The model misclassification rates demonstrate that FE information is sufficient for developing discriminant models identifying high-risk program modules. Consideration of the misclassified functions leads us to suggest that: (1) the level of routines in the calling hierarchy introduces variation in defect distribution; and (2) the impact of a defect indicates the risk that it presents.
Thus consideration of defect impact can improve the discriminant results",1995,0, 683,A software system evaluation framework,"The objective of a software system evaluation framework is to assess the quality and sophistication of software from different points of view. The framework explicitly links process and product aspects with the ultimate utility of systems and it provides a basic set of attributes to characterize the important dimensions of software systems. We describe such a framework and its levels of categorization, and we analyze examples of project classifications. Then we draw some conclusions and present ideas for further research. This evaluation framework assesses a software system's quality by consolidating the viewpoints of producers, operators, users, managers, and stakeholders",1995,0, 684,EYE: a tool for measuring the defect sensitivity of IC layout,"The development of new aggressively target IC processes brings new challenges to the design and fabrication of high yielding ICs. As much as 50% of yield loss in digital ICs can be attributed to the metallisation stages of fabrication. This can be a particular problem when new process technologies such as multilevel interconnect are being introduced. This paper addresses the problem of design for manufacture of IC layout by presenting a tool that enables the measurement of critical area and hence layout defect sensitivity. The EYE (Edinburgh Yield Estimator) tool has two main uses, yield prediction and the comparison of defect sensitivities of place and routing algorithms. Its use as a tool to measure and optimise the defect sensitivity and hence the manufacturability of the IC layout produced by automated routing algorithms will be explored in this paper",1995,0, 685,Inside front cover,"We present the first closed loop image segmentation system which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc. The segmentation problem is formulated as an optimization problem and the genetic algorithm efficiently searches the hyperspace of segmentation parameter combinations to determine the parameter set which maximizes the segmentation quality criteria. The goals of our adaptive image segmentation system are to provide continuous adaptation to normal environmental variations, to exhibit learning capabilities, and to provide robust performance when interacting with a dynamic environment. We present experimental results which demonstrate learning and the ability to adapt the segmentation performance in outdoor color imagery.",1995,0, 686,Predictions for increasing confidence in the reliability of safety critical software,We show how residual faults and failures and time to next failure can be used in combination to assist in assuring the safety of the software in safety critical systems like the NASA Space Shuttle Primary Avionics Software System,1995,0, 687,Using speculative execution for fault tolerance in a real-time system,"Achieving fault-tolerance using a primary-backup approach involves overhead of recovery such as activating the backup and propagating execution states, which may affect the timeliness properties of real-time systems. We propose a semi-passive architecture for fault-tolerance and show that speculative execution can enhance overall performance and hence shorten the recovery time in the presence of failure. 
The compiler is used to detect speculative execution, to insert check-points and to construct the updated messages. Simulation results are reported to show the contribution of speculative execution under the proposed architecture",1995,0, 688,Multivariate assessment of complex software systems: a comparative study,"Assessment of large complex systems requires robust modeling techniques. Multivariate models can be misleading if the underlying metrics are highly correlated. Munson and Khoshgoftaar propose using principal components analysis to avoid such problems. Even though many have used the technique, the advantages have not previously been empirically demonstrated, especially for large complex systems. Our case study illustrates that principal components analysis can substantially improve the predictive quality of a software quality model. This paper presents a case study of a sample of modules representing about 1.3 million lines of code, taken from a much larger real-time telecommunications system. This study used discriminant analysis for classification of fault-prone modules, based on measurements of software design attributes and categorical variables indicating new, changed, and reused modules. Quality of fit and predictive quality were evaluated",1995,0, 689,A real time software-only H.261 codec,"Video and audio conferencing over networks is becoming increasingly popular due to the availability of video and audio I/O as standard equipment on many computer systems. So far, many algorithms have concentrated on playback only capability. This generally results in unacceptable real-time performance with respect to latency and encoder complexity. We describe a software-only system that allows full duplex video communication. For our analysis and implementation we chose a DCT based method that uses motion estimation and is modelled on the CCITT H.261 standard. We discuss the algorithm, followed by an analysis of the computational requirements for each major block. The results presented show the effect of computational simplifications on signal to noise ratio and image quality. We also examine the processing needs for full resolution coding and project when this will become available",1995,0, 690,Fault emulation: a new approach to fault grading,"In this paper, we propose a method of using an FPGA-based emulation system for fault grading. The real-time simulation capability of a hardware emulator could significantly improve the run-time of fault grading, which is one of the most resource-intensive tasks in the design process. A serial fault emulation algorithm is employed and enhanced by two speed-up techniques. First, a set of independent faults can be emulated in parallel. Second, simultaneous injection of multiple dependent faults is also possible by adding extra supporting circuitry. Because the reconfiguration time spent on mapping the numerous faulty circuits into the FPGA boards could be the bottleneck of the whole process, using extra logic for injecting a large number of faults per configuration can reduce the number of reconfigurations, and thus, significantly improve the efficiency. Some modeling issues that are unique in the fault emulation environment are also addressed.
The performance estimation indicates that this approach could be several orders of magnitude faster than the existing software approaches for large designs.",1995,0, 691,Implementation and demonstration of a multiple model adaptive estimation failure detection system for the F-16,"A multiple model adaptive estimation (MMAE) algorithm is implemented with the fully nonlinear six-degree-of-motion simulation rapid-prototyping facility (SRF) VISTA F-16 software simulation tool. The algorithm is composed of a bank of Kalman filters modeled to match particular hypotheses of the real world. Each presumes a single failure in one of the flight-critical actuators, or sensors, and one presumes no failure. For dual failures, a hierarchical structure is used to keep the number of on-line filters to a minimum. The algorithm is demonstrated to be capable of identifying flight-critical aircraft actuator and sensor failures at a low dynamic pressure (20,000 ft, .4 Mach). Research includes single and dual complete failures. Tuning methods for accommodating model mismatch, including addition of discrete dynamics pseudonoise and measurement pseudonoise, are discussed and demonstrated. Scalar residuals within each filter are also examined and characterized for possible use as an additional failure declaration voter. An investigation of algorithm performance off the nominal design conditions is accomplished as a first step towards full flight envelope coverage",1995,0, 692,In-service signal quality estimation for TDMA cellular systems,"In-service interference plus noise power (I+N) and signal-to-interference plus noise power S/(I+N) estimation methods are examined for TDMA cellular systems. A simple (I+N) estimator is developed whose accuracy depends on the channel and symbol estimate error statistics. Improved (I+N) and S/(I+N) estimators are developed whose accuracy depends only on the symbol error statistics. The proposed estimators are evaluated through software simulation with an IS-54 frame structure. For high speed mobiles, it is demonstrated that S/(I+N) can be estimated to within 2 dB in less than a second",1995,0, 693,Computer aided partial discharge testing of electrical motors and capacitors,"The present methods for testing of low voltage equipment, in particular electrical motors and capacitors are discussed. It is demonstrated that those tests realising high voltage overstress of the object tested cannot guarantee reliable performance of solid insulation throughout the equipment life. Partial discharge (PD) measurement as an alternative is a true nondestructive test method providing detailed information about the quality of manufacturing of the equipment tested and the selection of suitable insulation materials. As mentioned above, two different types of test objects have been investigated. The measurements have been performed by means of a computerised partial discharge measuring system with particular software features which simplify its application. Based on experimental results it is demonstrated that PD tests are of great importance for quality assurance of electrical motors and capacitors. Moreover it seems to be possible to predict the quality of the final product by precise PD investigations and measurements of the materials and components used",1995,0, 694,Military requirements constrain COTS utilization,"In the past, our military systems have required technology not readily available in the industrial and commercial areas. This is no longer the case.
Commercial products and practices offer the potential of reduced costs, lower risk and faster technology infusion. In order to achieve the potential benefits, our current methods of acquisition need change but it is important to review the background behind the institution of military standards and to assess their continued validity and to understand why. Also, before we rush to the conclusion that military specifications and standards are preferred for avionics over commercial products and practices, it is useful to review the specific requirements. In the past, in the absence of military specifications and standards, situations arose where items were developed using existing industrial and commercial products, processes and practices that at times led to catastrophic results or were inadequate to meet the threat. It is comforting that those who demand change show wisdom by asking that change be conducted with care and the impact understood. However, some use hyperbole in condemning the reliance on military specifications and standards and insist that their elimination is a good thing. In the domain of high performance aircraft avionics the potential for change is likely to be limited without an in-depth comprehension of the military environment, requirements and the capability of commercial products to satisfy them",1995,0, 695,Traffic Alert and Collision Avoidance System (TCAS) transition program (TTP): a status update,"In response to United States (U.S.) law mandating installation, by December 31, 1993, of TCAS II on all air carrier aircraft with more than thirty passenger seats operating in U.S. Airspace, the first in-service flight of a certified TCAS II production unit occurred in June 1990. Currently, worldwide installations of TCAS II in air carrier and business aircraft are estimated at over 7,000 units. Total accumulated flight operations exceed 50 million hours, with over 1.5 million additional hours accumulated each month. This paper provides a status update on the TCAS Transition Program (TTP) which was established to monitor the operational performance and assist with the introduction of TCAS II avionics into the National Airspace System (NAS). The update is provided by means of a qualitative performance comparison between the software logic initially fielded (Version 6.02) and the latest version (Version 6.04A) developed to address various flight performance dynamics and interface issues between TCAS and the air traffic control (ATC) system. This latest software logic has been in operation in all TCAS II-equipped aircraft since January 1, 1995",1995,0, 696,GPS ground antenna and monitor station upgrades: software requirements analysis,"This paper discusses the software aspects of the Global Positioning System (GPS) remote site modernization study, specifically, the software requirements analysis. At the beginning of the software requirements analysis phase, a qualitative trade-off study was performed to determine if a structured analysis or an object-oriented analysis (OOA) approach would be followed. The latter was chosen because it offers a number of advantages over the life-cycle of a software development project. This paper outlines these advantages. In conjunction with this methodology study, a study of Computer-aided Software Engineering (CASE) tools was performed to ascertain if available software requirements analysis and design tools would aid the software development process, in general, and requirements analysis, in particular.
The results of this study are also summarized in this paper. The paper also discusses the experiences of Draper Laboratory software engineers in using OOA to perform software requirements analysis for a large software project (over 100,000 lines of Ada source code estimated), and how this approach was followed while specifying requirements using the Software Requirements Specification Data Item Description (DID) that accompanies DOD-STD-2167A",1995,0, 697,Assessing the capabilities of military software maintenance organizations,"The objectives of the assessment of military avionics software maintenance organizations (SMOs) are to determine the current and future software maintenance capabilities at a specific organization, evaluate the software products and processes, review the organizational infrastructure, identify areas of technical, programmatic and cost risk, estimate the costs associated with maintenance, and identify actions which could reasonably be taken to improve business efficiency and software quality. This paper focuses primarily on our process for assessing SMOs and then highlights our findings by identifying a number of issues that are pervasive across military software maintenance organizations. These assessments provide us with practical insight into the world of software maintenance and allow us to provide direction so that SMOs can plan and be prepared for more legacy avionics software in the future",1995,0, 698,A distributed safety-critical system for real-time train control,An architecture and methodology for executing a train control application in an ultra-safe manner is presented in this paper. Prior work in advanced train control systems are summarized along with their assumptions and drawbacks. A flexible architecture that allows fault-tolerant and fail-safe operation is presented for a distributed control system. A safety assurance technique which detects errors in software and hardware for simplex systems is presented in this paper. Performance results important to real-time control are obtained from modeling and simulating the architecture. An experimental prototype implementing the architecture and safety assurance technique is described together with experimental results,1995,0, 699,Application of virtual reality devices to the quantitative assessment of manual assembly forces in a factory environment,"The paper describes a system which has been developed to meet an industrial requirement for performing in-situ quantitative measurements of manual forces and postures applied by human assembly line operators. Virtual reality devices are shown to offer a valuable means of meeting the necessary sensory requirements. A set of force transducers incorporated into a pair of Datacq gloves together with a force triggered image capturing facility all controlled by a PC compatible, comprise the main hardware elements of the basic system. Comprehensive software provides online and playback facilities, allowing nonmedical staff to assess the assembly process, including maximum forces applied and a measure of the effort or work done per assembly cycle. Invaluable physiological information is made available for post processing and further medical analysis as well as for database archiving purposes. The instrument is seen to have generic application for ergonomic design of manufacturing processes involving humans, in the quest to avoid repetitive health problems such as strain injury in the upper body. 
The system can also serve as a tool to monitor the design of new product types as well as setting ergonomically relevant quality targets for subcontractor supplied components",1995,0, 700,Networked analysis systems,"Summary form only given. In-line and off-line metrology measurement and interpretation are a critical, but time consuming part of yield learning. Yield learning occurs during pilot line development and early volume manufacture. Networking analysis systems and whole wafer analysis tools can greatly reduce the cycle time associated with metrology during yield learning. Although in-line metrology tools have whole wafer capability, other tools such as scanning electron microscopes (equipped with energy dispersive spectroscopy: SEM/EDS) for defect review are recent developments. These SEM/EDS defect review tools (DRT) have coordinate locating stages and software capable of reading wafer defect maps from optical defect detection systems. In this paper, we discuss networked analysis systems and whole wafer tools for in/off-line analysis. The issues associated with data interpretation and management become greater with each new technology generation. We also discuss new network capabilities such as presorting electrical defects into similar types before physical characterization using ""model yield learning"".",1995,0, 701,Electrical two and three dimensional modelling of high-speed board to board interconnections,"In today's high speed electronic systems, board-to-board connectors can contribute to signal degradation, reflections, and crosstalk. Methods exist for using computer modeling to predict the electrical behavior of interconnections from a knowledge of their materials and construction. A typical modeling process utilizes a combination of 2 or 3 dimensional electromagnetic field solvers to derive an approximate circuit model of the connector. Predictions of connector electrical performance are then obtained by plugging this connector model into a model test circuit and simulating the combination with circuit simulation software such as SPICE. As an example a six-row 2 mm-grid backplane connector is discussed in this paper",1995,0, 702,Distributed off-line testing of parallel systems,This paper deals with off-line testing of parallel systems. The applicability of distributed self-diagnosis algorithms is investigated. Both static and adaptive testing assignment strategies are studied and evaluated using a queueing network model of the system under test. Key results include testing latency and message load of each algorithm,1995,0, 703,Testability analysis of co-designed systems,"This paper focus on the testability analysis of co-designed data-flow specifications. The co-designed specification level implies a high level testability analysis, independent of the implementation choices. With respect to testability, the difficulties of generating test sets, detecting and diagnosing faults are discussed and estimates are proposed. A hardware modelling, based on information transfers and called the Information Transfer Graph, is adapted to the specifications. A real case study supplied by Aerospatiale illustrates all the evaluations",1995,0, 704,A CSIC implementation with POCSAG decoder and microcontroller for paging applications,"This paper presents a CSIC (Customer Specification Integrated Circuit) implementation, which includes a 512/1200/2400 bps POCSAG decoder, PDI2400 and MC68HC05 changed by PANTECH. 
It can receive all the data with the rate of 512/1200/2400 bps of a single clock of 76.8 KHz. It is designed to have maximum 2 own frames for service enhancement. To improve receiver quality, a preamble detection considering frequency tolerance and a SCW (Synchronization Code Word) detection at every 4 bit is suggested. Also we consider an error correction of address and message up to 2 bits. Furthermore, it is possible with proposed PF (Preamble Frequency) error to achieve a battery life increase due to the turn-off of RF circuits when the preamble signal is detected with noises. The chip is designed using VHDL code from PDI2400 micro-architecture level. It is verified with VHDL simulation software of PowerView. Its logic diagrams are synthesized with VHDL synthesis software of PowerView. Proposed decoder and MC68HC05 CPU of MOTOROLA are integrated with about 88000 transistors by using 1.0 μm HCMOS process and named MC68HC05PD6. It is proved that the wrong detection numbers of preamble of noises are significantly reduced in the pager system that uses our chip through the real field test. The system receiving performance is improved by 20% of average, compared with other existing systems",1995,0, 705,A defect identification algorithm for sequential and parallel computers,"The comparison of images containing a single object of interest, where one of them contains the model object, is frequently used for defect identification. This is often a problem of interest to industrial applications. This paper introduces sequential and parallel versions of an algorithm that compares original (reference) and processed images in the time and frequency domains. This comparison combined with histogram data from both domains can identify differences in the images. Extracted data is also compared to database data in an attempt to pinpoint specific changes, such as rotations, translations, defects, etc. The first application considered here is recognition of an object which has been translated and/or rotated. For illustration purposes, an original image of a computer-simulated centered needle is compared to a second image of the hypodermic needle in a different position. This algorithm will determine if both images contain the same object regardless of position. The second application identifies changes (defects) in the needle regardless of position and reports the quality of the needle. This quality will be a quantitative measurement depending on error calculations in the spatial and frequency domains and comparisons to database data. Finally, the performance of sequential and parallel versions of the algorithm for a Sun SPARCstation and an experimental in-house built parallel DSP computer with eight TMS320C40 processors is included. The results show that significant speedup can be achieved through incorporation of parallel processing techniques",1995,0, 706,On the analysis of subdomain testing strategies,Weyuker and Jeng (1991) have investigated the conditions that affect the performance of partition testing and have compared analytically the fault-detecting ability of partition testing and random testing. Chen and Yu (1994) have generalized some of Weyuker and Jeng's results. We extend the analysis to subdomain testing in which subdomains may overlap. We derive several results for a special case and demonstrate a technique to extend some of our results to more general cases. 
We believe that this technique should be very useful in further investigating the behaviour of subdomain testing,1995,0, 707,An integrated approach for criticality prediction,"The paper provides insight in techniques for criticality prediction as they are applied within the development of Alcatel 1000 S12 switching software. The primary goal is to identify critical components and to make failure predictions as early as possible during the life cycle and hence reduce managerial risk combined with too early or too late release. The approach is integrated in the development process and starts with complexity based criticality prediction of modules. Modules identified as overly complex are given additional tests or review efforts. Release time prediction and field performance prediction are both based on tailored ENHPP reliability models. For the complete approach of criticality prediction, recent data from the development of a switching system with around 2 MLOC is provided. The switching system is currently in operational use, thus allowing for validation and tuning of the prediction models",1995,0, 708,Detection of fault-prone program modules in a very large telecommunications system,"Telecommunications software is known for its high reliability. Society has become so accustomed to reliable telecommunications, that failures can cause major disruptions. This is an experience report on application of discriminant analysis based on 20 static software product metrics, to identify fault prone modules in a large telecommunications system, so that reliability may be improved. We analyzed a sample of 2000 modules representing about 1.3 million lines of code, drawn from a much larger system. Sample modules were randomly divided into a fit data set and a test data set. We simulated utilization of the fitted model with the test data set. We found that identifying new modules and changed modules were significant components of the discriminant model, and improved its performance. The results demonstrate that data on module reuse is a valuable input to quality models and that discriminant analysis can be a useful tool in early identification of fault prone software modules in large telecommunications systems. Model results could be used to identify those modules that would probably benefit from extra attention, and thus, reduce the risk of unexpected problems with those modules",1995,0, 709,Analysis of review's effectiveness based on software metrics,"The paper statistically analyzes the relationship between review and software quality and the relationship between review and productivity, utilizing software metrics on 36 actual projects executed in the OMRON Corporation from 1992 to 1994. Firstly, by examining the relationship between review effort and field quality (the number of faults after delivery) of each project, and the relationship between the number of faults detected in review and field quality of each project, we reasoned that: (1) greater review effort helps to increase field quality (decrease the number of faults after delivery); (2) source code review is more effective in order to increase field quality than design review; (3) if more than 10% of total design and programming effort is spent on review, one can achieve a quite stable field quality. We noticed that no relevant effects were recognized in productivity (LOC/staff month) with respect to a review rate of up to 20%.
As a result of the analysis above, we recommended that 15% of review effort is a suitable percentage to use as a guideline for our software project management",1995,0, 710,On the correlation between code coverage and software reliability,"We report experiments conducted to investigate the correlation between code coverage and software reliability. Black-, decision-, and all-use-coverage measures were used. Reliability was estimated to be the probability of no failure over the given input domain defined by an operational profile. Four of the five programs were selected from a set of Unix utilities. These utilities range in size from 121 to 8857 lines of code, artificial faults were seeded manually using a fault seeding algorithm. Test data was generated randomly using a variety of operational profiles for each program. One program was selected from a suite of outer space applications. Faults seeded into this program were obtained from the faults discovered during the integration testing phase of the application. Test cases were generated randomly using the operational profile for the space application. Data obtained was graphed and analyzed to observe the relationship between code coverage and reliability. In all cases it was observed that an increase in reliability is accompanied by an increase in at least one code coverage measure. It was also observed that a decrease in reliability is accompanied by a decrease in at least one code coverage measure. Statistical correlations between coverage and reliability were found to vary between -0.1 and 0.91 for the shortest two of the five programs considered; for the remaining three programs the correlations varied from 0.89 to 0.99",1995,0, 711,Hyper-geometric distribution software reliability growth model with imperfect debugging,"Debugging actions during the test/debug phase of software development are not always performed perfectly. That is, not all the software faults detected are perfectly removed without introducing new faults. This phenomenon is called imperfect debugging. The hyper-geometric distribution software reliability growth model (HGDM) was developed for estimating the number of software faults initially in a program. We propose an extended model based on the HGDM incorporating the notion of imperfect debugging",1995,0, 712,A new method for increasing the reliability of multiversion software systems using software breeding,"The paper proposes a new method for increasing the reliability of multiversion software systems. The software using software breeding is more reliable than one using N version programming. But software breeding is not suitable for real time application because program versions are executed several times for detecting faulty modules. In the proposed method, the detection of faulty modules is performed in the background when program versions fail and the software continues the execution in the foreground. When the detection of faulty modules is finished, the combination of module versions in program versions are changed. Ten simulations, each of which executed program versions 106 times, were performed to analyse the effectiveness of the new method. This resulted in the reduction of the number of failures to range from 33% to 76% with an average of 56%",1995,0, 713,Performability modeling of N version programming technique,"The paper presents a detailed, but efficiently solvable model of the N version programming for evaluating reliability and performability over a mission period. 
Employing a hierarchical decomposition we reduce the model complexity and provide a modeling framework for evaluating the NVP failure and execution time behavior and the operational environment, as well. The failure and execution rates are treated as random variables and the operational profile is analyzed on the microstructure level, looking at probabilities of occurrence, failure and execution rates for each partition of input space. The reliability submodel that represents per run behavior of NVP, includes both functional failures and timing failures thus resulting in system reliability which accounts for performance requirements. The successive runs are modeled by the performance submodel, that represents the iterative nature of the software execution. Combining the results of both submodels, we assess the performability over a mission period that represents the collective effect of multiple system attributes on the NVP effectiveness",1995,0, 714,Predicting software's minimum-time-to-hazard and mean-time-to-hazard for rare input events,"The paper turns the concept of input distributions on its head to exploit inverse input distributions. Although such distributions are not always true mathematical inverses, they do capture an intuitive property: inputs that have high frequencies in the original distribution will have low frequencies in the inverse distribution, and vice versa. We can use the inverse distribution in several different quality checks during development. We provide a fault based (fault injection) method to determine minimum time to failure and mean time to failure for software systems under normal operational and non normal operational conditions (meaning rare but legal events). In our calculations, we consider how various programmer faults, design errors, and incoming hardware failures are expected to impact the observability of the software system",1995,0, 715,Towards a unified approach to the testability of co-designed systems,"The paper deals with the testability analysis of dataflow co designed systems. As a data flow specification is independent from the hardware/software implementation choice, a uniform approach may be used to evaluate the specification with respect to testability. The difficulty of generating test sets, and of detecting and diagnosing faults is discussed and estimated. We chose to use an existing hardware testability model which is suitable for data flow software specification; this model, based on information transfers, is called the Information Transfer Graph. A real case study supplied by Aerospatiale illustrates the proposed testability estimates",1995,0, 716,Parameter estimation of hyper-geometric distribution software reliability growth model by genetic algorithms,"Usually, parameters in software reliability growth models are not known, and they must be estimated by using observed failure data. Several estimation methods have been proposed, but most of them have restrictions such as the existence of derivatives on evaluation functions. On the other hand, genetic algorithms (GA) provide us with robust optimization methods in many fields. We apply GA to the parameter estimation of the hyper-geometric distribution software reliability growth model. 
Experimental result shows that GA is effective in the parameter estimation and removes restrictions from software reliability growth models",1995,0, 717,ROBUST: a next generation software reliability engineering tool,"In this paper, we propose an approach for linking the isolated research results together and describe a tool supporting the incorporation of the various existing modeling techniques. The new tool, named ROBUST, is based on recent research results validated using actual data. It employs knowledge of static software complexity metrics, dynamic failure data and test coverages for software reliability estimation and prediction at different software development phases",1995,0, 718,The current analysis program-a software tool for rotor fault detection in three phase induction motors,This paper shows that it is possible to monitor the supply current to an induction motor during the motors acceleration period and be able to detect the nonstationary frequency components previously monitored whilst stationary during the motors steady state operation. Results show that the increase in component amplitude observed with a faulty rotor when monitoring in steady state under full load conditions can clearly be observed when monitoring the transient signal under no load conditions. Using a signal processing technique known as wavelet decomposition it is shown that the sidebands may be tracked during the transient period. This technique has been simplified and implemented along with a data acquisition system to form a complete current transient health monitoring system which determines the health of the motor from its no load supply current transient signal. The software which controls both the acquisition and analysis part of the monitoring system has been developed so that the operator has access to all main parameters used within the analysis,1995,0, 719,A DSP based scheme for the on-line monitoring of performance and parameter determination of three phase induction motors,"The paper discusses developments, and practical implementation of a novel digital signal processor (DSP) based system for the online real time monitoring of performance and parameter determination of three-phase induction motors. A software based algorithm is used to measure the performance and calculate the parameters of the three-phase induction motor in real-time, without the need to connect a mechanical load or drive to the machine's rotor shaft. The new algorithm takes into account the machine parameter variations due to changes in electric supply, skin effect, mechanical load and any fault in the overall system, so the system can be used for condition monitoring of any electrical drive system and can be permanently or temporarily connected to such a system for diagnosis",1995,0, 720,Detection of faults in induction motors using artificial neural networks,"When faults begin to develop, the dynamic processes in an induction motor change and this is reflected in the shape of the vibration spectrum. Thus, one can detect and identify machine faults by analyzing the vibration spectrum by examining whether any characteristics frequencies appear on the spectra. Here, the authors describe how fault detection and identification using such a vibration method on a induction motor was accomplished using a simple neural network program. Two machine faults, of bearing wear and unbalanced supply fault, are simulated and tested. 
Acceptable results are obtained and faults are classified accordingly",1995,0, 721,Computerised fault diagnosis of induction motor drives,"In spite of the fact that the probability of breakdowns of 3-phase induction motors is very low, 2 to 3% per annum, monitoring their operating condition is essential, particularly when they are working in sophisticated workshops (for example automated production lines). In many industries lots of machines depend on mutual operation, and the cost of unexpected breakdowns is very high. In such cases the breakdowns of machines in operation usually involve higher losses in the production process than the cost of their repairs or even the initial costs of the machines themselves. In addition to the faults, i.e. short circuits, winding failures, bearing seizures, causing an immediate stop in the production process often has safety implications and may even expose human beings to danger. Both technical and economical considerations suggested the development of a new computer-based diagnostic system to predict the dates and places of expected faults by condition monitoring. The computer-based measurement and analysis system was designed and tested under a cooperation scheme between the Middlesex University, London and the University of Miskolc, Hungary. This system, after analysing various quantities of the drive unit, predicts the possible breakdowns, their expected dates and places. The authors discuss the criteria for a new diagnostic system, and then describe the hardware and software of the new system",1995,0, 722,Hybrid systems long term simulation,"The paper deals with the estimation of the long term performances of “hybrid systems” for electric energy production. Such systems integrate conveniently different kind of generators and usually they are constituted by traditional electric energy plants as diesel generating set combined with renewable energy generators. A software for the energetic/economic evaluation of the performances of hybrid systems as a function of their configuration and components characteristics as well as of the renewable energy sources availability is presented. Such a package has been developed as part of the Hybrid Simulator Facility built under an EC-JOULE II project. Tests performed on the facility are presented to show the capability of these kind of systems to cover the needs of small communities. Also giving a very good quality of the electric service. Furthermore it is demonstrated how to reduce the cost of the kWh with respect to the one produced by diesel stations for electric energy production in isolated communities. Finally, a simulation relative to the Greek island of Angathonisi is reported and the results show the fuel savings obtained with the integration of a 20 kW wind turbine and 10 kWp PV field",1995,0, 723,Probing and fault injection of protocol implementations,"Ensuring that a distributed system with strict dependability constraints meets its prescribed specification is a growing challenge that confronts software developers and system engineers. This paper presents a technique for probing and fault injection of fault-tolerant distributed protocols. The proposed technique, called script-driven probing and fault injection, can be used for studying the behavior of distributed systems and for detecting design and implementation errors of fault-tolerant protocols.
The focus of this work is on fault injection techniques that can be used to demonstrate three aspects of a target protocol: i) detection of design or implementation errors, ii) identification of violations of protocol specifications, and iii) insight into design decisions made by the implementers. The emphasis of our approach is on experimental techniques intended to identify specific “problems” in a protocol or its implementation rather than the evaluation of system dependability through statistical metrics such as fault coverage. To demonstrate the capabilities of this technique, the paper describes a probing and fault injection tool, called the PFI tool (Probe/Fault Injection Tool), and a summary of several extensive experiments that studied the behavior of two protocols: the transmission control protocol (TCP) and a group membership protocol (GMP)",1995,0, 724,Topology detection for adaptive protection of distribution networks,A general purpose network topology detection technique suitable for use in adaptive relaying applications is presented in this paper. Three test systems were used to check the performance of the proposed technique. Results obtained from the tests are included. The proposed technique was implemented in the laboratory as a part of the implementation of the adaptive protection scheme. The execution times of the topology detection software were monitored and were found to be acceptable,1995,0, 725,Performance prediction of WLAN speech and image transmission,"The effects of multipath, noise, and cochannel interference on performance are shown for two wireless local area networks (WLAN). Measured impulse responses from an office environment provided the multipath distortion. The first WLAN system uses frequency hopped (FH) Gaussian frequency shift keying (GFSK) and the second uses direct sequence (DS) differential binary phase shift keying (DBPSK). Both modulations are part of the proposed WLAN IEEE 802.11 standard. Vocoded speech and image quality were predicted for wireless systems through software simulation by relating quality classes to the following system and channel operational parameters: bit error rate (BER), signal-to-noise ratio (SNR), and carrier-to-interference ratio (CIR). Experimental results are summarized by overlaying image and vocoded speech quality classes on a BER, SNR, and CIR graph",1995,0, 726,Dealing with Non-Functional Requirements: Three Experimental Studies of a Process-Oriented Approach,"Quality characteristics are vital for the success of software systems. To remedy the problems inherent in ad hoc development, a framework has been developed to deal with non-functional requirements (quality requirements or NFRs). Taking the premise that the quality of a product depends on the quality of the process that leads from high-level NFRs to the product, the framework's objectives are to represent NFR-specific requirements, consider design tradeoffs, relate design decisions to NFRs, justify the decisions, and assist defect detection. The purpose of this paper is to give an initial evaluation of the extent to which the framework's objectives are met. Three small portions of information systems were studied by the authors using the framework. The framework and empirical studies are evaluated herein, both from the viewpoint of domain experts who have reviewed the framework and studies, and ourselves as framework developers and users.
The systems studied have a variety of characteristics, reflecting a variety of real application domains, and the studies deal with three important classes of NFRs for systems, namely, accuracy, security, and performance. The studies provide preliminary support for the usefulness of certain aspects of the framework, while raising some open issues.",1995,0, 727,A pricing policy for scalable VOD applications,"Many video applications are scalable due to human tolerance to the degradation in picture quality, frame loss and end-to-end delay. Scalability allows the network to utilize its resources efficiently for supporting additional connections, thereby increasing revenue and number of supported customers. This can be accomplished by a dynamic admission control scheme which scales down existing connections to support new requests. However, users will not be willing to tolerate quality degradation unless it is coupled with monetary or availability incentives. We propose a pricing policy and a corresponding admission control scheme for scalable VOD applications. The pricing policy is two-tiered, based on a connection setup component and a scalable component. Connections which are more scalable are charged less but are more liable to be degraded. The proposed policy trades off performance degradation with monetary incentives to improve user benefit and network revenue, and to decrease the blocking probability of connection requests. We demonstrate by means of simulation that this policy encourages users to specify the scalability of an application to the network",1995,0, 728,Parallel test generation with low communication overhead,"In this paper we present a method of parallelizing test generation for combinational logic using boolean satisfiability. We propose a dynamic search-space allocation strategy to split work between the available processors. This strategy is easy to implement with a greedy heuristic and is economical in its demand for inter-processor communication. We derive an analytical model to predict the performance of the parallel versus sequential implementations. The effectiveness of our method and analysis is demonstrated by an implementation on a Sequent (shared memory) multiprocessor. The experimental data shows significant performance improvement in parallel implementation, validates our analytical model, and allows predictions of performance for a range of time-out limits and degrees of parallelism",1995,0, 729,Software-quality improvement using reliability-growth models,"In the traditional software development model, the system test phase is the last stage where the reliability, capability, performance and other important dimensions of the quality of a system can be evaluated. During this stage, a point is eventually reached where a decision must be made to either continue or stop testing. This is a crucial task that requires an objective assessment of the benefits and costs associated with each alternative. 
This paper outlines a methodology that improves the effectiveness of management decisions by providing estimates of the total number of errors to be found through testing, the number of remaining errors, the additional testing time needed to achieve reliability goals and the impact of these parameters on product quality, project cost and project duration",1995,0, 730,Built-in-test adequacy-evaluation methodology for an air-vehicle system,"Rapid improvement in built-in-test (BIT) system capability is probably one of the most important activities in an air-vehicle system development test and evaluation program. An effective BIT adequacy evaluation methodology can accomplish this. The methodology must provide the development tester BIT parameters that serve as indicators of air-vehicle BIT system performance. Deficiencies that are pinpointed can then be corrected to improve the overall BIT system's performance. A BIT adequacy evaluation methodology developed and implemented during a recently conducted Air Force development and test program did just that. It provided real-time technical information at a particular period that indicated the status of the BIT system in relation to specified requirements. It also served as a springboard to evaluate the BIT system, and to document the results, which provided the manufacturer with valuable data on the BIT system's performance. Finally, it gave the manufacturer the opportunity to implement corrective actions within the allocated schedule, and develop the air-vehicle BIT system concurrently",1995,0, 731,A verification tool to measure software in critical systems,"Previously, software metrics have been established to evaluate the software development process throughout the software life cycle, and have been effective in helping to determine how a software design is progressing. These metrics are used to uncover favorable and unfavorable design trends and identify potential problems and deficiencies early in the development process to reduce costly redesign or the delivery of immature error prone software. One area where design metrics plays an important role is in the identification of misunderstandings between the software engineer and the system or user requirements due to incorrect or ambiguous statements of requirements. However, the metrics developed to date do not consider the additional interface to the safety engineer when developing critical systems. Because a software error in a computer controlled critical system can potentially result in death, injury, loss of equipment or property, or environmental harm, a safety metrics set was developed to ensure that the safety requirements are well understood and correctly implemented by the software engineer. This paper presents a safety metrics set that can be used to evaluate the maturity of hazard analysis processes and its interaction with the software development process",1995,0, 732,Analyzing empirical data from a reverse engineering project,"This study reports the analysis of data collected during the execution of a reverse engineering process in a real environment. Essentially, the analysis assessed productivity aspects. 
The experience showed the need for automatic tools which can be adapted to the context the process is performed in and the possibility of anchoring the effort required for the reverse engineering to the desired product quality",1995,0, 733,An operational expert system to analyze and locate faults occurring on MV networks,"This paper describes an operational expert system analyzing faults occurring on medium voltage networks. After a brief description of the auscultation method, that led to the development of the software, the LAURE functions and architecture are explained. The analysis and calculation module is detailed, as well as the configuration and the request parts. The paper also displays the experimentation results and the organization held to operate the system",1995,0, 734,"Design, implementation and performance evaluation of a new digital distance relaying algorithm","This paper presents the design, simulation, implementation and performance evaluation of a computationally efficient and accurate digital distance relaying algorithm. Published historical data were used in the first phase for validation purposes. Sample results illustrating highly accurate fault impedance estimates for various conditions are reported. The second phase uses voltage and current signals generated by the Alternative Transients Program (ATP) and a sample power system for various first-zone, second-zone and third-zone faults. Results of these studies, confirming the stability and computational efficiency of the algorithm, are presented and discussed. In the third phase, a prototype of the relay was developed and tested using real-time fault data generated from physical models of the transmission lines. Oscillographs for these conditions were recorded. Results of these tests indicating high speed relay operation are also discussed. The performance evaluation studies reported in this paper conclusively demonstrate that the new algorithm provides fast and accurate fault impedance estimates for the three-zone protection of transmission lines",1995,0,785 735,Experimental evaluation of the impact of processor faults on parallel applications,"This paper addresses the problem of processor faults in distributed memory parallel systems. It shows that transient faults injected at the processor pins of one node of a commercial parallel computer, without any particular fault-tolerant techniques, can cause erroneous application results for up to 43% of the injected faults (depending on the application). In addition to these very subtle faults, up to 19% of the injected faults (almost independent on the application) caused the system to hang up. These results show that fault-tolerant techniques are absolutely required in parallel systems, not only to ensure the completion of long-run applications but, and more important, to achieve confidence in the application results. The benefits of including some fairly simple behaviour based error detection mechanisms in the system were evaluated together with Algorithm Based Fault Tolerance (ABFT) techniques. The inclusion of such Mechanisms in parallel systems seems to be very important for detecting most of those subtle errors without greatly affecting the performance and the cost of these systems",1995,0, 736,"QUEST-a tool for automated performance, dependability, and cost tradeoff support","The Quantitative Expert System Tool (QUEST) provides automated support for quantitative evaluation and tradeoff analysis of complex computer system designs. 
Three basic components are provided. The design capture component is based on an information model that incorporates performance, dependability, and cost considerations within a single, integrated representation. The quantitative modeling and analysis component provides a suite of mathematical modeling capabilities that calculate a range of performance, dependability, and cost metrics relative to candidate designs. The knowledge-based tradeoff support component interprets design information and modeling results to assess system compliance with requirements, identify design problems contributing to non-compliance, and recommend design alternatives that will bring designs into compliance with all system requirements.",1995,0, 737,Integration testing based on software couplings,"Integration testing is an important part of the testing process, but few integration testing techniques have been systematically studied or defined. This paper presents an integration testing technique based on couplings between software components. The coupling-based testing technique is described, and 12 coverage criteria are defined. The coupling-based technique is also compared with the category-partition method on a case study, which found that the coupling-based technique detected more faults with fewer test cases than category-partition. This modest result indicates that the coupling-based testing approach can benefit practitioners who are performing integration testing on software. While it is our intention to develop algorithms to fully automate this technique, it is relatively easy to apply by hand",1995,0, 738,Fault tolerant behavior of redundant flight controls as demonstrated with the sure and assist modeling programs,"A comparison and evaluation between a triple redundant and quadruple redundant flight control system (FCS) is performed through the use of NASA's ASSIST and SURE software programs. To enhance the productivity of these programs, the development of a user friendly, front-end, menu driven interface routine was initiated. The front-end tool provides a user environment based on a system block diagram concept and automatically converts the system inputs into ASSIST code. Two test case models were used to compute the Probability of Loss of Control (PLOC), the Probability Of Mission Abort (PMA), and the Probability of Hazard Condition (PHC) values for the two different architectures. The solutions obtained by these tools were compared against existing solutions obtained from approximate equations calculations for a certain set of system requirements. The data produced by these software tools displayed more accuracy than the conventional approximation approach; however, the overall system predictions remained consistent for both approaches. The quadruple architecture design surpassed the triple architecture design for the given system parameters by a significant safety factor margin. The software tools approach also demonstrated how the coverage percentage values, which represent the voter capabilities in recognizing and handling failures, significantly affected the system performance.
The ability of the software tools to provide user friendly interactions that produce accurate results in a quick and easy manner makes this method of predicting fault-tolerant reliabilities for flight control systems the preferred choice of analysis",1995,0, 739,Evaluation of a multiple-model failure detection system for the F-16 in a full-scale nonlinear simulation,"A multiple model adaptive estimation (MMAE) algorithm is implemented with the fully nonlinear six-degree-of-motion simulation rapid-prototyping facility (SRF) VISTA F-16 software simulation tool. The algorithm is composed of a bank of Kalman filters modeled to match particular hypotheses of the real world. Each presumes a single failure in one of the flight-critical actuators, or sensors, and one presumes no failure. The algorithm is demonstrated to be capable of identifying flight-critical aircraft actuator and sensor failures at a low dynamic pressure (20,000 ft, .4 Mach). Research considers single hardover failures. Tuning methods for accommodating model mismatch, including addition of discrete dynamics pseudonoise and measurement pseudonoise, are discussed and demonstrated. Robustness to sensor failures provided by MMAE-based control is also demonstrated",1995,0, 740,Understanding the domain of the Army's CTS,"This paper describes the development of the elements of the domain of the Army's Contact Test Set (CTS). This work is motivated by the anticipated proliferation of applications of the CTS, the testing classifications, and the roles of Test Measurement and Diagnostic Equipment (TMDE) and Software Engineering Directorate (SED). This work includes both the top-down development of the domain, following the Domain-Specific Software Architecture (DSSA) Pedagogical Example and the bottom-up analysis of existing Contact Test Set software. Several tools and methods are being evaluated, and currently two applications are currently being studied with others being added. A pilot application is to be implemented.",1995,0, 741,Recasting the avionic support paradigm,"Cost and performance problems associated with the current avionic support paradigm, which had its origins in the 1960s and is based largely on TPS/ATS technology, would suggest that a fundamental change in paradigm should be considered. A new paradigm, that more closely integrates the mission related and support related aspects of both acquisition and development, could resolve many chronic deficiencies while being more compatible with evolving prime system technology.",1995,0, 742,An intelligent approach to sensor fusion-based diagnostics,"This paper describes the integration of diverse sensor technology capable of analyzing units under test (UUT) from different perspectives, with advanced analysis techniques that provide new insight into static, dynamic, and historical UUT performance. Such methods can achieve greater accuracy in failure diagnosis and fault prediction; reduction in cannot duplicate (CND), retest-OK (RTOK) rates, and ambiguity group size; and improved confidence in performance testing that results in the determination of UUT ready for issue status.",1995,0, 743,Using neural networks for functional testing,"This paper discusses using Neural Networks for diagnosing circuit faults. As a circuit is tested, the output signals from a Unit Under Test can vary as different functions are invoked by the test. When plotted against time, these signals create a characteristic trace for the test performed.
Sensors in the ATS can be used to monitor the output signals during test execution. Using such an approach, defective components can be classified using a Neural Network according to the pattern of variation from that exhibited by a known good card. This provides a means to develop testing strategies for circuits based upon observed performance rather than domain expertise. Such capability is particularly important with systems whose performance, especially under faulty conditions, is not well documented or where suitable domain knowledge and experience does not exist. Thus, neural network solutions may in some application areas exhibit better performance than either conventional algorithms or knowledge-based systems. They may also be retrained periodically as a background function, resulting in the network gaining accuracy over time.",1995,0, 744,Applying TQ principles to the requirements phase of system development,"This paper examines techniques that have evolved for specifying user requirements definition during systems development. It concentrates on potential impacts of applying total quality management (TQM) principles to the user requirements phase of the system development life cycle. TQM methodologies have been successful in many arenas. There are high expectations that TQM can also improve the quality of the most critical aspect of systems development, that is, defining user requirements",1995,0, 745,Parallel software engineering - Goals 2000,"The challenge of parallel software engineering is to provide predictable and effective products and processes for a new breed of application. To do this, parallel computing must apply to all concurrent application types. Assured delivery and predictable time response are the metrics to determine success. In addition, high integrity factors (availability, reliability, security and graceful reaction to changes) are necessary. Infrastructure and life cycle support issues are important to consider when creating large scale information and command and control systems. The approach is to define an encompassing engineering process that accounts for quantitative and predictive measures of time response, integrity and application complexity",1995,0, 746,Design and analysis of a multiprocessor system with extended fault tolerance,"Hardware approaches to fault-tolerance in designing a scalable multiprocessor system are discussed. Each node is designed to support multi-level fault-tolerance enabling a user to choose the level of fault-tolerance with a possible resource or performance penalty. Various tree-type interconnection networks using switches are compared in terms of reliability, latency, and implementation complexity. A practical duplicate interconnection network with the increased reliability is proposed in consideration of implementation issues under the physical constraint",1995,0, 747,A leader election algorithm in a distributed computing system,A leader election algorithm based on the performance and the operation rate of nodes and links is proposed. The performance of the existing leader election algorithms should be improved because a distributed system pauses until a node becomes a unique leader among all nodes in the system. 
The pre-election algorithm that elects a provisional leader while the leader is executing is also introduced,1995,0, 748,Automated test generation for improved protocol testing,"To enhance the overall quality of its network protocol testing process, Bellcore has developed TACIT, a toolkit for automated conformance and interoperability testing based on a formal and automated approach to test script generation. We first give a technical description of TACIT in which the different steps of the automated test script generation method are outlined, such as protocol specification and compilation, generic test script generation and executable test script encoding. We then relate how TACIT was experimentally used for testing of ISDN primary rate interface protocol. The results of this experiment are finally assessed and compared to the current script generation approach used at Bellcore",1995,0, 749,On the organization level structure of the hierarchical intelligent machine,A novel structure of the organization level of the hierarchical intelligent machine is proposed. The proposed model has the ability to search complete activities by learning without referring to the knowledge base of the incompatible event pairs. Also it has fault tolerant capability by using directly a performance measure from low levels as a probability. Application of the novel structure to a simple mobile robot organization problem is considered to demonstrate its effectiveness by computer simulations,1995,0, 750,"Software quality measurement and modeling, maturity, control and improvement","The goal of the paper is to provide a measurement structure needed for the improvement of software quality by uniting the Rome Laboratory Software Quality Framework, a reconstructed concept of measurement based maturity, statistical analysis, data quality and requisite information structures. The structure involves both standards of practice and standards of performance relating to product and process, and the need for extended forms of analysis for them to be developed. Such a structure is essential if software quality is to be assessed and improved in a timely manner",1995,0, 751,Using fault injection to assess software engineering standards,"Standards for quality software are increasingly important, especially for critical systems. Development standards and practices must be subjected to quantitative analyses; it is no longer adequate to encourage practices because they “make sense” or “seem reasonable.” Process improvement must be demonstrated by a history of improved products. Fault-injection methods can be used to assess the quality of software itself and to demonstrate the effectiveness of software processes. Fault-injection techniques can help developers move beyond the practical limitations of testing. Fault-injection techniques focus on software behavior, not structure; process-oriented techniques cannot measure behavior as precisely. Fault-injection methods are dynamic, empirical, and tractable; as such, they belie the notion that measuring the reliability of critical software is futile. 
Before focusing too narrowly on the assessment of software development processes, we should further explore the measurement of software behaviors",1995,0, 752,Trillium: a model for the assessment of telecom software system development and maintenance capability,"Since 1982 Bell Canada has been developing a model to assess the software development process of existing and prospective suppliers as a means to minimize the risks involved and ensure both the performance and timely delivery of software systems purchased. This paper presents the revised Trillium model (version 3.0). The basis of the process assessment models for software relies on benchmarking, e.g. comparing your practices with the best and successful organizations. It is also a basic tool that you will find in the TQM literature. For software assessment we have used initially two levels of benchmarks to develop the model: professional, national and international standards; and comparisons with other organizations in the same market segment. The software assessment model should therefore map to existing engineering standards as well as quality standards. It should also provide an output that can be used easily to benchmark against “best-in-class” organizations",1995,0, 753,Detecting program modules with low testability,"We model the relationship between static software product measures and a dynamic quality measure, testability. To our knowledge, this is the first time a dynamic quality measure has been modeled using static software product measures. We first give an overview of testability analysis and discriminant modeling. Using static software product measures collected from a real time avionics software system, we develop two discriminant models and classify the component program modules as having low or high testability. The independent variables are principal components derived from the observed software product measures. One model is used to evaluate the quality of fit and one is used to assess classification performance. We show that for this study, the quality of fit and classification performance of the discriminant modeling methodology are excellent and yield a potentially useful insight into the relationship between static software measures and testability",1995,0, 754,ORCA's oceanographic sensor suite,"The Mapping, Charting and Geodesy Branch of the Naval Research Laboratory (NRL) at Stennis Space Center, MS, is conducting a multi-year program for the development of unmanned, untethered sensor systems for the collection of tactical oceanographic data in littoral regions. This paper reviews the sensor systems, program progress to date and the future plans for a comprehensive oceanographic survey system. The prototype platform currently in use for this project is the ORCA semi-submersible. The ORCA is an air-breathing vessel which travels just below the water surface. The vessel utilizes a direct radio link for real-time data and control communications, as well as a DGPS system for precise platform positioning. The primary sensor installed on ORCA is the Simrad EM950 system which collects bathymetry and collocated acoustic imagery in water up to 300 meters in depth. With realtime data telemetry to the ORCA host vessel and the NRL developed HMPS bathymetry post-processing software, the system is capable of same-day chart production. In contrast to a full size survey vessel, ORCA is able to collect bathymetric data of the same quantity and quality, but will have one fortieth the life-cycle costs. 
Other sensors being integrated into ORCA include an acoustic sediment classification system, an acoustic Doppler current profiler, and obstacle avoidance systems",1995,0, 755,An expert fault manager using an object meta-model,"This paper describes the design and implementation of an expert fault management (EFM) system, based on an object-oriented meta-model, which can isolate causes of performance problems in a distributed environment. Error diagnosis can integrate application, system, and network related causes and identify the root cause, as well as affected applications. Diagnosis can be proactive or reactive. The implementation is based on the GEN-X expert system, and intelligent agents written in PERL. It is self-configuring, and has demonstrated its ability to detect problems earlier and more accurately than humans",1995,0, 756,Production IDDQ testing with passive current compensation,"In an ideal world, all test vectors selected as IDDQ test vectors would have very low measurement current. In many circumstances, however, a complete set of tests cannot be defined when relying on zero-current considerations. IDDQ test vectors selected from existing test sets must be capable of compensating for passive currents. The identification of valid test vectors, and the magnitude of the passive current, can be predicted in the ASIC environment via macrocell-specific information. This paper presents one methodology to support this test generation using digital simulation, including the selection of multiple IDDQ test vectors",1995,0, 757,Yield learning via functional test data,This paper presents a methodology to estimate the defect Pareto in an IC process through the use of production functional test data. This Pareto can then be used for yield improvement activities. We demonstrate the concept on several benchmark circuits. We show how limited IDDQ current testing can significantly improve the Pareto accuracy,1995,0, 758,From hardware to software testability,"This paper presents the application of some hardware testability concepts to data-flow software. Testability is concerned with three difficulties: generating test sets, interpreting test results and diagnosing faults. This threefold aspect of testability is discussed and estimates are proposed",1995,0, 759,Deep submicron: is test up to the challenge?,"ICs fabricated in increasingly deep submicron technologies are fundamentally shifting the problem areas on which to focus when testing these devices. Several major paradigm shifts are required in order for current test technologies to be extended to meet the challenges. In the past, chip sizes and operational speeds were such that numerous assumptions could be employed throughout the manufacturing test process which greatly eased pattern generation and application. Among those assumptions were: high stuck-at fault coverage is sufficient to guarantee high quality testing (other forms of testing can be neglected); the primary problem to be solved by test/DFT is defect detection (as opposed to defect isolation); parasitic effects can be ignored; test is a “stand-alone process”. Unfortunately, it is being discovered that deep submicron technologies invalidate many of these assumptions. It is suggested that increased emphasis should be placed on design and test generation techniques for delay defects. The industry should formalize an acceptable metric to measure the coverage of such defects. 
When and where appropriate, test generation software should comprehend parasitic effects and thereby create tests which are more robust with respect to test application. A tester model should be put into place to more easily debug test patterns without the necessity of actual silicon. Finally, test generation cannot be performed in a vacuum. There is a great deal of commonality between the design requirements for the use of static timing analysis and those of testability. There also must be a merging of functionalities, such as between critical path identification and path-based test generation, to ease the use of these technologies",1995,0, 760,Disk space guarantees as a distributed resource management problem: A case study,"In the single system UNIX, successful completion of a write system call implies a guarantee of adequate disk space for any new pages created by the system call. To support such a guarantee in a distributed file system designers need to solve the problems of accurately estimating the space needed, communication overhead, and fault tolerance. In the Calypso file system, which is a cluster-optimized, distributed UNIX file system, we solve these problems using an advance-reservation scheme. Measurements show that the overhead of this scheme for typical UNIX usage patterns is 1% to 3%",1995,0, 761,PACS in the operating room: experience at the Baltimore VA Medical Center,The objective of this study was to evaluate the performance of a picture archiving and communication system (PACS) in the operating room (OR) environment in a filmless hospital. Surgeons and OR staff were interviewed to assess the degree to which the PACS in the OR met their requirements and expectations. PACS was favored over film in the OR for the majority of those surveyed. Image quality was perceived as equivalent to or better than film. Image availability was cited as the major advantage over film. Areas for improvement in workstation design and PACS training were described and solutions offered. A filmless OR was preferable to a film environment for surgeons and staff. Attention to the specific requirements of different hospital environments such as the OR during PACS design and implementation may increase the utility of PACS for non-radiologist users,1995,0, 762,STAREX SELF-REPAIR ROUTINES: SOFTWARE RECOVERY IN THE JPL-STAR COMPUTER,"
",1995,0, 763,ON THE DIAGNOSABILITY OF DIGITAL SYSTEMS,"
",1995,0, 764,An Adaptive Distributed System-Level Diagnosis Algorithm and Its Implementation,"
",1995,0, 765,Image processing for CT-assisted reverse engineering and part characterization,"Computed tomography (CT) systems have the ability to rapidly and nondestructively produce images of both exterior and interior surfaces and regions. Because CT images are densitometrically accurate, complete morphological and part characterization information can be obtained without need of physical sectioning. CT data can be processed to create computer-aided design (CAD) representations of the part, to extract dimensional measurement, or to detect, size and locate defects. The key image processing steps which are required both for CT-assisted reverse engineering and CT-assisted part characterization are examined. Two desirable characteristics of the underlying image processing algorithms are accuracy of the produced results and the ability to handle large volumetric images of complex parts",1995,0, 766,On-line sensing and modeling of mechanical impedance in robotic food processing,"Measurement of mechanical impedance is useful in robotic food processing. In cutting meat, fish, and other inhomogeneous objects by means of a robotic cutter, for instance, it is useful to sense the transition regions between soft meat, hard meat, fat, shin, and bone. Product quality and yield can be improved through this, by accurately separating the desired product from the parts that should be discarded. Mechanical impedance at the cutter-object interface is known to provide the necessary information for this purpose. Unfortunately, the conventional methods of measuring mechanical impedance require sensing of both force and velocity simultaneously. Instrumenting a robotic end-effector for these measurements can make the cutter unit unsatisfactorily heavy, sluggish, and costly. An approach for on-line sensing of mechanical impedance, using the current of the cutter motor and the displacement (depth of cut) of the cutter, has been developed by us. A digital filter computes the mechanical impedance on this basis. For model-based estimation, performance evaluation of the on-line sensor, and also for model-based cutter control, it is useful to develop a model of the cutter-object interface. This paper illustrates these concepts using a laboratory system consisting of a robotic gripper and a flexible object. The prototype consists of an industrial-quality robotic gripper, a control computer, and associated hardware and software for data acquisition and processing. A model of the process interface between the end-effector and object has been developed. Some illustrative results from laboratory experiments are given",1995,0, 767,A neural fuzzy system to evaluate software development productivity,"Managing software development and maintenance projects requires early knowledge about quality and effort needed for achieving this quality level. Quality-based productivity management is introduced as one approach for achieving and using such process knowledge. Fuzzy rules are used as a basis for constructing quality models that can identify outlying software components that might cause potential quality problems. A special fuzzy neural network is introduced to obtain the fuzzy rules combining the metrics as premises and quality factors as conclusions. Using the law of DeMorgan, this net structure is able to learn premises just by changing the weights. Note that the authors change neither the number of neurons nor the number of connections. 
This new type of net allows for the extraction of knowledge acquired by training on the past process data directly in the form of fuzzy rules. Beyond that, it is possible to transfer all the known rules to the neural fuzzy system in advance. The suggested approach and its advantages towards common simulation and decision techniques are illustrated with experimental results. Its application area is in maintenance productivity. A module quality model-with respect to changes-provides both quality of fit (according to past data) and predictive accuracy (according to ongoing projects)",1995,0, 768,Image quality and readability,"Determining the readability of documents is an important task. Human readability pertains to the scenario when a document image is ultimately presented to a human to read. Machine readability pertains to the scenario when the document is subjected to an OCR process. In either case, poor image quality might render a document unreadable. A document image which is human readable is often not machine readable. It is often advisable to filter out documents of poor image quality before sending them to either machine or human for reading. This paper is about the design of such a filter. We describe various factors which affect document image quality and the accuracy of predicting the extent of human and machine readability possible using metrics based on document image quality",1995,0, 769,Architectural timing verification of CMOS RISC processors,"We consider the problem of verification and testing of architectural timing models (""timers"") coded to predict cycles-per-instruction (CPI) performance of advanced CMOS superscalar (RISC) processors. Such timers are used for pre-hardware performance analysis and prediction. As such, these software models play a vital role in processor performance tuning as well as application-based competitive analysis, years before actual product availability. One of the key problems facing a designer, modeler, or application analyst who uses such a tool is to understand how accurate the model is, in terms of the actual design. In contrast to functional simulators, there is no direct way of testing timers in the classical sense, since the “correct” execution time (in cycles) of a program on the machine model under test is not directly known or computable from equations, truth tables, or other formal specifications. Ultimate validation (or invalidation) of such models can be achieved under actual hardware availability, by direct comparisons against measured performance. However, deferring validation solely to that stage would do little to achieve the overall purpose of accurate pre-hardware analysis, tuning, and projection. We describe a multilevel validation method which has been used successfully to transform evolving timers into highly accurate pre-hardware models. In this paper, we focus primarily on the following aspects of the methodology: a) establishment of cause-effect relationships in terms of model defects and the associated fault signatures; b) derivation of application-based test loop kernels to verify steady-state (periodic) behavior of pipeline flow, against analytically predicted signatures; and c) derivation of synthetic test cases to verify the “core” parameters characterizing the pipeline-level machine organization as implemented in the timer model. 
The basic tenets of the theory and its application are described in the context of an example processor, comparable in complexity to an advanced member of the PowerPC™ 6XX processor family.",1995,0, 770,Logic vs. current testing: an evaluation using CrossCheck Technology's CM-iTest,"This paper examines the effectiveness of using the CrossCheck software tool CM-iTest to select high-fault coverage, robust test vectors for measuring IDDQ. In the experiments section of the paper data from 4 ASIC designs is presented which examines the defect detection of stuck-at fault (SAF) functional testing versus IDDQ current testing for varying degrees of SAF and transistor-level IDDQ fault coverages. The IDDQ current testing further compares the defect detection of using CrossCheck CM-iTest selected IDDQ vectors versus randomly selected and customer selected IDDQ vectors",1995,0, 771,Clustering OCR-ed texts for browsing document image database,"Document clustering is a powerful tool for browsing throughout a document database. Similar documents are gathered into several clusters and a representative document of each cluster is shown to users. To make users infer the content of the database from several representatives, the documents must be separated into tight clusters, in which documents are connected with high similarities. At the same time, clustering must be fast for user interaction. We propose an O(n²) time, O(n) space cluster extraction method. It is faster than the ordinal clustering methods, and its clusters compare favorably with those produced by Complete Link for tightness. When we deal with OCR-ed documents, term loss caused by recognition faults can change similarities between documents. We also examined the effect of recognition faults to the performance of document clustering",1995,0, 772,Prediction of OCR accuracy using simple image features,"A classifier for predicting the character accuracy achieved by any Optical Character Recognition (OCR) system on a given page is presented. This classifier is based on measuring the amount of white speckle, the amount of character fragments, and overall size information in the page. No output from the OCR system is used. The given page is classified as either “good” quality (i.e. high OCR accuracy expected) or “poor” (i.e. low OCR accuracy expected). Results of processing 639 pages show a recognition rate of approximately 85%. This performance compares favorably with the ideal-case performance of a prediction method based upon the number of reject-markers in OCR generated text",1995,0, 773,Ground-truthing and benchmarking document page segmentation,"We describe a new approach for evaluating page segmentation algorithms. Unlike techniques that rely on OCR output, our method is region-based: the segmentation output, described as a set of regions together with their types, output order etc., is matched against the pre-stored set of ground-truth regions. Misclassifications, splitting, and merging of regions are among the errors that are detected by the system. Each error is weighted individually for a particular application and a global estimate of segmentation quality is derived. The system can be customized to benchmark specific aspects of segmentation (e.g., headline detection) and according to the type of error correction that might follow (e.g., re-typing). 
Segmentation ground-truth files are quickly and easily generated and edited using GroundsKeeper, an X-Window based tool that allows one to view a document, manually draw regions (arbitrary polygons) on it, and specify information about each region (e.g., type, parent)",1995,0, 774,Providing Predictable Performance To A Network-Based Application Over ATM,"
",1995,0, 775,Fast bounds for ATM quality of service parameters,We demonstrate how fast estimates of quality of service parameters can be obtained for models of VBR traffic queuing in the buffer of an ATM multiplexer. The technique rests on conservative estimates of cell-loss ratios in the buffer. These are obtained through Martingale methods; the cell-loss ratio from a buffer of size x is bounded as Prob[cell-loss]⩽e-u-δx. An estimate of the mean cell delay can also be derived from this. We discuss briefly the algorithms used to compute the bounds. One attraction of the method is that the speed of computation is independent of the number of sources present in the traffic stream arriving at the multiplexer. This gives great advantage over estimates derived from a complete solution of the model queueing problem: these generally require the analysis of matrices whose dimension is proportional to the number of sources present. We give an application using a model of bursty traffic proposed by Finnish Telecom: a Markovian traffic model in which burst and silence are geometrically distributed subject to a fixed maximum burst length. A demonstration of a prototype software package based on the above methods is included,1995,0, 776,Evaluating the impact of object-oriented design on software quality,"Describes the results of a study where the impact of object-oriented (OO) design on software quality characteristics is experimentally evaluated. A suite of Metrics for OO Design (MOOD) was adopted to measure the use of OO design mechanisms. Data collected on the development of eight small-sized information management systems based on identical requirements were used to assess the referred impact. Data obtained in this experiment show how OO design mechanisms such as inheritance, polymorphism, information hiding and coupling, can influence quality characteristics like reliability or maintainability. Some predictive models based on OO design metrics are also presented",1996,1, 777,A validation of object-oriented design metrics as quality indicators,"This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. 
Also, on our data set, they are better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes",1996,1, 778,Predicting fault-prone software modules in telephone switches,"An empirical study was carried out at Ericsson Telecom AB to investigate the relationship between several design metrics and the number of function test failure reports associated with software modules. A tool, ERIMET, was developed to analyze the design documents automatically. Preliminary results from the study of 130 modules showed that: based on fault and design data one can satisfactorily build, before coding has started, a prediction model for identifying the most fault-prone modules. The data analyzed show that 20 percent of the most fault-prone modules account for 60 percent of all faults. The prediction model built in this paper would have identified 20 percent of the modules accounting for 47 percent of all faults. At least four design measures can alternatively be used as predictors with equivalent performance. The size (with respect to the number of lines of code) used in a previous prediction model was not significantly better than these four measures. The Alberg diagram introduced in this paper offers a way of assessing a predictor based on historical data, which is a valuable complement to linear regression when prediction data is ordinal. Applying the method described in this paper makes it possible to use measures at the design phase to predict the most fault-prone modules",1996,1, 779,Early quality prediction: a case study in telecommunications,"Predicting the quality of modules lets developers focus on potential problems and make improvements earlier in development, when it is more cost-effective. The authors applied discriminant analysis to identify fault-prone modules in a large telecommunications system prior to testing",1996,0, 780,Synchronization of multimedia data for a multimedia news-on-demand application,"We present a complete software control architecture for synchronizing multiple data streams generated from distributed media-storing database servers without the use of a global clock. Independent network connections are set up to remote workstations for multimedia presentations. Based on the document scenario and traffic predictions, stream delivery scheduling is performed in a centralized manner. Certain compensation mechanisms at the receiver are also necessary due to the presence of random network delays. A stream synchronization protocol (SSP) allows for synchronization recovery, ensuring a high quality multimedia display at the receiver. SSP uses synchronization quality of service parameters to guarantee the simultaneous delivery of the different types of data streams. In the proposed architecture, a priority-based synchronization control mechanism for MPEG-2 coded data streams is also provided. A performance modeling of the SSP is presented using the DSPN models. Relevant results such as the effect of the SSP, the number of synchronization errors, etc., are obtained",1995,0, 781,ScanSAR processing using standard high precision SAR algorithms,"Processing ScanSAR or burst-mode SAR data by standard high precision algorithms (e.g., range/Doppler, wavenumber domain, or chirp scaling) is shown to be an interesting alternative to the normally used SPECAN (or deramp) algorithm. Long burst trains with zeroes inserted into the interburst intervals can be processed coherently. 
This kind of processing preserves the phase information of the data, an important aspect for ScanSAR interferometry. Due to the interference of the burst images the impulse response shows a periodic modulation that can be eliminated by a subsequent low-pass filtering of the detected image. This strategy allows an easy and safe adaptation of existing SAR processors to ScanSAR data if throughput is not an issue. The images are automatically consistent with regular SAR mode images both with respect to geometry and radiometry. The amount and diversity of the software for a multimode SAR processor are reduced. The impulse response and transfer functions of a burst-mode end-to-end system are derived. Special attention is drawn to the achievable image quality, the radiometric accuracy, and the effective number of looks. The scalloping effect known from burst-mode systems can be controlled by the spectral weighting of the processor transfer function. It is shown that the fact that the burst cycle period is in general not an integer multiple of the sampling grid distance does not complicate the algorithm. An image example using X-SAR data for simulation of a burst system is presented",1996,0, 782,Observation and analysis of multiple-phase grounding faults caused by lightning,"This paper describes four-phase and five-phase grounding faults caused by lightning on a 154-kV overhead transmission line. The authors measured insulator voltages and currents flowing along the ground wire and the tower. In addition, they photographed lightning strokes to the transmission line and flashovers between the arcing horns and examined the fault phases at substations. The paper analyzes insulator voltage waveforms using the Electromagnetic Transients Program (EMTP) and estimates the fault processes",1996,0, 783,On the use of testability measures for dependability assessment,"Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure free test executions (a common need for software subject to “ultra high reliability” requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results: the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical “confidence level”. We also show that a high testability is not an unconditionally desirable property for a program. 
In particular, for programs complex enough that they are unlikely to be completely fault free, increasing testability may produce a program which will be less trustworthy, even after successful testing",1996,0, 784,On the expected number of failures detected by subdomain testing and random testing,"We investigate the efficacy of subdomain testing and random testing using the expected number of failures detected (the E-measure) as a measure of effectiveness. Simple as it is, the E-measure does provide a great deal of useful information about the fault detecting capability of testing strategies. With the E-measure, we obtain new characterizations of subdomain testing, including several new conditions that determine whether subdomain testing is more or less effective than random testing. Previously, the efficacy of subdomain testing strategies has been analyzed using the probability of detecting at least one failure (the P-measure) for the special case of disjoint subdomains only. On the contrary, our analysis makes use of the E-measure and considers also the general case in which subdomains may or may not overlap. Furthermore, we discover important relations between the two different measures. From these relations, we also derive corresponding characterizations of subdomain testing in terms of the P-measure",1996,0, 785,"Design, implementation and performance evaluation of a new digital distance relaying algorithm","This paper presents the design, simulation, implementation and performance evaluation of a computationally efficient and accurate digital distance relaying algorithm. Published historical data were used in the first phase for validation purpose. Sample results illustrating highly accurate fault impedance estimates for various conditions are reported. The second phase uses voltage and current signals generated by Alternative Transients Program (ATP) and a sample power system for various first-zone, second-zone and third-zone faults. Results of these studies confirming stability and computational efficiency of the algorithm are presented and discussed. In the third phase, a prototype of the relay was developed and tested using real-time fault data generated from physical models of the transmission lines. Oscillographs for these conditions were recorded. Results of these tests indicating high speed relay operation are also discussed. The performance evaluation studies reported in this paper conclusively demonstrate that the new algorithm provides fast and accurate fault impedance estimates for the three-zone protection of transmission lines",1996,0, 786,An input-domain based method to estimate software reliability,"A method is presented for software reliability estimation that is input-domain based. It was developed to overcome some of the difficulties in using existing reliability models in critical applications. The method classifies the faults that could be in a software program. Then it accounts for the distribution over the input-domain of input values which could activate each fault-type. The method assumes that these distributions change, by reducing their extent, with the number of test cases correctly executed. Using a simple example, the paper suggests a convenient fault classification and a choice of distributions for each fault-type. 
The introduction of the distributions permits better use of the information collected during the testing phase",1996,0, 787,A formal analysis of the subsume relation between software test adequacy criteria,"Software test adequacy criteria are rules to determine whether a software system has been adequately tested. A central question in the study of test adequacy criteria is how they relate to fault detecting ability. We identify two idealized software testing scenarios. In the first scenario, which we call prior testing scenario, software testers are provided with an adequacy criterion in addition to the software under test. The knowledge of the adequacy criterion is used to generate test cases. In the second scenario, which we call posterior testing scenario, software testers are not provided with the knowledge of adequacy criterion. The criterion is only used to decide when to stop the generation of test cases. In 1993, Frankl and Weyuker proved that the subsume relation between software test adequacy criteria does not guarantee better fault detecting ability in the prior testing scenario. We investigate the posterior testing scenario and prove that in this scenario the subsume relation does guarantee a better fault detecting ability. Two measures of fault detecting ability will be used, the probability of detecting faults and the expected number of exposed errors",1996,0, 788,Inheritance graph assessment using metrics,"Presents a new method integrated in Bell Canada's software acquisition process to assess software. This paper is focused on the assessment of the understandability of the inheritance graph of object-oriented software. The method is based on metrics and on the graphical illustration of the inheritance graph. A technique to decompose the inheritance graph of an object-oriented program into sub-graphs using metrics is described. Metrics are also used to identify complex sub-graphs. On the selected graphs, a technique to improve the understandability of the graphical illustration representing the inheritance graph is also described. This technique is based on extracting the main tree on the inheritance graph. A C++ case study containing 1080 classes is presented, on which this assessment method was applied",1996,0, 789,An experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study,"We hypothesize that inspection meetings are far less effective than many people believe and that meetingless inspections are equally effective. However, two of our previous industrial case studies contradict each other on this issue. Therefore, we are conducting a multi-trial, controlled experiment to assess the benefits of inspection meetings and to evaluate alternative procedures. The experiment manipulates four independent variables: the inspection method used (two methods involve meetings, one method does not); the requirements specification to be inspected (there are two); the inspection round (each team participates in two inspections); and the presentation order (either specification can be inspected first). For each experiment we measure 3 dependent variables: the individual fault detection rate; the team fault detection rate; and the percentage of faults originally discovered after the initial inspection phase (during which phase reviewers individually analyze the document). 
So far we have completed one run of the experiment with 21 graduate students in computer science at the University of Maryland as subjects, but we do not yet have enough data points to draw definite conclusions. Rather than presenting preliminary conclusions, we describe the experiment's design and the provocative hypotheses we are evaluating. We summarize our observations from the experiment's initial run, and discuss how we are using these observations to verify our data collection instruments and to refine future experimental runs",1996,0, 790,Parameter estimation of the manifold growth model using Z-graph,"The manifold growth model that unifies existing software reliability growth models can cover a wide range of accumulated fault data including various types of data which are difficult to treat with existing models. However, there are problems. For example, it sometimes takes a long time to solve transcendental equations to estimate parameters of the model, and the solution is difficult to obtain in some cases. This paper shows that the most important parameters of the differential equation that defines the manifold growth model can be easily estimated for the given data by using a “Z-graph” to express the Z-equation derived from the differential equation in a 2-dimensional graph. This paper also shows that the most appropriate type of software reliability growth model, or the most appropriate shape of curve, can be determined simply by observing a part of the data sequence from Z-graph, since Z-graph makes it possible that the variation of the value of parameters at any time can be recognized visually. Finally, the effectiveness of the Z-graph is shown by applying the Z-graph to both ideal and actual data",1996,0, 791,An empirical study of the correlation between code coverage and reliability estimation,"Existing time-domain models for software reliability often result in an overestimation of such reliability because they do not take the nature of testing techniques into account. Since every testing technique has a limit to its ability to reveal faults in a given system, as a technique approaches its saturation region fewer faults are discovered and reliability growth phenomena are predicted from the models. When the software is turned over to field operation, significant overestimates of reliability are observed. We present a technique to solve this problem by addressing both time and coverage measures for the prediction of software failures. Our technique uses coverage information collected during testing to extract only effective data from a given operational profile. Execution time between test cases which neither increase coverage nor cause a failure is reduced by a parameterized factor. Experiments using this technique were conducted on a program created in a simulation environment with simulated faults and on an industrial automatic flight control project which contained several natural faults. Results from both experiments indicate that overestimation of reliability is reduced significantly using our technique. 
This new approach not only helps reliability growth models make more accurate predictions, but also reveals the efficiency of a testing profile so that more effective testing techniques can be conducted",1996,0, 792,Reliability and risk analysis for software that must be safe,"Remaining failures, total failures, test time required to attain a given fraction of remaining failures, and time to next failure are useful reliability metrics for: providing confidence that the software has achieved reliability goals; rationalizing how long to test a piece of software; and analyzing the risk of not achieving remaining failure and time to next failure goals. Having predictions of the extent that the software is not fault free (remaining failures) and whether it is likely to survive a mission (time to next failure) provide criteria for assessing the risk of deploying the software. Furthermore, the fraction of remaining failures can be used as both a program quality goal in predicting test time requirements and, conversely as an indicator of program quality as a function of test time expended. We show how these software reliability predictions can increase confidence in the reliability of safety critical software such as the NASA Space Shuttle Primary Avionics Software",1996,0, 793,Assessing neural networks as guides for testing activities,"As test case automation increases, the volume of tests can become a problem. Further, it may not be immediately obvious whether the test generation tool generates effective test cases. Indeed, it might be useful to have a mechanism that is able to learn, based on past history, which test cases are likely to yield more failures versus those that are not likely to uncover any. We present experimental results on using a neural network for pruning a testcase set while preserving its effectiveness",1996,0, 794,Cyclomatic complexity and the year 2000,"The year 2000 problem is omnipresent, fast approaching, and will present us with something we're not used to: a deadline that can't slip. It will also confront us with two problems, one technical, the other managerial. My cyclomatic complexity measure, implemented using my company's tools, can address both of these concerns directly. The technical problem is that most of the programs using a date or time function have abbreviated the year field to two digits. Thus, as the rest of society progresses into the 21st century, our software will think it's the year 00. The managerial problem is that date references in software are everywhere; every line of code in every program in every system will have to be examined and made date compliant. In this article, I elaborate on an adaptation of the cyclomatic complexity measure to quantify and derive the specific tests for date conversion. I originated the use of cyclomatic complexity as a software metric. The specified data-complexity metric is calculated by first removing all control constructs that do not interact with the referenced data elements in the specified set, and then computing cyclomatic complexity. Specifying all global data elements gives an external coupling measure that determines encapsulation. Specifying all the date elements would quantify the effort for a year-2000 upgrade. This effort will vary depending on the quality of the code that must be changed",1996,0, 795,Analysis of a telecommuting experience: a case study,"The use of telecommuting by a large, Australian based, organization is discussed. Fifty employees and twelve managers were involved. 
These participants completed questionnaires evaluating the method of telecommuting from both telecommuters' and their managers' perspectives. The results indicated that since the introduction of telecommuting: (1) telecommuters' productivity had increased; (2) in general there was a positive attitude towards telecommuting and both the telecommuters and their managers were keen to recommend it to others; (3) an important link was revealed between the quality of supervision of telecommuters and the telecommuters' performance and attitudes. The strongest predictors of employee productivity were the quality of supervision that they had received and the number of days per week that they were telecommuting. The results suggested that there were certain problems relating to the effectiveness of communications, feelings of isolation and sense of loss of belonging to the organization. These problems were predominant in the work groups which included some telecommuters who were dissatisfied with the quality of supervision",1996,0, 796,Spatial metadata and GIS for decision support,"The proliferation of GIS technology has greatly increased the access to and the usage of spatial data. Making maps is relatively easy even for those who do not have much cartographic training. Nonetheless, the concerns for spatial data quality among GIS and spatial data users have just begun to sprout partly because information about spatial data quality is not readily available or useful. Metadata, which refer to data describing data, include the quality and accuracy information of the data. The Federal Geographic Data Committee has proposed content standards of metadata for spatial databases. However, the standards are not adequate to document the spatial variation in data quality in geographic data. The paper argues that information about the quality of spatial data over a geographical area, which can be regarded as spatial metadata, should be derived and reported to help users of spatial data to make intelligent spatial decisions or policy formulations. While cartographers focus on the representation of spatial data quality, and statisticians emphasize the quantitative measures of data quality, this paper proposes that GIS are logical tools to assess certain types of error in spatial databases because GIS are widely used to gather, manipulate, analyze, and display spatial data. A framework is proposed to derive several types of data quality information using GIS. These types of quality information include positional accuracy, completeness, attribute accuracy, and to some extent logical consistency. However, not every type of spatial metadata can be derived from GIS",1996,0, 797,Experiences of software quality management using metrics through the life-cycle,"Many software quality metrics to objectively grasp software products and process have been proposed in the past decades. In actual projects, quality metrics have been widely applied to manage software quality. However, there are still several problems with providing effective feedback to intermediate software products and the software development process. We have proposed a software quality management using quality metrics which are easily and automatically measured. The purpose of this proposal is to establish a method for building in software quality by regularly measuring and reviewing. The paper outlines a model for building in software quality using quality metrics, and describes examples of its application to actual projects and its results. 
As a result, it was found that quality metrics can be used to detect and remove problems with process and products in each phase. Regular technical reviews using quality metrics and information on the change of the regularly measured results were also found to have a positive influence on the structure and module size of programs. Further, in the test phase, it was found that with the proposed model, the progress of corrective action could be quickly and accurately grasped",1996,0, 798,Analytical and empirical evaluation of software reuse metrics,"How much can be saved by using existing software components when developing new software systems? With the increasing adoption of reuse methods and technologies, this question becomes critical. However, directly tracking the actual cost savings due to reuse is difficult. A worthy goal would be to develop a method of measuring the savings indirectly by analyzing the code for reuse of components. The focus of the paper is to evaluate how well several published software reuse metrics measure the “time, money and quality” benefits of software reuse. We conduct this evaluation both analytically and empirically. On the analytic front, we introduce some properties that should arguably hold of any measure of “time, money and quality” benefit due to reuse. We assess several existing software reuse metrics using these properties. Empirically, we constructed a toolset (using GEN++) to gather data on all published reuse metrics from C++ code; then, using some productivity and quality data from “nearly replicated” student projects at the University of Maryland, we evaluate the relationship between the known metrics and the process data. Our empirical study sheds some light on the applicability of our different analytic properties, and has raised some practical issues to be addressed as we undertake broader study of reuse metrics in industrial projects",1996,0, 799,System acquisition based on software product assessment,"The procurement of complex software product involves many risks. To properly assess and manage those risks, Bell Canada has developed methods and tools that combine process capability assessment with a static analysis based software product assessment. This paper describes the software product assessment process that is part of our risk management approach. The process and the tools used to conduct a product assessment are described. The assessment is in part based on static source code metrics and inspections. A summary of the lessons learned since the initial implementation in 1993 is provided. Over 20 products totalling more than 100 million lines of code have gone through this process",1996,0, 800,Understanding and predicting the process of software maintenance releases,"One of the major concerns of any maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality are vital to successful maintenance management. The objective of the paper is to present the results of a case study in which an incremental approach was used to better understand the effort distribution of releases and build a predictive effort model for software maintenance releases. The study was conducted in the Flight Dynamics Division (FDD) of NASA Goddard Space Flight Center (GSFC). 
The paper presents three main results: (1) a predictive effort model developed for the FDD's software maintenance release process, (2) measurement-based lessons learned about the maintenance process in the FDD, (3) a set of lessons learned about the establishment of a measurement-based software maintenance improvement program. In addition, this study provides insights and guidelines for obtaining similar results in other maintenance organizations",1996,0, 801,Reducing and estimating the cost of test coverage criteria,"Test coverage criteria define a set of entities of a program flowgraph and require that every entity is covered by some test. We first identify Ec, the set of entities to be covered according to a criterion c, for a family of widely used test coverage criteria. We then present a method to derive a minimum set of entities, called a spanning set, such that a set of test paths covering the entities in this set covers every entity in Ec. We provide a generalised algorithm, which is parametrized by the coverage criterion. We suggest several useful applications of spanning sets of entities to testing. In particular they help to reduce and to estimate the number of tests needed to satisfy test coverage criteria",1996,0, 802,A reliability model combining representative and directed testing,"Directed testing methods, such as functional or structural testing, have been criticized for a lack of quantifiable results. Representative testing permits reliability modeling, which provides the desired quantification. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. A model is presented which permits representative and directed testing to be used in conjunction. Representative testing can be used early, when the rate of fault revelation is high. Later results from directed testing can be used to update the reliability estimates conventionally associated with representative methods. The key to this combination is shifting the observed random variable from interfailure time to a post-mortem analysis of the debugged faults, using order statistics to combine the observed failure rates of faults no matter how those faults were detected",1996,0, 803,Modeling software testing processes,"The production of a high quality software product requires application of both defect prevention and defect detection techniques. A common defect detection strategy is to subject the product to several phases of testing such as unit, integration, and system. These testing phases consume significant project resources and cycle time. As software companies continue to search for ways for reducing cycle time and development costs while increasing quality, software testing processes emerge as a prime target for investigation. The paper proposes the utilization of system dynamics models for better understanding testing processes. Motivation for modeling testing processes is presented along with an executable model of the unit test phase. Some sample model runs are described to illustrate the usefulness of the model",1996,0, 804,Prediction-based routing for cell-switched networks,"ATM is the target switching and multiplexing technology for implementation of B-ISDN. However, existing ATM software systems are deficient in many respects, such as QoS-based routing. 
Noting that the route of a virtual circuit remains fixed throughout a session, it is clear that route selection will have long-term effects on network congestion, especially for long sessions. Hence, it is important that routing decisions be based not only on current network state information, but also on predicted future network state information. In this paper we investigate a new prediction-based routing methodology. An ATM LAN software simulator, SimATM, is developed and utilized in this research. SimATM is used to investigate the relative performances of various routing algorithms and link weight assignment strategies. The results clearly indicate that prediction-based routing significantly improves performance with respect to average cell delay, throughput, and jitter. Furthermore, this improvement is shown not to come at the expense of background traffic performance",1996,0, 805,Test and testability techniques for open defects in RAM address decoders,"It is a prevalent assumption that all RAM address decoder defects can be modelled as RAM array faults influencing one or more RAM cells. Therefore, they can be implicitly detected by testing the RAM matrix with the march tests. Recently, we came across some failures in embedded SRAMs which were not detected by the march tests. The analysis carried out demonstrated the presence of open defects in address decoders that cannot be modelled as conventional coupling faults and, therefore, are not detected by the march tests. In this article, we present the test and testability strategies for such hard-to-detect open defects",1996,0, 806,Hardware/software partitioning using integer programming,"One of the key problems in hardware/software codesign is hardware/software partitioning. This paper describes a new approach to hardware/software partitioning using integer programming (IP). The advantage of using IP is that optimal results are calculated with respect to the chosen objective function. The partitioning approach works fully automatically and supports multi-processor systems, interfacing and hardware sharing. In contrast to other approaches where special estimators are used, we use compilation and synthesis tools for cost estimation. The increased time for calculating the cost metrics is compensated by an improved quality of the estimations compared to the results of estimators. Therefore, fewer iteration steps of partitioning will be needed. The paper shows that using integer programming to solve the hardware/software partitioning problem is feasible and leads to promising results",1996,0, 807,A high performance image compression technique for multimedia applications,A novel image compression technique is presented for low cost multimedia applications. The technique is based on quadtree segmented two-dimensional predictive coding for exploiting correlation between adjacent image blocks and uniformity in variable block size image blocks. Low complexity visual pattern block truncation coding (VP-BTC) defined with a set of sixteen visual patterns is employed to code the high activity image blocks. Simulation results showed that the new technique achieved high performance with superior subjective quality at low bit rate,1996,0, 808,Model-integrated toolset for fault detection, isolation and recovery (FDIR),"Fault detection, isolation and recovery (FDIR) functions are essential components of complex engineering systems.
The design, validation, implementation, deployment and maintenance of FDIR systems are extremely information-intensive tasks requiring in-depth knowledge of the engineering system. The paper gives an overview of a model-integrated toolset supporting FDIR from design into post-deployment. The toolset is used for development of large, complex spacecraft systems during the engineering design phase",1996,0, 809,Categorization of programs using neural networks,"The paper describes some experiments based on the use of neural networks for assistance in the quality assessment of programs, especially in connection with the reengineering of legacy systems. We use Kohonen networks, or self-organizing maps, for the categorization of programs: programs with similar features are grouped together in a two-dimensional neighbourhood, whereas dissimilar programs are located far apart. Backpropagation networks are used for generalization purposes: based on a set of example programs whose relevant aspects have already been assessed, we would like to obtain an extrapolation of these assessments to new programs. The basis for these investigations is an intermediate representation of programs in the form of various dependency graphs, capturing the essentials of the programs. Previously, a set of metrics has been developed to perform an assessment of programs on the basis of this intermediate representation. It is not always clear, however, which parameters of the intermediate representation are relevant for a particular metric. The categorization and generalization capabilities of neural networks are employed to improve or verify the selection of parameters, and might even initiate the development of additional metrics",1996,0, 810,An empirical validation of four different measures to quantify user interface characteristics based on a general descriptive concept for interaction points,"The main problem of standards (e.g. ISO 9241) in the context of usability of software quality is that they cannot measure all relevant product features in a task independent way. We present a new approach to measure user interface quality in a quantitative way. First, we developed a concept to describe user interfaces on a granularity level that is detailed enough to preserve important interface characteristics, and is general enough to cover most of the known interface types. We distinguish between different types of “interaction points”. With these kinds of interaction points we can describe several types of interfaces (CUI: command, menu, form-fill-in; GUI: desktop, direct manipulation, multimedia, etc.). We carried out two different comparative usability studies to validate our quantitative measures. The results of one other published comparative usability study can be predicted. Results of six different interfaces are presented and discussed. One of the most important results is that the dialog flexibility must exceed a threshold of 15 (measured with two of our metrics) to significantly increase the usability",1996,0, 811,The use of meta-analysis in validating the DeLone and McLean information systems success model,"The measurement of systems success is important to any research on information systems. A research study typically uses several dependent measures as surrogates for systems success. These dependent measures are generally treated as outcome variables which, as a group, vary as a function of certain independent variables. Recently, a process perspective has been advocated to measure system success.
This process perspective recognizes that some of the dependent measures, along with independent variables, also have an impact on outcome variables. A comprehensive success model has been developed by DeLone and McLean (1992) to group success measures cited in the literature into three categories: quality, use, and impact. This model further predicts that quality affects use, which in turn affects impact. This paper explores the plausibility of using meta-analysis to validate this success model. Advantages of using meta-analysis over other research methodologies in validating process models are examined. Potential problems with meta-analysis in validating this particular success model are discussed. A research plan that utilizes past empirical studies to validate this success model is described",1996,0, 812,The Midcourse Space Experiment (MSX),"The MSX is designed and built around an infrastructure of eight scientific teams; it will be the first and only extended duration, multiwavelength (0.1 to 28 μm) phenomenology measurement program funded and managed by BMDO. During its 16 month cryogen lifetime and five year satellite lifetime, MSX will provide high quality target, Earth, Earthlimb, and celestial multiwavelength phenomenology data. This data is essential to fill critical gaps in phenomenology and discrimination data bases, furthering development of robust models of representative scenes, and assessing optical discrimination algorithms. The MSX organization is comprised of self-directed work teams in six functional areas. Experiments formulated by each of the eight scientific teams will be executed on the satellite in a 903 km near polar orbit (99.23° inclination), with an eccentricity of 0.001, argument of perigee of 0, and the right ascension of the ascending node is 250.0025. Two dedicated target missions are planned consisting of one Strategic Target System launch and two Low Cost Launch Vehicles launches. These target missions will deploy various targets, enabling the MSX principal investigator teams to study key issues such as metric discrimination, deployment phase tracking, cluster tracking, fragment bulk filtering, tumbling re-entry vehicle signatures, etc. A data management infrastructure to ensure that the data is processed, analyzed, and archived will be available at launch time. The raw data and its associated calibration files and software will be archived, providing the customer with a cataloged database. This paper describes the MSX program objectives, target missions, data management architecture, and organization",1996,0, 813,Safety evaluation using behavioral simulation models,"This paper describes a design environment called ADEPT (advanced design environment prototype tool) which enables designers to assess the dependability of systems early in the design process using behavioral simulation models. ADEPT is an interactive graphical design environment which allows design and analysis of systems throughout the entire design cycle. ADEPT supports functional verification, performance evaluation, and dependability analysis early in the design cycle from a single model in order to dramatically reduce design cycles and deliver products on schedule. In this paper, ADEPT is applied to the design of a distributed computer system used to control trains. Two distinct experiments were run to illustrate dependability evaluation using behavioral simulation models. 
The first experiment evaluates the effectiveness of using a simple (7,4) Hamming code for protecting information in a distributed system. The second experiment evaluates the effectiveness of a watchdog monitor whose role is to detect hardware and software errors in the distributed system. The experiments illustrate dependability analysis using behavioral simulation models. The first simulation demonstrates estimation of the error coverage of the (7,4) code and the mean time to hazardous event (MTTHE). The second experiment demonstrates functional verification and controllability of behavioral simulation experiments by testing the response of a watchdog monitor design to rare malicious events",1996,0, 814,An extension of goal-question-metric paradigm for software reliability,"The driving need in software reliability is to mature the “physics of failure” and design aspects related to software reliability. This type of focus would then enhance one's ability to effect reliable software in a predictable form. A major challenge is that software reliability, in essence, requires one to measure compliance to customer/user requirements. Customer/user requirements can range over a wide spectrum of software product attributes that relate directly or indirectly to software performance. Identifying and measuring these attributes in a structured way to minimize risk and allow pro-active preventive action during software development is no easy task. The goal-question-metric paradigm, discussed by Dr. Vic Basili (1992), is one popular and effective approach to measurement identification. However, in practice, additional challenges in using this approach have been encountered. Some of these challenges, though, seem to be alleviated with use of a reliability technique called success/fault tree analysis. Experience has shown that the goal-question-metric paradigm is conducive to the building of G-Q-M trees which may be analyzed using reliability success/fault tree logic",1996,0, 815,CATS-an automated user interface for software development and testing,"CATS has been developed as a user interface to allow users at client sites to access and use the ACT (analysis of complexity tool) of McCabe and Associates easily and automatically. It produces various detailed reports based on the ACT tool, and in addition it establishes and updates a complexity-tracking database at the user site. CATS has been implemented at various pilot sites. It has been used both intensively to increase test coverage for a given module and extensively to identify areas on which to focus test and assessment efforts. By using CATS, developers, testers, and managers can obtain key information on software structure, focus their efforts on those areas where defects are more likely to reside, more quickly understand and incorporate re-engineered or legacy code, and estimate the level of effort required for a project",1996,0, 816,Relating operational software reliability and workload: results from an experimental study,"This paper presents some results from an experimental and analytical study to investigate the failure behavior of operational software for several types of workload. The authors are primarily interested in the so-called highly reliable software where enough evidence exists to indicate a lack of known faults. For purposes of this study, a “gold” version of a well known program was used in the experimental study.
Judiciously selected errors were introduced, one at a time, and each mutated program was executed for three types of workload: constant; random; and systematically varying. Analytical analyses of the resulting failures were undertaken and are summarized here. The case when workload is random is of particular interest in this paper. An analytical model for the resulting failure phenomenon based on the Wald equation is found to give excellent results",1996,0, 817,FIRE: a fault-independent combinational redundancy identification algorithm,"FIRE is a novel Fault-Independent algorithm for combinational REdundancy identification. The algorithm is based on a simple concept that a fault which requires a conflict as a necessary condition for its detection is undetectable and hence redundant. FIRE does not use the backtracking-based exhaustive search performed by fault-oriented automatic test generation algorithms, and identifies redundant faults without any search. Our results on benchmark and real circuits indicate that we find a large number of redundancies (about 80% of the combinational redundancies in benchmark circuits), much faster than a test-generation-based approach for redundancy identification. However, FIRE is not guaranteed to identify all redundancies in a circuit.",1996,0, 818,Parameters for system effectiveness evaluation of distributed systems,"In a Distributed Computing System (DCS), the failure of one or more system components causes the degradation in its effectiveness to complete a given task as opposed to a complete network breakdown. This paper addresses the issue of degraded system effectiveness evaluation by introducing two static measures, namely, Distributed Program Performance Index (DPPI) and Distributed System Performance Index (DSPI). These metrics can be used to compare networks with different features for application execution. It can be used to determine if the network with high reliability and low capacity, or low reliability and high capacity is better for a given program execution. An algorithm is also developed for computing these indices, and it is shown to be not only efficient, but general enough to compute many other existing measures for systems such as computer networks, distributed systems, transaction-based systems, etc",1996,0, 819,Better software through operational dynamics,"Much of the existing theory about software focuses on its static behaviour, based on analysis of the source listing. Explorers of requirements, estimation, design, encapsulation, dataflow, decomposition, structure, and code complexity all study the static nature of software, concentrating on source code. I call this the study of software statics, an activity that has improved software quality and development, and one that we should continue to investigate. However, quality software remains difficult to produce because our understanding has an incomplete theoretical foundation. There is little theory that addresses software's dynamic behaviour in the field, in particular, how software performs under load. I use the term operational dynamics to differentiate this activity from software statics and from system dynamics, which involves process simulation",1996,0, 820,Exploratory analysis tools for tree-based models in software measurement and analysis,"The paper presents our experience with the adaptation and construction of exploratory analysis tools for tree-based models used in quality measurement and improvement.
We extended a commercial off-the-shelf analysis tool to support visualization, presentation, and various other exploration functions. The user-oriented, interactive exploration of analysis results is supported by a new tool we constructed for this specific purpose. These tools have been used successfully in supporting software measurement and quality improvement for several large commercial software products developed in the IBM Software Solutions Toronto Laboratory",1996,0, 821,SoftTrak: an industrial case study,"SoftTrak is a project performance management tool based on earned value tracking of a software project's progress. Performance management tools are needed to report problems back to management. SoftTrak looks to the functional work completed as compared to the work estimation and tracks projects by comparing the actual accomplishments against the planned activities. Projected completion date and effort based on the historical data reported for the project are an integral part of the outputs. SoftTrak was developed by Soft Quest Systems in conjunction with input received from leading industries in Israel. The paper describes the process by which a software development project tracking methodology, using SoftTrak, was defined and later evaluated by B.V.R. Technologies",1996,0, 822,Assessing the potentials of CASE-tools in software process improvement: a benchmarking study,"CASE tools have been thought of as one of the most important means for implementing the derived quality programs. Two basic questions should be answered to find the right CASE tool: what attributes the CASE tools should exhibit and how the existing tools can be ranked according to their capability to support the quality program goals. We propose our solution based on the concept of software benchmarking. First, we summarise the QR method, and then we report on how it was applied in CASE tools assessment. Next we derive appropriate benchmark sets for CASE tools on the basis of the CMM, the standard ISO 9000 and the MBNQA-criteria. We propose and implement a framework for CASE tools evaluation and selection",1996,0, 823,Detection of shorted-turns in the field winding of turbine-generator rotors using novelty detectors-development and field test,"A method for detecting shorted windings in operational turbine-generators is described. The method is based on the traveling wave method described by El-Sharkawi et al. (1971). The method is extended in this paper to operational rotors by the application of a neural network feature extraction and novelty detection. The results of successful laboratory experiments are also reported",1996,0, 824,Microcontroller-based performance enhancement of an optical fiber level transducer,"The paper proposes a digital microcontrolled transducer for liquid level measurement based on two optical fibers on which the cladding was removed in suitable zones at a known distance one from each other. A 1 m full scale, 6.25 mm resolution prototype was realized and tested. The adopted hardware and software strategies assure a low cost, high static and dynamic performance, good reliability, and the possibility of remote control. These features were confirmed by the experimental tests carried out on fuel liquids",1996,0, 825,Use of artificial neural networks in estimation of Hydrocyclone parameters with unusual input variables,"The accuracy of the estimation of the Hydrocyclone parameter, d50c, can substantially be improved by application of an Artificial Neural Network (ANN).
With an ANN, many non-conventional Hydrocyclone variables, such as water and solid split ratios, overflow and underflow densities, apex and spigot flowrates, can easily be incorporated into the prediction of d50c. Selection of training parameters is also shown to affect the accuracy",1996,0, 826,The use of genetic algorithms for advanced instrument fault detection and isolation schemes,"An advanced scheme for Instrument Fault Detection and Isolation is proposed. It is mainly based on Artificial Neural Networks (ANNs), organized in layers and handled by knowledge-based analytical redundancy relationships. ANN design and training is performed by genetic algorithms which allow ANN architecture and parameters to be easily optimized. The diagnostic performance of the proposed scheme is evaluated with reference to a measurement station for automatic testing of induction motors",1996,0, 827,Parametric optimization of measuring systems according to the joint error criterion,"The paper presents a method and some results of the modelling and design of measuring systems. The minimum of designed system errors is achieved using parametric optimization methods of the measuring system model. A “structural method” of measuring system modelling is used. The used equipment properties, as well as data processing algorithms, are taken into account",1996,0,1105 828,Optical 3D motion measurement,"This paper presents a CCD-camera based system for high-speed and accurate measurement of the three-dimensional movement of reflective targets. These targets are attached to the moving object under study. The system has been developed at TU Delft and consists of specialized hardware for real-time multi-camera image processing at 100 frames/s and software for data acquisition, 3D reconstruction and target tracking. An easy-to-use and flexible but accurate calibration method is employed to calibrate the camera setup. Applications of the system are found in a wide variety of areas: biomechanical motion analysis for research, rehabilitation, sports and ergonomics, motion capture for computer animation and virtual reality, and motion analysis of technical constructions like wind turbines. After a more detailed discussion of the image processing aspects of the system the paper will focus on image modeling and parameter estimation for the purpose of camera calibration and 3D reconstruction. A new calibration method that has recently been developed specifically for the measurement of wind turbine blade movements is presented. Test measurements show that with a proper calibration of the system a precision of 1:3000 relative to the field of view dimensions can be achieved. Future developments will further improve this result",1996,0, 829,Measuring vibration patterns for rotor diagnostics,"The vibration patterns of machinery and industrial plants are effective means for characterising their current dynamics and, thus, detecting symptoms and trends of incipient anomalies; or, assessing the accumulation of previous damages. The availability of `intelligent' instrumentation is a powerful option for the build-up of diagnostic frameworks, based on the measurement of vibration signatures and the acknowledgment of the threshold features for `pro-active' maintenance schedules.
This option is considered with attention paid to the metrological requests supporting the standardised restitution of the signatures, in connection with the opportunities offered by `intelligent' data acquisition, processing and restitution, namely availability of extended sets of procedural schemes leading to consistent metrological images, transparent conditioning contexts granting data restitution standardisation, and sophisticated programming aids for acknowledgment of system hypotheses",1996,0, 830,Real-time determination of power system frequency,"The main frequency is an important parameter of an electrical power system. The frequency can change over a small range due to generation-load mismatches. Some power system protection and control applications require accurate and fast estimates of the frequency. Most digital algorithms for measuring frequency have acceptable accuracy if voltage waveforms are not distorted. However, due to non-linear devices the voltage waveforms can include higher harmonics. The paper presents a new method based on digital filtering and Prony's model. Computer simulation results confirm that the method is more accurate than others",1996,0, 831,A low-cost water sensing method for adaptive control of furrow irrigation,A computer-based optimization technique for furrow irrigation depends upon a network of water sensors to detect the water advance in the initial phase of the irrigation. This paper describes a new type of low-cost water sensing network developed specifically for this technique. Each sensing network consists of up to fourteen simple water sensors baked using inexpensive twin-wire. The distance between the computer and the last sensor can be up to 1000 m. Each sensor uses the variation of capacitance to detect the presence of water and communicates its state to the base computer by means of variable-frequency current pulses,1996,0, 832,Time domain analysis and its practical application to the measurement of phase noise and jitter,"The precise determination of phase is necessary for a very large set of measurements. Traditionally, phase measurements have been made using analog phase detectors which suffer from limited accuracy and dynamic range. This paper describes how phase digitizing, which uses time domain techniques, removes these limitations. Phase digitizing is accomplished using a time interval analyzer, which measures the signal zero-crossing times. These zero-crossing times are processed to compute phase deviation, with the reference frequency specified as a numerical value or derived from the times themselves. Phase digitizing can be applied even in the presence of modulation, as the underlying clock can be reconstructed in software to fit the data. Measurements derived from this phase data such as phase noise, jitter analysis, Allan variance (AVAR), Maximum Time Interval Error (MTIE), and Time Deviation (TDEV) are applied to such applications as the characterization of oscillators, computer clocks, chirp radar, token ring networks, and tributaries in communication systems",1996,0, 833,Specializing object-oriented RPC for functionality and performance,"Remote procedure call (RPC) integrates distributed processing with conventional programming languages. However, traditional RPC lacks support for forms of communication such as datagrams, multicast, and streams that fall outside the strict request-response model.
Emerging applications such as Distributed Interactive Simulation (DIS) and Internet video require scalable, reliable, and efficient communication. Applications are often forced to meet these requirements by resorting to the error-prone ad-hoc message-based programming that characterized applications prior to the introduction of RPC. In this paper we describe an object-oriented RPC system that supports specialization for functionality and performance, allowing applications to modify and tune the RPC system to meet individual requirements. Our experiences with functional extensions to support reliable multicast and specializations to support streaming of performance-critical RPCs indicate that a wide range of communication semantics can be supported without resorting to ad-hoc messaging protocols",1996,0, 834,Dynamic resource migration for multiparty real-time communication,"With long-lived multi-party connections, resource allocation subsystems in distributed real-time systems or communication networks must be aware of dynamically changing network load in order to reduce call-blocking probabilities. We describe a distributed mechanism to dynamically reallocate (“migrate”) resources without adversely affecting the performance that established connections receive. In addition to allowing systems to dynamically adapt to load, this mechanism allows for distributed relaxation of resources (i.e. the adjustment of overallocation of resources due to conservative assumptions at connection establishment time) for multicast connections. We describe how dynamic resource migration is incorporated in the Tenet Scheme 2 protocols for multiparty real-time communication",1996,0, 835,Measuring and understanding the ageing of kraft insulating paper in power transformers,"In this paper we discuss the application of a number of techniques that are currently being developed for use in life assessment/condition monitoring of transformer insulation, and we illustrate possible applications with some recent data. These include: high performance liquid chromatography (HPLC) of the oil to measure the product concentrations; size exclusion chromatography (SEC) of the paper to measure changes in molecular weight distribution (related to the degree of polymerization, DP) and computer modeling of the degradation process to relate chemical degradation rates to changes in physical properties, such as tensile strength, which is of critical importance in assessing the probability of damage to the paper under fault conditions. We also revisit the use of DP measurement as a means of assessing insulation life and present results that illustrate problems that we have identified in obtaining consistent data. We describe a method, developed from the British and American standards, which enables us to obtain reproducible results with a known variance. The aims of this paper are to present a limited review of techniques that are relatively new to the transformer monitoring field, to provide some recent results from this laboratory to demonstrate their potential application, and to show the need for a range of new tools, if we are to improve our ability to assess the condition of transformers and the remanent life of their insulation.",1996,0, 836,Miniature microstrip stepped impedance resonator bandpass filters and diplexers for mobile communications,"Miniature microstrip stepped impedance resonator bandpass filters and diplexers for satellite mobile communications have been developed.
A very high dielectric constant substrate (εr=89 and h=2 mm) is used. Experimental results show that an unloaded half wave resonator quality factor as high as 400 at 1.5 GHz, with such substrate, may be possible. The merit of this circuit lies in the simplicity of design procedure, the possibility of developing this filter with quite a variety of high dielectric constant substrate materials and the simplicity of simulation with most commercial software packages. A four resonator bandpass filter with 35 MHz bandwidth at 1.55 GHz was designed and implemented with this substrate. Based on this filter, a diplexer which meets satellite mobile communications performance has been developed. Experimental results are in good agreement with theoretical predictions.",1996,0, 837,A distributable algorithm for optimizing mesh partitions,"Much effort has been directed towards developing mesh partitioning strategies to distribute a finite element mesh among the processors of a distributed memory parallel computer. The distribution of the mesh requires a partitioning strategy. This partitioning strategy should be designed such that the computational load is balanced and the communication is minimized. Unfortunately, many of the existing approaches for the mesh partitioning are sequential and are performed as a sequential pre-processing step on a serial machine. These approaches may not be appropriate if the mesh is very large, that is to say, this pre-processing on a serial machine may not be feasible due to memory or time constraints, or on a parallel machine if the mesh is already distributed. In this paper we propose an algorithm with a fundamentally distributed design for the optimization of mesh partitions. To assess the quality of the partitioning, the size of edge cuts is taken as the chosen metric",1996,0, 838,Industrial-strength management strategies,"As our industry undertakes ever larger development projects, the number of defects occurring in delivered software increases exponentially. Drawing on his experiences in the defense industry, the author offers nine best practices to improve the management of large software systems: (1) risk management; (2) agreement on interfaces; (3) formal inspections; (4) metrics-based scheduling and management; (5) binary quality gates at inch/pebble level; (6) program-wide visibility of progress vs. plan; (7) defect tracking against quality targets; (8) configuration management; and (9) people-aware management accountability",1996,0, 839,Why software reliability predictions fail,"Software reliability reflects a customer's view of the products we build and test, as it is usually measured in terms of failures experienced during regular system use. But our testing strategy is often based on early product measures, since we cannot measure failures until the software is placed in the field. The author shows us that such measurement is not effective at predicting the likely reliability of the delivered software.",1996,0, 840,Systems integration via software risk management,"This paper addresses an evolutionary process currently taking place in software engineering: the shift from hardware to software, where the role of software engineering is increasing and is becoming more central in systems integration. This shift also markedly affects the sources of risk that are introduced throughout the life cycle of a system's development-its requirements, specifications, architecture, process, testing, and end product.
Risk is commonly defined as a measure of the probability and severity of adverse effects. Software technical risk is defined as a measure of the probability and severity of adverse effects inherent in the development of software. Consequently, risk assessment and management, as a process, will more and more assume the role of an overall cross-functional system integration agent. Evaluating the changes that ought to take place in response to this shift in the overall pattern leads to two challenges. One is the need to reassess the role of a new breed of software systems engineers/systems integrators. The other is the need to develop new and appropriate metrics for measuring software technical risk. Effective systems integration necessitates that all functions, aspects, and components of the system must be accounted for along with an assessment of most risks associated with the system. Furthermore, for software-intensive systems, systems integration is not only the integration of components, but is also an understanding of the functionality that emerges from the integration. Indeed, when two or more software components are integrated, they often deliver more than the sum of what each was intended to deliver; this integration adds synergy and enhances functionality. In particular, the thesis advanced in this paper is that the process of risk assessment and management is an imperative requirement for successful systems integration; this is especially true for software-intensive systems. In addition, this paper advances the premise that the process of risk assessment and management is also the sine qua non requirement for ensuring against unwarranted time delay in a project's completion schedule, cost overrun, and failure to meet performance criteria. To achieve the aspired goal of systems integration, a hierarchical holographic modeling (HHM) framework, which builds on previous works of the authors, has been developed. This HHM framework comprises seven major considerations, perspectives, venues, or decompositions, each of which identifies the sources of risk in systems integration from a different, albeit with some overlap, viewpoint: software development, temporal perspective, leadership, the environment, the acquisition process, quality, and technology",1996,0, 841,ATM network design and optimization: a multirate loss network framework,"Asynchronous transfer mode (ATM) network design and optimization at the call-level may be formulated in the framework of multirate, circuit-switched, loss networks with effective bandwidth encapsulating cell-level behavior. Each service supported on the (wide area, broadband) ATM network is characterized by a rate or bandwidth requirement. Future networks will be characterized by links with very large capacities in circuits and by many rates. Various asymptotic results are given to reduce the attendant complexity of numerical calculations. A central element is a uniform asymptotic approximation (UAA) for link analyses. Moreover, a unified hybrid approach is given which allows asymptotic and nonasymptotic methods of calculations to be used cooperatively. Network loss probabilities are obtained by solving fixed-point equations. A canonical problem of route and logical network design is considered. An optimization procedure is proposed, which is guided by gradients obtained by solving a system of equations for implied costs. A novel application of the EM algorithm gives an efficient technique for calculating implied costs with changing traffic conditions.
Finally, we report numerical results obtained by the software package TALISMAN, which incorporates the theoretical results. The network considered has eight nodes, 20 links, six services, and as many as 160 routes",1996,0, 842,"Software engineering project management, estimation and metrics: discussion summary and recommendations","The topics of software engineering project management, estimation and metrics are discussed. The following questions are used to guide the discussion. How should student projects be managed-by staff or the students themselves? Can project management tools be used effectively for student projects? Should metrics be taught as a separate course component? To what degree can we rely on metric data derived from student projects? Much of the discussion focuses on the nature and organisation of the student projects themselves",1996,0, 843,More on the E-measure of subdomain testing strategies,"The expected number of failures detected (the E-measure) has been found to be a useful measure of the effectiveness of testing strategies. This paper takes a fresh perspective on the formulation of the E-measure. From this, we deduce new sufficient conditions for subdomain testing to be better than random testing. These conditions are simpler and more easily applicable than many of those previously found. Moreover, we obtain new characterisations of subdomain testing strategies in terms of the E-measure",1996,0, 844,Performance evaluation for distributed system components,"The performance evaluation of hardware and software system components is based on statistics that are long views on the behavior of these components. Since system resources may have unexpected behavior, relevant current information becomes useful in the management process of these systems, especially for data gathering, reconfiguration, and fault detection activities. Actually, there are few criteria to properly evaluate the current availability of component services within distributed systems. Hence, the management system cannot realistically select the most suitable decision for configuration. In this paper, we present a proposal for a continuous evaluation of component behaviour related to state changes. This model is further extended by considering different categories of events concerning the degradation of the operational state or usage state. Our proposals are based on the possibility of computing, at the component level, the current availability of this component by continuous evaluation. We introduce several current availability features and propose formulas to compute them. Other events concerning a managed object are classified as warning, critical or outstanding, which leads to a more accurate operational view on a component. Several counter-based events are thresholded to improve predictable reconfiguration decisions concerning the usability of a component. The main goal is to offer to the management system current, relevant information which can be used within management policies, such as a flexible polling frequency tuned with respect to the current evaluation, or particular aspects related to dynamic tests within distributed systems. Implementation issues with respect to the standard recommendations within distributed systems are presented.
Finally, we describe how the reconfiguration management systems can use these features in order to monitor, predict, improve the existing configuration, or accommodate the polling frequency according to several simple criteria",1996,0, 845,An approach to selecting metrics for detecting performance problems in information systems,"Early detection of performance problems is essential to limit their scope and impact. Most commonly, performance problems are detected by applying threshold tests to a set of detection metrics. For example, suppose that disk utilization is a detection metric, and its threshold value is 80%. Then, an alarm is raised if disk utilization exceeds 80%. Unfortunately, the ad hoc manner in which detection metrics are selected often results in false alarms and/or failing to detect problems until serious performance degradations result. To address this situation, we construct rules for metric selection based on analytic comparisons of statistical power equations for five widely used metrics: departure counts (D), number in system (L), response times (R), service times (S), and utilizations (U). These rules are assessed in the context of performance problems in the CPU and paging sub-systems of a production computer system",1996,0, 846,Compiler-assisted generation of error-detecting parallel programs,"We have developed an automated compile-time approach to generating error-detecting parallel programs. The compiler is used to identify statements implementing affine transformations within the program and to automatically insert code for computing, manipulating, and comparing checksums in order to detect data errors at runtime. Statements which do not implement affine transformations are checked by duplication. Checksums are reused from one loop to the next if this is possible, rather than recomputing checksums for every statement. A global dataflow analysis is performed in order to determine points at which checksums need to be recomputed. We also use a novel method of specifying the data distributions of the check data using data distribution directives so that the computations on the original data, and the corresponding check computations are performed on different processors. Results on the time overhead and error coverage of the error detecting parallel programs over the original programs are presented on an Intel Paragon distributed memory multicomputer",1996,0, 847,System dependability evaluation via a fault list generation algorithm,"The size and complexity of modern dependable computing systems have significantly compromised the ability to accurately measure system dependability attributes such as fault coverage and fault latency. Fault injection is one approach for the evaluation of dependability metrics. Unfortunately, fault injection techniques are difficult to apply because the size of the fault set is essentially infinite. Current techniques select faults randomly resulting in many fault injection experiments which do not yield any useful information. This research effort has developed a new deterministic, automated dependability evaluation technique using fault injection. The primary objective of this research effort was the development and implementation of algorithms which generate a fault set which fully exercises the fault detection and fault processing aspects of the system. The theory supporting the developed algorithms is presented first. Next, a conceptual overview of the developed algorithms is followed by the implementation details of the algorithms.
The last section of this paper presents experimental results gathered via simulation-based fault injection of an Interlocking Control System (ICS). The end result is a deterministic, automated method for accurately evaluating complex dependable computing systems using fault injection",1996,0, 848,Using neural networks to predict software faults during testing,"This paper investigates the application of principal components analysis to neural-network modeling. The goal is to predict the number of faults. Ten software product measures were gathered from a large commercial software system. Principal components were then extracted from these measures. We trained two neural networks, one with the observed (raw) data, and one with principal components. We compare the predictive quality of the two competing models using data collected from two similar systems. These systems were developed by the same organization, and used the same development process. For the environment we studied, applying principal-components analysis to the raw data yields a neural-network model whose predictive quality is statistically better than a neural-network model developed using the raw data alone. The improvement in model predictive quality is appreciable from a practitioner's point of view. We concur with published literature regarding the number of hidden layers needed in a neural-network model. A single hidden layer of neurons yielded a network of sufficient generality to be useful when predicting faults. This is important, because networks with more hidden layers take correspondingly more time to train. The application of alternative network architectures and training algorithms in software engineering should continue to be investigated",1996,0, 849,Generalized linear models in software reliability: parametric and semi-parametric approaches,"The penalized likelihood method is used for a new semi-parametric software reliability model. This new model is a nonparametric generalization of all parametric models where the failure intensity function depends only on the number of observed failures, viz. number-of-failures models (NF). Experimental results show that the semi-parametric model generally fits better and has better 1-step predictive quality than parametric NF. Using generalized linear models, this paper presents new parametric models (polynomial models) that have performances (deviance and predictive-qualities) approaching those of the semi-parametric model. Graphical and statistical techniques are used to choose the appropriate polynomial model for each data-set. The polynomial models are a very good compromise between the nonvalidity of the simple assumptions of classical NF, and the complexity of use and interpretation of the semi-parametric model. The latter represents a reference model that we approach by choosing adequate link and regression functions for the polynomial models",1996,0, 850,Computer-aided 3D tolerance analysis of disk drives,"Disk drives are multicomponent products in which product build variations directly affect quality. Dimensional management, an engineering methodology combined with software tools, was implemented in disk drive development engineering at IBM in San Jose to predict and optimize critical parameters in disk drives. It applies statistical simulation techniques to predict the amount of variation that can occur in the disk drive due to the specified design tolerances, fixturing tolerances, and assembly variations. 
This paper presents statistics describing the measurement values produced during simulations, a histogram showing the measurement values graphically, and an analysis of the process capability, Cpk, to ensure robust designs. Additionally, it describes how modeling can determine the location(s) of the predicted variation, the contributing factors, and their percent of contribution. Although a complete 2.5-in. disk drive was modeled and all critical variations such as suspension-to-disk gaps, disk-stack envelope, and merge clearances were analyzed, this paper presents for illustration only one critical disk real estate parameter. The example shows the capability of this methodology. VSA®-3D software by Variation Systems Analysis was used.",1996,0, 851,ORCHESTRA: a probing and fault injection environment for testing protocol implementations,"Ensuring that a distributed system meets its prescribed specification is a growing challenge that confronts software developers and system engineers. Meeting this challenge is particularly important for applications with strict dependability and/or timeliness constraints. We have developed a software fault injection tool, called ORCHESTRA, for testing dependability and timing properties of distributed protocols. ORCHESTRA is based on a simple yet powerful framework, called script-driven probing and fault injection. The emphasis of this approach is on experimental techniques intended to identify specific “problems” in a protocol or its implementation rather than the evaluation of system dependability through statistical metrics such as fault coverage. Hence, the focus is on developing fault injection techniques that can be employed in studying three aspects of a target protocol: i) detecting design or implementation errors, ii) identifying violations of protocol specifications, and iii) obtaining insights into the design decisions made by the implementers",1996,0, 852,SHARPE: a modeler's toolkit,"SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) is a program that supports the specification and automated solution of reliability and performance models. It contains support for fault trees, reliability block diagrams, reliability graphs, Markov and semi-Markov chains, generalized stochastic Petri nets, product-form queueing networks, and acyclic task graphs. These model types can be used separately or in hierarchical combination. SHARPE allows users complete freedom to choose model types, use results from models of any type as parameters for other models of any type and choose from among alternate algorithms for model solution. We present an example of how SHARPE can be used to analyze a hierarchical performability model",1996,0, 853,On-line recovery for rediscovered software problems,"This paper discusses a method that can allow a system to avoid or recover from the exercise of certain known faults at runtime and thus make certain software upgrades unnecessary. The method uses the knowledge of the characteristic symptoms of a software fault and the appropriate recovery action for the fault to detect and recover from the future exercise of the fault. An analysis of field data shows that the method is applicable to about 25% of faults and there is a potential for this number to go up.
Using the method can not only improve the availability of user applications, but can also reduce the number of rediscovered problems and hence reduce the resources required for software service",1996,0, 854,A non-homogeneous Markov software reliability model with imperfect repair,"This paper reviews existing non-homogeneous Poisson process (NHPP) software reliability models and their limitations, and proposes a more powerful non-homogeneous Markov model for the fault detection/removal problem. In addition, this non-homogeneous Markov model allows for the possibility of a finite time to repair a fault and for imperfections in the repair process. The proposed scheme provides the basis for decision making both during the testing and the operational phase of the software product. Software behavior in the operational phase and the development test phase are related and the release time formulae are derived. Illustrations of the proposed model are provided",1996,0, 855,Evaluation of integrated system-level checks for on-line error detection,"This paper evaluates the capabilities of an integrated system level error detection technique using fault and error injection. This technique is comprised of two software level mechanisms for concurrent error detection, control flow checking using assertions (CCA) and data error checking using application specific data checks. Over 300,000 faults and errors were injected and the analysis of the results reveals that the CCA detects 95% of all the errors while the data checks are able to detect subtle errors that go undetected by the CCA technique. Latency measurements also showed that the CCA technique is faster than the data checks in detecting the error. When both techniques were incorporated, the system was able to detect over 98% of all injected errors",1996,0, 856,Detecting failed processes using fault signatures,"A strategy is presented for automatically identifying processes that have failed. A determination is made of the types of data that need to be collected, and circumstances under which the approach is likely to be useful. The strategy is applied to generate signatures for three different types of workloads, and several different resources. A typical failure is injected into the process, and the associated signatures are presented for the same workloads and resources. A measure is defined that is used to determine whether or not a signature is likely to be indicative of a faulty process",1996,0, 857,Predictive modeling of software quality for very large telecommunications systems,"Society's reliance on large complex telecommunications systems mandates high reliability. Controlling faults in software requires that one can predict problems early enough to take preventive action. Software metrics are the basis for such predictions, and thus, many organizations are collecting volumes of software metric data. Collecting software metrics is not enough. One must translate measurements into predictions. This study systematically presents a methodology for developing models that predict software quality factors. The individual details of this methodology may be familiar, but the whole modeling process must be integrated to produce successful predictions of software quality. We use two example studies to illustrate each step. One predicted the number of faults to be discovered in each module, and the other predicted whether each module would be considered fault-prone.
The examples were based on the same data set, consisting of a sample from a very large telecommunications system. The sample of modules represented about 1.3 million lines of code",1996,0, 858,Lag-indexed VQ for pitch filter coding,"The presence of a pitch predictor is critical to low-rate performance of CELP coders. Unfortunately the rate required for high-quality encoding of pitch filter parameters is often a large fraction of the available bandwidth. Moreover, the application of analysis-by-synthesis techniques to pitch filter optimization can require excessive computation. We present a vector quantization (VQ) approach for coding pitch filter parameters which maintains a subjective quality equivalent to other coding schemes while requiring lower (variable) rate with less closed-loop computation than other VQ techniques",1996,0, 859,Algorithms for the generation of state-level representations of stochastic activity networks with general reward structures,"Stochastic Petri nets (SPNs) and extensions are a popular method for evaluating a wide variety of systems. In most cases, their numerical solution requires generating a state-level stochastic process, which captures the behavior of the SPN with respect to a set of specified performance measures. These measures are commonly defined at the net level by means of a reward variable. In this paper, we discuss issues regarding the generation of state-level reward models for systems specified as stochastic activity networks (SANs) with “step-based reward structures”. Step-based reward structures are a generalization of previously proposed reward structures for SPNs and can represent all reward variables that can be defined on the marking behavior of a net. While discussing issues related to the generation of the underlying state-level reward model, we provide an algorithm to determine whether a given SAN is “well-specified”. A SAN is well-specified if choices about which instantaneous activity completes among multiple simultaneously-enabled instantaneous activities do not matter, with respect to the probability of reaching next possible stable markings and the distribution of reward obtained upon completion of a timed activity. The fact that a SAN is well specified is both a necessary and sufficient condition for its behavior to be completely probabilistically specified, and hence is an important property to determine",1996,0, 860,Substituting Voas's testability measure for Musa's fault exposure ratio,"This paper analyzes the relationship between Voas's (1991) software testability measure and Musa's (1987) fault exposure ratio, K. It has come to our attention that industrial users of Musa's model employ his published, experimental value for K, 4.2×10^-7, for their projects, instead of creating their own K estimate for the system whose reliability is being assessed. We provide a theoretical argument for how a slight modification to Voas's formulae for determining a predicted minimal fault size can provide a predicted average fault size. The predicted average fault size can be used as a substitute for 4.2×10^-7, and in our opinion is a more plausible choice for K",1996,0, 861,Software process improvement at Raytheon,"Over the last eight years, Raytheon mounted a sustained effort to improve its software development processes.
The author describes the organizational structure and activities that made these processes more productive and predictable while raising software quality",1996,0, 862,Video compression with random neural networks,"We summarize a novel neural network technique for video compression, using a “point-process” type neural network model we have developed, which is closer to biophysical reality and is mathematically much more tractable than standard models. Our algorithm uses an adaptive approach based upon the users' desired video quality Q, and achieves compression ratios of up to 500:1 for moving gray-scale images, based on a combination of motion detection, compression and temporal subsampling of frames. This leads to a compression ratio of over 1000:1 for full-color video sequences with the addition of the standard 4:1:1 spatial subsampling ratios in the chrominance images. The signal-to-noise ratio obtained varies with the compression level and ranges from 29 dB to over 34 dB. Our method is computationally fast so that compression and decompression could possibly be performed in real-time software",1996,0, 863,Cooperative congestion control schemes in ATM networks,"One of the basic problems faced in the design of efficient traffic and congestion control schemes is related to the wide variety of services with different traffic characteristics and quality of service (QoS) requirements supported by ATM networks. The authors propose a new way of organizing the control system so that complexity is easier to manage. The multi-agent system approach, which provides the use of adaptive and intelligent agents, is investigated. The authors show, through the two congestion control schemes proposed, how to take advantage of using intelligent agents to increase the efficiency of the control scheme. First, TRAC (threshold based algorithm for control) is proposed, which is based on the use of fixed thresholds which enables the anticipation of congestion. This mechanism is compared with the push-out algorithm and it is shown that the authors' proposal improves the network performance. Also discussed is the necessity of taking into account the network dynamics. In TRAC, adaptive agents with learning capabilities are used to tune the values of the thresholds according to the status of the system. However, in this scheme, when congestion occurs, the actions we perform are independent of the nature of the traffic. Subsequently, we propose PATRAC (predictive agents in a threshold based algorithm for control) in which different actions are achieved according to the QoS requirements and to the prediction of traffic made by the agents. Specifically, re-routing is performed when congestion is heavy or is expected to be heavy and the traffic is cell loss sensitive. This re-routing has to deflect the traffic away from the congestion point. In this scheme, we propose a cooperative and predictive control scheme provided by a multi-agent system that is built in to each node",1996,0, 864,Implementing fail-silent nodes for distributed systems,"A fail-silent node is a self-checking node that either functions correctly or stops functioning after an internal failure is detected. Such a node can be constructed from a number of conventional processors. In a software-implemented fail-silent node, the nonfaulty processors of the node need to execute message order and comparison protocols to “keep in step” and check each other, respectively.
In this paper, the design and implementation of efficient protocols for a two processor fail-silent node are described in detail. The performance figures obtained indicate that in a wide class of applications requiring a high degree of fault tolerance, software-implemented fail-silent nodes constructed simply by utilizing standard “off-the-shelf” components are an attractive alternative to their hardware-implemented counterparts that do require special-purpose hardware components, such as fault-tolerant clocks, comparator, and bus interface circuits",1996,0, 865,On static compaction of test sequences for synchronous sequential circuits,"We propose three static compaction techniques for test sequences of synchronous sequential circuits. We apply the proposed techniques to test sequences generated for benchmark circuits by various test generation procedures. The results show that the test sequences generated by all the test generation procedures considered can be significantly compacted. The compacted sequences thus have shorter test application times and smaller memory requirements. As a by-product, the fault coverage is sometimes increased as well. More importantly, the ability to significantly reduce the length of the test sequences indicates that it may be possible to reduce test generation time if superfluous input vectors are not generated",1996,0, 866,N-version programming: a unified modeling approach,"This paper presents a unified approach aimed at modeling the joint behavior of the N version system and its operational environment. Our objective is to develop a reliability model that considers both functional and performance requirements which is particularly important for real-time applications. The model is constructed in two steps. First, the Markov model of N version failure and execution behavior is developed. Next, we develop the user-oriented model of the operational environment. In accounting for dependence we use the idea that the influence of the operational environment on versions' failures and execution times induces correlation. The model addresses a number of basic issues and yet yields closed-form solutions that provide considerable insight into how reliability is affected by both versions' characteristics and the operational environment",1996,0, 867,Measurement of power quality factors in electrical networks,"Summary form only given, as follows. A microprocessor based system that correctly estimates asymmetry and harmonics rating is presented. The measurement method, the measuring channel structure, the DSP algorithm and software structure are described.",1996,0, 868,Data collection and recording guidelines for achieving intelligent diagnostics,"Whether using human or machine intelligence, the best decisions are made when using all available information from all relevant sources. In contrast to traditional automatic test equipment (ATE) programming techniques, which stop on failures to perform a diagnosis, we would collect from the entire sequence of electronic tests, and integrate these data with circuit topology and external data (including thermal imaging) to obtain a best informed diagnosis. Novel on-the-fly neural network paradigms provide the capability to make correct assessments from a large history of repair data. Applications with these paradigms require a computer with sufficient horsepower (such as a PC) and high-level programming languages.
The circuit topology assessment program can identify “dead” nodes through an entropy calculation, making it possible to perform circuit diagnosis in the absence of good/bad historical information. In theory, this technique could be applied to any automatic test equipment platform, provided that the data collection activities were in place",1996,0, 869,The Neural Engineering of ATE,"This paper describes a neural network-based software configuration implemented on a VXI automatic test equipment platform. The purpose of this unique software configuration, incorporating neural network and other artificial intelligence (AI) technologies, is to enhance ATE capability and efficiency by providing an intelligent interface for a variety of functions that are controlled or monitored by the software. This includes automated and user-directed control of the ATE and diagnostic strategy to streamline test sequences through the use of advanced diagnostic strategies. The use of Neural Engineering techniques is stressed which, in this context, foster the integration of diverse sensor technology capable of analyzing units under test (UUT) from different perspectives that provide new insight into static, dynamic, and historical UUT performance. Such methods can achieve greater accuracy in failure diagnosis and fault prediction; reduction in cannot duplicate (CND), retest-OK (RTOK) rates, and ambiguity group size; and improved confidence in performance testing that results in the determination of UUT ready for issue status. The hardware configuration of the ATE consists of an embedded 486 100 MHz PC controller and an instrument suite as follows: Power Supply, DMM, Digitizer, Counter/Timer, Digital I/O, Pulse Generator, Switching Matrix, Relay, Function Generator and Arbitrary Function Generator",1996,0, 870,Using neural networks to solve testing problems,"This paper discusses using neural networks for diagnosing circuit faults. As a circuit is tested, the output signals from a Unit Under Test can vary as different functions are invoked by the test. When plotted against time, these signals create a characteristic trace for the test performed. Sensors in the ATS can be used to monitor the output signals during test execution. Using such an approach, defective components can be classified using a neural network according to the pattern of variation from that exhibited by a known good card. This provides a means to develop testing strategies for circuits based upon observed performance rather than domain expertise. Such capability is particularly important with systems whose performance, especially under faulty conditions, is not well documented or where suitable domain knowledge and experience does not exist. Thus, neural network solutions may in some application areas exhibit better performance than either conventional algorithms or knowledge-based systems. They may also be retrained periodically as a background function, resulting in the network gaining accuracy over time",1996,0, 871,Thermal imaging is the sole basis for repairing circuit cards in the F-16 flight control panel,"When the card-level tester for the F-16 flight control panel (FLCP) had been dysfunctional for over 18 months, infrared (IR) thermography was investigated as a viable alternative for diagnosing and repairing the 7 cards in the FLCP box. Using thermal imaging alone, over 20 FLCP boxes were made serviceable, thus bringing the FLCP out of Awaiting Parts (AWP) status.
In the current environment of downsizing, the problem of returning the card level tester to functionality is remote. Success of the imager points to corresponding inadequacies of the Automatic Test Equipment to detect certain kinds of failure. In particular, we were informed that one particular relay had never been ordered in the life of the F-16 system, whereas some cards became functional when the relay was the sole component replaced. The incorporation of a novel on-the-fly neural network paradigm, the Neural Radiant Energy Detection System (NREDS), now has the capability to make correct assessments from a large history of repair data. By surveying the historical data, the network makes assessments about relevant repair actions and probable component malfunctions. On FLCP A-2 cards, a repair accuracy of 11 out of 12 was achieved during the first repair attempt",1996,0, 872,Neuron and dendrite pruning by synaptic weight shifting in polynomial time,"There is a lot of redundant information in ANNs. Therefore, pruning algorithms are required in order to save the computational cost and reduce the complexity of the system. A weight shifting technique which was proposed for increasing the fault tolerance of the neural networks is applied to prune links and/or neurons of the networks. After the training converges, each hidden neuron that has less effect on the performance is removed and its weights are shifted to other links by the weight shifting technique. The weights of removed links are still in the network only with other links of the same neuron. The experimental result shows that 5%-45% of links can be removed and the pruned network still gives the performance at the same level as the unpruned network. This technique does not require the retraining process, modification of the error cost function, or computational overhead. The time complexities of the link and neuron prunings are O(n²) and O(m), respectively, where n is the number of links and m is the number of neurons in the network",1996,0, 873,Accelerated aging of high voltage stator bars using a power electronic converter,"The use of computer-aided partial discharge detectors has become an increasingly useful tool in the application of electric machine insulation condition monitoring. The problem that exists is the lack of field and laboratory data associated with the faults that the condition monitoring aims to find. A stator winding aging bed has been constructed to allow the monitoring of the growth of induced faults, for comparison with field measurements. The use of a low emission, variable frequency power electronic converter, along with a computer-based discharge analyser, has allowed the study of supply frequency dependence of partial discharge activity along with the monitoring of fault initiation and development. The layout of the aging bed is discussed along with the operating performance of the variable frequency converter and partial discharge results gathered",1996,0, 874,Atomic data from plasma based measurements and compilations of transition probabilities,"Summary form only given. High efficiency electrical light sources used in lighting applications are based on electrical discharges in plasmas. The systematic search for improved lighting plasmas increasingly relies on plasma discharge modeling with computers and requires better and more comprehensive knowledge of basic atomic data such as radiative transition probabilities and collision cross sections.
NIST has ongoing research programs aimed at the study of thermal equilibrium plasmas such as high pressure electric arcs and non-equilibrium plasmas in radio-frequency discharges and high current hollow cathode lamps. In emission experiments we have measured branching fractions and determined absolute transition probabilities for spectral lines in Ne I, Ne II, F I, O I and O II. In case of the Ne measurements the line radiation emitted by a high current hollow cathode lamp was analyzed with a 2 m monochromator. In addition, the spectra of Ne I and Ne II were measured with a UV Fourier transform spectrometer at very high spectral resolution. The line intensities were subsequently calibrated to absolute radiances. Where complete sets of transitions from an upper level could be observed, recent lifetime data for these levels were used to determine absolute transition probabilities. The accuracy of our branching fraction data is 5% for the stronger lines.",1996,0, 875,A microcomputer-based automatic testing system for digital circuits using signature analysis,"This paper proposes a microcomputer based automatic test equipment for digital circuits using signature analysis. The proposed system consists of a microcomputer equipped with a special interface card and controlled by a special software package. The system hardware is designed to transmit the test pattern generated by the microcomputer to the circuit under test (CUT), receive the response from the CUT, display the signatures and generate all necessary signals for the CUT. The software package is designed to control the system hardware. It provides the signatures of all modes simultaneously in one measurement and displays them on the screen of the microcomputer, compares them with those of the golden unit, and locates source faulty node(s). Examples for checking the capabilities of the proposed system to detect and locate faults in the CUT are illustrated",1996,0, 876,A proposed ATE for digital systems,"An ATE-system that can perform digital system testing to localize the faulty part is developed. The test system strategy is based on system and/or board partitioning and hierarchical testing. Testing takes place automatically by software programs within complete test packages including test models, test simulation, fault detection and fault diagnosis",1996,0, 877,Improving the effectiveness of software prefetching with adaptive executions,"The effectiveness of software prefetching for tolerating latency depends mainly on the ability of programmers and/or compilers to: 1) predict in advance the magnitude of the run-time remote memory latency, and 2) insert prefetches at a distance that minimizes stall time without causing cache pollution. Scalable heterogeneous multiprocessors, such as network of computers (NOWs), present special challenges to static software prefetching because on these systems the network topology and node configuration are not completely determined at compile time. Furthermore, dynamic software prefetching cannot do much better because individual nodes on heterogeneous large NOWs would tend to experience different remote memory delays over time. A fixed prefetch distance, even when computed at run-time, cannot perform well for the whole duration of a software pipeline. Here we present an adaptive scheme for software prefetching that makes it possible for nodes to dynamically change, not only the amount of prefetching, but the prefetch distance as well. 
Doing this makes it possible to tailor the execution of the software pipeline to the prevailing conditions affecting each node. We show how simple performance data collected by hardware monitors can allow programs to observe, evaluate and change their prefetching policies. Our results show that on the benchmarks we simulated, adaptive prefetching was capable of improving performance over static and dynamic prefetching by 10% to 60%. More important, future increases in the heterogeneity and size of NOWs will increase the advantages of adaptive prefetching over static and dynamic schemes",1996,0, 878,Editing for quality: how we're doing it,"Editing for quality (EFQ) is a process used by writers and editors at IBM's Santa Teresa Software Development laboratory to ensure information deliverables will meet customers' requirements and expectations. This editing process includes an overall appraisal of the information unit as a numeric score, or “EFQ index”. An EFQ edit consists of a thorough technical edit providing detailed comments to the writer, a written report summarizing the strengths and weaknesses of the information unit in each of the eight categories, and the EFQ index, which is a relative score on a scale of 1 to 100. This paper describes the EFQ process, how it improves the quality of information, and how it is expected to predict customer satisfaction. It also discusses the benefits to editors and writers, summarizing interviews conducted with those who have used the EFQ process",1996,0, 879,Knowledge-based re-engineering of legacy programs for robustness in automated design,"Systems for automated design optimization of complex real-world objects can, in principle, be constructed by combining domain-independent numerical routines with existing domain-specific analysis and simulation programs. Unfortunately, such “legacy” analysis codes are frequently unsuitable for use in automated design. They may crash for large classes of input, be numerically unstable or locally non-smooth, or be highly sensitive to control parameters. To be useful, analysis programs must be modified to reduce or eliminate only the undesired behaviors, without altering the desired computation. To do this by direct modification of the programs is labor-intensive, and necessitates costly revalidation. We have implemented a high-level language and runtime environment that allow failure-handling strategies to be incorporated into existing Fortran and C analysis programs while preserving their computational integrity. Our approach relies on globally managing the execution of these programs at the level of discretely callable functions so that the computation is only affected when problems are detected. Problem handling procedures are constructed from a knowledge base of generic problem management strategies. We show that our approach is effective in improving analysis program robustness and design optimization performance in the domain of conceptual design of jet engine nozzles",1996,0, 880,A statistical approach to the inspection checklist formal synthesis and improvement,Proposes a statistical approach to the formal synthesis and improvement of inspection checklists. The approach is based on defect causal analysis and defect modeling. The defect model is developed using IBM's Orthogonal Defect Classification. A case study describes the steps required and a tool for the implementation. The advantages and disadvantages of both empirical and statistical methods are discussed and compared.
It is suggested that a statistical approach should be used in conjunction with the empirical approach. The main advantage of the proposed technique is that it allows us to tune a checklist according to the most recent project experience and to identify optimal checklist items even when a source document does not exist,1996,0, 881,Independent recovery in large-scale distributed systems,"In large systems, replication can become an important means to improve data access times and availability. Existing recovery protocols, on the other hand, were proposed for small-scale distributed systems. Such protocols typically update stale, newly-recovered sites with replicated data and resolve the commit uncertainty of recovering sites. Thus, given that in large systems failures are more frequent and that data access times are costlier, such protocols can potentially introduce large overheads in large systems and must be avoided, if possible. We call these protocols dependent recovery protocols since they require a recovering site to consult with other sites. Independent recovery has been studied in the context of one-copy systems and has been proven unattainable. This paper offers independent recovery protocols for large-scale systems with replicated data. It shows how the protocols can be incorporated into several well-known replication protocols and proves that these protocols continue to ensure data consistency. The paper then addresses the issue of nonblocking atomic commitment. It presents mechanisms which can reduce the overhead of termination protocols and the probability of blocking. Finally, the performance impact of the proposed recovery protocols is studied through the use of simulation and analytical studies. The results of these studies show that the significant benefits of independent recovery can be enjoyed with a very small loss in data availability and a very small increase in the number of transaction abortions",1996,0, 882,Development and application of composite complexity models and a relative complexity metric in a software maintenance environment,"A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It had been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software stages. Ways to alleviate software maintenance complexity and high costs should originate in software design. Two aspects of maintenance deserve attention: protocols for locating and rectifying defects and ensuring that no new defects are introduced in the development phase of the software process, and development of protocols for increasing the quality and reducing the costs associated with modification, enhancement, and upgrading of software. This article focuses on the second aspect and puts forward newly developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to each other. Significant success was achieved by use of the models and relative metric to identify maintenance-prone modules",1996,0, 883,New generation of real-time software-based video codec: Popular Video Coder II (PVC-II),"A new generation of real-time software-based video coder, the Popular Video Coder II (PVC-II), is presented.
The transform and the motion estimation parts are removed and the quantizer and the entropy coder are modified in the PVC-II as compared with the traditional video coder. Moreover, the PVC-II improves the coding performance of its previous version, the Popular Video Coder, by introducing several newly developed efficient coding techniques, such as the adaptive quantizer, the adaptive resolution reduction and the fixed-model intraframe DPCM, into the codec. The coding speed, compression ratio and picture quality of the PVC-II are good enough for applying it to various real-time multimedia applications. Since no compression hardware is needed for the PVC-II to encode and decode video data, the cost and complexity of developing multimedia applications, such as video phone and multimedia e-mail systems, can be largely reduced",1996,0, 884,Sensitivity of reliability-growth models to operational profile errors vs. testing accuracy [software testing],"This paper investigates: 1) the sensitivity of reliability-growth models to errors in the estimate of the operational profile (OP); and 2) the relation between this sensitivity and the testing accuracy for computer software. The investigation is based on the results of a case study in which several reliability-growth models are applied during the testing phase of a software system. The faults contained in the system are known in advance; this allows measurement of the software reliability-growth and comparison with the estimates provided by the models. Measurement and comparison are repeated for various OPs, thus giving information about the effect of a possible error in the estimate of the OP. The results show that: 1) the predictive accuracy of the models is not heavily affected by errors in the estimate of the OP; and 2) this relation depends on the accuracy with which the software system has been tested",1996,0, 885,Analyze-NOW-an environment for collection and analysis of failures in a network of workstations,"This paper describes Analyze-NOW, an environment for the collection and analysis of failures/errors in a network of workstations. Descriptions cover the data collection methodology and the tool implemented to facilitate this process. Software tools used for analysis are described, with emphasis on the details of the implementation of the Analyzer, the primary analysis tool. Application of the tools is demonstrated by using them to collect and analyze failure data (for 32-week period) from a network of 69 SunOS-based workstations. Classification based on the source and effect of faults is used to identify problem areas. Different types of failures encountered on the machines and network are highlighted to develop a proper understanding of failures in a network environment. The results from the analysis tool should be used to pinpoint the problem areas in the network. The results obtained from using Analyze-NOW on failure data from the monitored network reveal some interesting behavior of the network. Nearly 70% of the failures were network-related, whereas disk errors were few. Network-related failures were 75% of all hard-failures (failures that make a workstation unusable). Half of the network-related failures were due to servers not responding to clients, and half were performance-related and others. Problem areas in the network were found using this tool. The authors' approach was compared to the method of using the network architecture to locate problem areas. 
This comparison showed that locating problem areas using network architecture over-estimates the number of problem areas",1996,0, 886,Assessment of fault-detection processes: an approach based on reliability techniques,"Two major factors influence the number of faults uncovered by a fault-detection process applied to a software artifact (e.g., specification, code): ability of the process to uncover faults, quality of the artifact (number of existing faults). These two factors must be assessed separately, so that one can: switch to a different process if the one being used is not effective enough, or stop the process if the number of remaining faults is acceptable. The fault-detection process assessment model can be applied to all sorts of artifacts produced during software development, and provides measures for both the `effectiveness of a fault-detection process' and the `number of existing faults in the artifact'. The model is valid even when there are zero defects in the artifact or the fault-detection process is intrinsically unable to uncover faults. More specifically, the times between fault discoveries are modeled via reliability-based techniques with an exponential distribution. The hazard rate is the product of `effectiveness of the fault-detection process' and `number of faults in the artifact'. Based on general hypotheses, the number of faults in an artifact follows a Poisson distribution. The unconditional distribution, whose parameters are estimated via maximum likelihood, is obtained",1996,0, 887,A universal technique for accelerating simulation of scan test patterns,Scan test patterns are typically generated by ATPG tools which use a zero delay simulation model. These scan test patterns have to be verified using a golden simulator which is approved by the chip foundry with full timing before the patterns are accepted for manufacturing test. This can be very time consuming for many designs because of the size of the test data and the large number of test cycles which have to be simulated. A universal technique for accelerating scan test pattern simulation which can be used for any simulator with any scan cell type from any foundry is proposed. The extensions to test data languages to support universal acceleration of scan pattern simulation are also proposed. Some experiment results are also provided,1996,0, 888,High resolution IDDQ characterization and testing-practical issues,"IDDQ testing has become an important contributor to quality improvement of CMOS ICs. This paper describes high resolution IDDQ characterization and testing (from the sub-nA to μA level) and outlines test hardware and software issues. The physical basis of IDDQ is discussed. Methods for statistical analysis of IDDQ data are examined, as interpretation of the data is often as important as the measurement itself. Applications of these methods to set reasonable test limits for detecting defective product are demonstrated",1996,0, 889,Cost effective frequency measurement for production testing: new approaches on PLL testing,The growing market of consumer multimedia products demands cost effective test solutions for built-in video functions in core chipsets. This work describes a new software approach to perform precise frequency measurements on a digital test system,1996,0, 890,Feasibility analysis of fault-tolerant real-time task sets,"Many safety critical real-time systems employ fault tolerant strategies in order to provide predictable performance in the presence of failures.
One technique commonly employed is time redundancy using retry/re-execution of tasks. This can in turn affect the correctness of the system by causing deadlines to be missed. This paper provides exact schedulability tests for fault tolerant task sets under specified failure hypothesis",1996,0, 891,Realistic defect coverages of voltage and current tests,This paper presents the realistic defect coverage of voltage and current measurements on a Philips digital CMOS ASIC library obtained by defect simulation. This analysis was made to study the minimisation of overlap between voltage and current based test methods by means of defect detection tables. Results show a poor defect coverage of voltage measurements and the major influence of the defect resistance on the voltage detectability and the possible overlap.,1996,0, 892,A hybrid genetic algorithm applied to automatic parallel controller code generation,"High performance real-time digital controllers employ parallel hardware such as transputers and digital signal processors to achieve short response times when this is not achievable with conventional uni-processor systems. Implementing such fine-grained parallel software is error-prone and difficult. We show how a hybrid genetic algorithm can be applied to automate this parallel code generation for a set of regular control problems such that significant speedup is obtained with few constraints on hardware. Genetic algorithms are particularly suited to this problem since the mapping problem is combinatorial in nature. However, one drawback of the genetic algorithm is that it is sensitive to small changes in the problem size. To overcome this problem the presented approach partitions the original problem into sub-problems, called boxes. The scheduling of these boxes is similar to the VLSI placement problem",1996,0, 893,Integrating metrics and models for software risk assessment,"Enhanced Measurement for Early Risk Assessment of Latent Defects (EMERALD) is a decision support system for assessing reliability risk. It is used by software developers and managers to improve telecommunications software service quality as perceived by the customer and the end user. Risk models are based on static characteristics of source code. This paper shows how a system such as EMERALD can enhance software development, testing, and maintenance by integration of: a software quality improvement strategy; measurements and models; and delivery of results to the desktop of developers in a timely manner. This paper also summarizes empirical experiments with EMERALD's models using data from large industrial telecommunications software systems. EMERALD has been applied to a very large system with over 12 million lines of source code within procedures. Experience and lessons learned are also discussed",1996,0, 894,Using the genetic algorithm to build optimal neural networks for fault-prone module detection,"The genetic algorithm is applied to developing optimal or near optimal backpropagation neural networks for fault-prone/not-fault-prone classification of software modules. The algorithm considers each network in a population of neural networks as a potential solution to the optimal classification problem. Variables governing the learning and other parameters and network architecture are represented as substrings (genes) in a machine-level bit string (chromosome). 
When the population undergoes simulated evolution using genetic operators-selection based on a fitness function, crossover, and mutation-the average performance increases in successive generations. We found that, on the same data, compared with the best manually developed networks, evolved networks produced improved classifications in considerably less time, with no human effort, and with greater confidence in their optimality or near optimality. Strategies for devising a fitness function specific to the problem are explored and discussed",1996,0, 895,An on-line algorithm for checkpoint placement,"Checkpointing is a common technique for reducing the time to recover from faults in computer systems. By saving intermediate states of programs in a reliable storage device, checkpointing enables one to reduce the processing time loss caused by faults. The length of the intervals between the checkpoints affects the execution time of the programs. Long intervals lead to a long re-processing time, while too-frequent checkpointing leads to a high checkpointing overhead. In this paper, we present an online algorithm for the placement of checkpoints. The algorithm uses online knowledge of the current cost of a checkpoint when it decides whether or not to place a checkpoint. We show how the execution time of a program using this algorithm can be analyzed. The total overhead of the execution time when the proposed algorithm is used is smaller than the overhead when fixed intervals are used. Although the proposed algorithm uses only online knowledge about the cost of checkpointing, its behavior is close to that of the off-line optimal algorithm that uses the complete knowledge of the checkpointing cost",1996,0,1026 896,Unification of finite failure non-homogeneous Poisson process models through test coverage,"A number of analytical software reliability models have been proposed for estimating the reliability growth of a software product. We present an Enhanced Non-Homogeneous Poisson Process (ENHPP) model and show that previously reported Non-Homogeneous Poisson Process (NHPP) based models, with bounded mean value functions, are special cases of the ENHPP model. The ENHPP model differs from previous models in that it incorporates explicitly the time varying test coverage function in its analytical formulation, and provides for defective fault detection and test coverage during the testing and operational phases. The ENHPP model is validated using several available failure data sets",1996,0, 897,A conservative theory for long term reliability growth prediction,The paper describes a different approach to software reliability growth modelling which should enable conservative long term predictions to be made. Using relatively standard assumptions it is shown that the expected value of the failure rate after a usage time t is bounded by: λ̄t ⩽ N/(et) where N is the initial number of faults and e is the exponential constant. This is conservative since it places a worst case bound on the reliability rather than making a best estimate. We also show that the predictions might be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long term software reliability growth predictions based solely on prior estimates of the number of residual faults. The predicted bound appears to agree with a wide range of industrial and experimental reliability data.
It is shown that less pessimistic results can be obtained if additional assumptions are made about the failure rate distribution of faults,1996,0, 898,An empirical evaluation of maximum likelihood voting in failure correlation conditions,"The maximum likelihood voting (MLV) strategy was recently proposed as one of the most reliable voting methods. The strategy determines the most likely correct result based on the reliability history of each software version. However, the theoretical results were obtained under the assumption that inter-version failures are not correlated by common cause faults. We first discuss the issues that arise in practical implementation of MLV, and present an extended version of the MLV algorithm that uses component reliability estimates to break voting ties. We then empirically evaluate the implemented MLV strategy in a situation where the inter-version failures are highly correlated. Our results show that, although in real situations MLV carries no reliability guarantees, it tends to be statistically more reliable, even under high inter-version correlation conditions, than other voting strategies that we have examined. We also compare implemented MLV performance with that of Recovery Block and hybrid Consensus Recovery Block approaches. Our results show that MLV often outperforms Recovery Block and that it can successfully compete with more elaborate Consensus Recovery Block. To the best of our knowledge, this is the first empirical evaluation of the MLV strategy",1996,0, 899,Data partition based reliability modeling,The paper presents an approach to software reliability modeling using data partitions derived from tree based models. We use these data sensitive partitions to group data into clusters with similar failure intensities. The series of data clusters associated with different time segments forms a piecewise linear model for the assessment and short term prediction of reliability. Long term prediction can be provided by the dual model that uses these grouped data as input fitted to some failure count variations of the traditional software reliability growth models. These partition based reliability models can be used effectively to measure and predict the reliability of software systems and can be readily integrated into our strategy of reliability assessment and improvement using tree based modeling,1996,0, 900,Detection of software modules with high debug code churn in a very large legacy system,"Society has become so dependent on reliable telecommunications, that failures can risk loss of emergency service, business disruptions, or isolation from friends. Consequently, telecommunications software is required to have high reliability. Many previous studies define the classification fault prone in terms of fault counts. This study defines fault prone as exceeding a threshold of debug code churn, defined as the number of lines added or changed due to bug fixes. Previous studies have characterized reuse history with simple categories. This study quantified new functionality with lines of code. The paper analyzes two consecutive releases of a large legacy software system for telecommunications. We applied discriminant analysis to identify fault prone modules based on 16 static software product metrics and the amount of code changed during development. Modules from one release were used as a fit data set and modules from the subsequent release were used as a test data set. 
In contrast, comparable prior studies of legacy systems split the data to simulate two releases. We validated the model with a realistic simulation of utilization of the fitted model with the test data set. Model results could be used to give extra attention to fault prone modules and thus, reduce the risk of unexpected problems",1996,0, 901,Fault exposure ratio estimation and applications,"One of the most important parameters that control reliability growth is the fault exposure ratio (FER) identified by J.D. Musa et al. (1991). It represents the average detectability of the faults in software. Other parameters that control reliability growth are software size and execution speed of the processor which are both easily evaluated. The fault exposure ratio thus presents a key challenge in our quest towards understanding the software testing process and characterizing it analytically. It has been suggested that the fault exposure ratio may depend on the program structure, however the structuredness as measured by decision density may average out and may not vary with program size. In addition FER should be independent of program size. The available data sets suggest that FER varies as testing progresses. This has been attributed partly to the non-randomness of testing. We relate defect density to FER and present a model that can be used to estimate FER. Implications of the model are discussed. This model has three applications. First, it offers the possibility of estimating parameters of reliability growth models even before testing begins. Secondly, it can assist in stabilizing projections during the early phases of testing when the failure intensity may have large short term swings. Finally, since it allows analytical characterization of the testing process, it can be used in expressions describing processes like software test coverage growth",1996,0, 902,Developing a design complexity measure,"The cost to develop and maintain software is increasing at a rapid rate. The majority of the total cost to develop software is spent in the post-deployment-maintenance phase of the software life-cycle. In order to reduce life-cycle costs, more effort needs to be spent in earlier phases of the software life-cycle. One characteristic that merits investigation is the complexity of a software design. Project performance metrics (i.e. effort, schedule, defect density, etc.) are driven by software complexity, and affect project costs. The Software Design Complexity Measure examines an organization's historical project performance metrics along with the complexity of a project's software design to estimate future project performance metrics. These estimates indicate costs that the evaluated software design will incur in the future. Equipped with future cost estimates, a project manager will be able to make more informed decisions concerning the future of the project",1996,0, 903,A description of the software element of the NASA EME flight tests,"In support of NASA's Fly-By-Light/Power-By-Wire (FBL/PBW) program, a series of flight tests were conducted by NASA Langley Research Center in February, 1995. The NASA Boeing 757 was flown past known RF transmitters to measure both external and internal radiated fields. The aircraft was instrumented with strategically located sensors for acquiring data on shielding effectiveness and internal coupling. The data are intended to support computational and statistical modeling codes used to predict internal field levels of an electromagnetic environment (EME) on aircraft. 
The software was an integral part of the flight tests, as well as the data reduction process. The software, which provided flight test instrument control, data acquisition, and a user interface, executes on a Hewlett Packard (HP) 300 series workstation and uses HP VEEtest development software and the C programming language. Software tools were developed for data processing and analysis, and to provide a database organized by frequency bands, test runs, and sensors. This paper describes the data acquisition system on board the aircraft and concentrates on the software portion. Hardware and software interfaces are illustrated and discussed. Particular attention is given to data acquisition and data format. The data reduction process is discussed in detail to provide insight into the characteristics, quality, and limitations of the data. An analysis of obstacles encountered during the data reduction process is presented",1996,0, 904,A two-stage objective model for video quality evaluation,This paper describes a new methodology that might be useful to develop a model for comparing the present operational readiness of a video system with the same system's past performance. It provides the results of a study conducted to determine the feasibility of developing a video performance model using the ANSI committee T1A1.5 measures to allow the monitoring of video service performance across time. Based on the results of this study it is concluded that the performance measures can be used to develop an objective model of video service performance,1996,0, 905,Multiscale motion estimation for scalable video coding,"Motion estimation is an important component of video coding systems because it enables us to exploit the temporal redundancy in the sequence. The popular block-matching algorithms (BMAs) produce unnatural, piecewise constant motion fields that do not correspond to “true” motion. In contrast, our focus here is on high-quality motion estimates that produce a video representation that is less dependent on the specific frame-rate or resolution. To this end, we present an iterated registration algorithm that extends previous work on multiscale motion models and gradient-based estimation for coding applications. We obtain improved motion estimates and higher overall coding performance. Promising applications are found in temporally-scalable video coding with motion-compensated frame interpolation at the decoder. We obtain excellent interpolation performance and video quality; in contrast, BMA leads to annoying artifacts near moving image edges",1996,0, 906,Automatic classification of wafer defects: status and industry needs,"This paper describes the Automatic Defect Classification (ADC) beta site evaluations performed as part of the SEMATECH ADC project. Two optical review microscopes equipped with ADC software were independently evaluated in manufacturing environments. Both microscopes were operated in bright-field mode with white light illumination. ADC performance was measured on three process levels of random logic devices: source/drain, polysilicon gate, and metal. ADC performance metrics included classification accuracy, repeatability, and speed. In particular, ADC software was tested using a protocol that included knowledge base tests, gauge studies, and small passive data collections",1996,0,1031 907,A transparent light-weight group service,"The virtual synchrony model for group communication has proven to be a powerful paradigm for building distributed applications.
Implementations of virtual synchrony usually require the use of failure detectors and failure recovery protocols. In applications that require the use of a large number of groups, significant performance gains can be attained if these groups share the resources required to provide virtual synchrony. A service that maps user groups onto instances of a virtually synchronous implementation is called a light-weight group service. This paper proposes a new design for the light-weight group protocols that enables the usage of this service in a transparent manner. As a test case, the new design was implemented in the Horus system, although the underlying principles can be applied to other architectures as well. The paper also presents performance results from this implementation",1996,0, 908,Locating more corruptions in a replicated file,"When a data file is replicated at more than one site, we are interested in detecting corruption by comparing the multiple copies. In order to reduce the amount of messaging for large files, techniques based on page signatures and combined signatures have been explored. However, for 3 or more sites, the known methods assume the number of corrupted page copies to be at most [M/2]-1, where M is the number of sites. We point out that this assumption is unrealistic and the corresponding methods are unnecessarily pessimistic. In this paper, we replace this assumption by another assumption which we show to be reasonable. Based on this assumption, we derived a distributed algorithm which in general achieves better performance than previously known results. Our system model is also more refined than previous work",1996,0, 909,Visual coherence and usability: a cohesion metric for assessing the quality of dialogue and screen designs,"Interface design metrics help developers evaluate user interface quality from designs and visual prototypes before implementing working prototypes or systems. Visual coherence, based on the established software engineering concept of cohesion, measures the fit between the layout of user interface features and their semantic content. Visually coherent interfaces group semantically more closely related features together, enhancing comprehension and ease of use. Preliminary research using a scenario-based technique with built in validity checks found professional developers preferred more visually coherent designs and rated them easier to use, even when these departed from familiar dialogue conventions. Implications for design and further research are discussed",1996,0, 910,Relating knock-on viscosity to software modifiability,"The notion of “cognitive dimensions” developed by Green provides an analytic framework for assessing usability for a variety of information artifacts. The work here describes formal interpretation of dimensions in order to precisely assess the suitability of interactive systems for particular tasks. The particular dimension considered is viscosity-this concerns the ease with which information structures can be modified and updated within a given environment. A formal interpretation of such a dimension has the benefit of yielding practical measures and guidelines for assessment. This extends a growing body of work concerned with formally characterising interactive properties that are significant to successful use. The context in which we demonstrate our interpretation of dimensions is that of program modification, where a program represents an information structure to be updated.
The framework developed provides an interpretation of empirical evidence regarding software quality and modifiability",1996,0, 911,A test of precision GPS clock synchronization,"This paper describes tests of precision GPS time transfer using geodetic-quality TurboRogue receivers. The GPS data are processed with the GIPSY-OASIS II software, which simultaneously estimates the GPS satellite orbits and clocks, receiver locations and clock offsets, as well as other parameters such as Earth orientation. This GPS solution technique, which emphasizes high accuracy GPS orbit determination and observable modeling, has been shown to enable sub-1 ns time transfer at global distance scales. GPS-based monitoring of clock performance has been carried out for several years through JPL's high precision GPS global network processing. The paper discusses measurements of variations in relative clock offsets down to a level of a few tens of picoseconds. GPS-based clock frequency measurements are also presented",1996,0, 912,Directional dilation for the connection of piece-wise objects: a semiconductor manufacturing case study,"A technique for intelligent dilation of objects in an image was developed that, when compared to standard isotropic techniques, can create a better representative resultant image for subsequent morphological analysis. Individual masses in an image are forced to grow in preferential directions with varying degrees of strength according to a set of morphological parameters. These parameters are determined by applying a model based on gravitational force to determine the strength and direction of the “attraction force” between objects. The result of this analysis is a collection of normalized dilation vectors for each object in the image, where each vector represents a potential direction for growth. Results of the application of this algorithm are shown on binary images of defect distributions on semiconductor wafers where a single defect (e.g., a scratch) can be made up of numerous, disconnected blobs",1996,0, 913,Scheduling VBR traffic for achievable QoS in a shared un-buffered environment,"A system of T resources (wireless channels) shared by a number of un-buffered applications is considered. The shared transmission resources are defined to be the slots (packet transmission times) of a TDMA frame (service cycle). Packets which are not allocated a channel within the current service cycle are dropped. The quality of service (QoS) vector is equivalently described in terms of (diverse) packet dropping probabilities (pᵢ), or (diverse) packet dropping rates (dᵢ). From a precisely defined region of achievable QoS vectors, easy to implement scheduling policies that deliver the achievable QoS vectors are derived. As an example, a policy that delivers a target QoS vector is derived; its performance is verified through simulations",1996,0, 914,The design and development of a distributed scheduler agent,"The main objective of the distributed scheduler agent is to provide task placement advice either to parallel and distributed applications directly, or to a distributed scheduler which will despatch normal applications transparently to the advised hosts. To accomplish this, the distributed scheduler agent needs to know the global load situation across all machines and be able to pick the host which best suits the specific resource requirements of individual jobs. Issues concerning the collection and distribution of load information throughout the system are discussed.
This load information is then fed to a ranking algorithm which uses a 3-dimensional load space to generate the most suitable host based on weights which indicate the relative importance of resources to a task. Performance tests are carried out to determine the response times and overhead of the distributed scheduler agent. An application, a distributed ray tracer, is also customised to make use of the distributed scheduler agent and the results presented",1996,0, 915,A better ATPG algorithm and its design principles,"The traditional goal of an ATPG algorithm is to achieve a high fault coverage by producing a small number of tests. However, since usually a high fault coverage does not imply a high defect coverage, such an objective can mislead us to overtrust an ATPG method which is optimal for faults but inefficient for defects. The paper presents several new principles to design an ATPG algorithm to solve this problem. The new principles direct an ATPG algorithm to improve its efficiency and reliability for detecting the non target defects through the target faults. Theory and experiments are presented to demonstrate the superiority of the new principles and the new test generation algorithm",1996,0, 916,Detection of fault-prone software modules during a spiral life cycle,"The article is an experience report on identifying fault prone modules in a subsystem of the Joint Surveillance Target Attack Radar System, JSTARS, a large tactical military system. The project followed the spiral life cycle model. The iterations of the system were developed in FORTRAN about one year apart. We developed a discriminant analysis model using software metrics from one iteration to predict whether or not each module in the next would be considered fault prone. Tactical military software is required to have high reliability. Each software function is often considered mission critical, and the lives of military personnel often depend on mission success. In our project, each iteration of a spiral life cycle development produced a system that was suitable for operational testing. A risk analysis based on operational testing guided development of the next iteration. Identifying fault prone modules early in the development of an iteration can lead to better reliability. The results confirm previously published studies that discriminant analysis can be a useful tool in identification of fault prone software modules. This study used consecutive iterations, first, to build, and then to evaluate the model. This model validation approach is more realistic than earlier studies which split data from one project to simulate two iterations. Model results could be used to identify those modules that would probably benefit from earlier reviews and testing, and thus, reduce the risk of unexpected problems with those modules",1996,0, 917,The effect of interface complexity on program error density,"The paper explains how to evaluate software maintainability by considering the effects of interfaced complexities between modified and unmodified parts. When software is maintained, designers develop software, considering not only functions of modified parts but also those of unmodified parts by reading specifications or source codes. So, not only does the volume and complexity of the unmodified and modified parts, but also the interfaced complexities between them greatly affect the occurrence of software errors. The modified part consists of several subsystems, each of which is composed of several functionally related routines.
The unmodified part also consists of several routines. A routine corresponds to a function of C-coding. We experimentally show by regression and discriminant analyses that the quality of each routine tends to decrease as the reuse-ratio increases. The optimal threshold of reuse-ratio is selected by applying AIC (Akaike Information Criterion) procedures to discriminant analysis for software quality classification. We can reasonably separate routines into two parts, one modified in which most software errors occur and the other unmodified in which few errors occur. The interfaced complexities measured by the extended cyclomatic number between each subsystem of the modified part and the unmodified part greatly affect the number of errors and error density. By applying regression analysis to medium size software, we have shown that 40% of variance of error density is represented by interfaced complexities among each subsystem of the modified part and the unmodified part",1996,0, 918,Measurements for managing software maintenance,"Software maintenance is central to the mission of many organizations. Thus, it is natural for managers to characterize and measure those aspects of products and processes that seem to affect the cost, schedule, quality and functionality of software maintenance delivery. This paper answers basic questions about software maintenance for a single organization and discusses some of the decisions made based on the answers. Attributes of both the software maintenance process and the resulting product were measured to direct management and engineering attention toward improvement areas, track the improvement over time, and help make choices among alternatives",1996,0, 919,Alternate path reasoning in intelligent instrument fault diagnosis for gas chromatography,"Intelligent instrument fault diagnosis is addressed using expert networks, a hybrid technique which blends traditional rule-based expert systems with neural network style training. One of the most difficult aspects of instrument fault diagnosis is developing an appropriate rule base for the expert network. Beginning with an initial set of rules given by experts, a more accurate representation of the reasoning process can be found using example data. A methodology for determining alternate paths of reasoning and incorporating them into the expert network is presented. Our technique presupposes interaction and cooperation with the expert, and is intended to be used with the assistance of the expert to incorporate knowledge discovered from the data into the intelligent diagnosis tool. Tests of this methodology are conducted within the problem domain of fault diagnosis for gas chromatography. Performance statistics indicate the efficacy of automating the introduction of alternate path reasoning into the diagnostic reasoning system",1996,0, 920,A new study for fault-tolerant real-time dynamic scheduling algorithms,"Many time-critical applications require predictable performance. Tasks corresponding to these applications have deadlines to be met despite the presence of faults. Failures can happen either due to processor faults or due to task errors. To tolerate both processor and task failures, the copies of every task have to be mutually excluded in space and also in time in the schedule. We assume each task has two versions, namely, primary copy and backup copy.
We believe that the position of the backup copy in the task queue with respect to the position of the primary copy (distance) is a crucial parameter which affects the performance of any fault-tolerant dynamic scheduling algorithm. To study the effect of distance parameter, we make fault-tolerant extensions to the well-known myopic scheduling algorithm which is a dynamic scheduling algorithm capable of handling resource constraints among tasks. We have conducted an extensive simulation to study the effect of distance parameter on the schedulability of fault tolerant myopic scheduling algorithm",1996,0, 921,A comparison of functional and structural partitioning,"Incorporating functional partitioning into a synthesis methodology leads to several important advantages. In functional partitioning, we first partition a functional specification into smaller sub-specifications and then synthesize structure for each, in contrast to the current approach of first synthesizing structure for the entire specification and then partitioning that structure. One advantage is that of greatly improved partitioning among a given set of packages with size and I/O constraints, such as FPGAs or ASIC blocks, resulting in better performance and far fewer required packages. A second advantage is that of greatly reduced synthesis runtimes. We present results of experiments demonstrating these important advantages, leading to the conclusion that further research focus on functional partitioning can lead to improved quality and practicality of synthesis environments",1996,0, 922,Revisiting measurement of software complexity,"Software complexity measures are often proposed as suitable indicators of different software quality attributes. The paper presents a study of some complexity measures and their correlation with the number of failure reports, and a number of problems are identified. The measures are such poor predictors that one may as well use a very simple measure. The proposed measure is supposed to be ironic to stress the need for a more scientific approach to software measurement. The objective is primarily to encourage discussions concerning methods to estimate different quality attributes. It is concluded that either completely new methods are needed to predict software quality attributes or a new view on predictions from complexity measures is needed. This is particularly crucial if the software industry is to use software metrics successfully on a broad basis",1996,0, 923,Improving the quality of classification trees via restructuring,"The classification-hierarchy table developed by Chen and Poon (1996) provides a systematic approach to construct classification trees from given sets of classifications and their associated classes. The paper enhances their study by defining a metric to measure the “quality” of a classification tree, and providing an algorithm to improve this quality",1996,0, 924,Analysis of software process improvement experience using the project visibility index,"Based on the capability maturity model (CMM), process improvement at OMRON, a Japanese microprocessor manufacturer, increased project predictability in three ways: accuracy, variability, and performance. The authors use the project visibility index (PVI) and other measurements to quantitatively demonstrate this. Qualitative analysis of how and why OMRON achieved higher project visibility and increases in the QCD (quality, cost, and delivery on time) factors are supported with data on review-effort ratios and productivity.
They identify factors directly affected by high project visibility",1996,0, 925,Software specifications metrics: a quantitative approach to assess the quality of documents,"Following the development of system requirements, the software specifications document consists of a definition of the software product. Each of the following phases of design, coding, and integration/testing transforms the initial software specifications into lower levels of machine implementable details until the final machine processable object code is generated. Therefore, the completeness, readability and accuracy of the software specification directly influences the quality of the final software product. A waterfall development model is not assumed. It is assumed, however, that software specifications are kept “alive” and relevant. The purpose of the article is to provide a methodology for deriving quantitative measures of the quality of the software specifications document. These measures are designed to complement good engineering judgment that has to be applied in order to judge the quality of a software specifications document. Quantitative measures (i.e., metrics) will help reveal problem areas in various dimensions of quality characteristics. We focus on completeness, readability and accuracy. The methodology is presented with real life examples",1996,0, 926,Estimation of the explicit cell rate for the ABR service based on the measurements,"The ATM Forum has defined two modes of operation for the ABR (available bit rate) congestion control algorithm, i.e. the explicit rate indication (ERI) and the binary congestion indication (BCI). As the standardization process is mainly focused on the UNI interface, the switch mechanisms are left to the vendors. The objective of the ERI algorithm is to determine the actual explicit rate while the BCI algorithm uses queue thresholds to detect congestion and noncongestion state. This information is sent to the source in order to update the allowed cell rate. In this paper we present a method to implement the ERI algorithm based on the measurements of spare link capacity. To illustrate the effectiveness of the proposition, several numerical results are included. Finally, the method is compared with the standard BCI algorithm",1996,0, 927,Lassoo!: an interactive graphical tool for seafloor classification,"Lassoo! is an interactive graphical tool that facilitates the empirical classification of seafloor materials. Lassoo! can: (1) input a multivariate geo-referenced data set (e.g. geophysical properties, acoustic data or acoustic derivative data sets); (2) display data in geo-referenced map space and/or in multivariate data space (e.g. side scan sonar imagery data in geo-referenced space and parameters derived from the imagery in bivariate space); (3) interactively select subsets of data points in either bivariate or geo-referenced spaces; (4) assign “classes” to selected regions in either data space; and (5) automatically identify these “classes” in both data spaces. Lassoo! has been used for the evaluation of data collected with the commercial sediment classification system “RoxAnn” in an area of seafloor dredge spoil dumping. The RoxAnn data were evaluated by direct comparison with side scan sonar data obtained with a Simrad EM1000 multibeam system. The purpose of the project was to monitor the dispersal of sediments from two dump sites in the survey area and to classify the various sediments encountered in this region.
The results of acoustic classification of the sediments were compared to ground truth data obtained from cores. It was observed that the ability to select classes in one space and the identification of these classes in other spaces is a powerful tool for enhancing the quality of empirical sediment classification",1996,0, 928,Multiagent testbed for measuring multimedia presentation quality disorders,"The capability to deliver continuous media to the workstation and meet its real time processing requirements is recognised as the central element of future distributed multimedia courseware. In this process, user perception of quality of service (QoS) plays an important role. Since people have different expectations we concluded that it is important to ensure that courseware packages meet the user perceived QoS rather than commonly defined QoS. To investigate this hypothesis, we have constructed a multi agent testbed (MAT) for integrated management of resource distribution. We present an overview of the MAT architecture, and discuss a case study in which the MAT is being used to assess the impact of the quality of presentation decline from the perceived QoS on student's learning process",1996,0, 929,An approach of parameter identification for asynchronous machine,"This work deals with the parameter identification of asynchronous machines in cases where the motor is rotating and at standstill. In this work, an optimal exciting input signal is determined by methods of optimization to give a best trajectory of the system. Integration is applied to a differential equation model to avoid the wrong information caused by a derivative operation on noise. Instrumental variable estimation is introduced to improve the quality of identification by least-square estimation. Experimental results are given. The work is achieved with MATLAB Toolbox",1996,0, 930,A PC-based method for the localisation and quantization of faults in passive tree-structured optical networks using the OTDR technique,"The method presented showed that the optical time domain reflectometry can be used to facilitate the maintenance of multistaged tree-structured passive optical networks at a lowcost. This method has been implemented in a user-friendly software, and the results obtained proved its effectiveness. We also showed that it is possible to estimate the loss of a fault only on the basis of OTDR traces acquired at the network's head despite the fact that such traces result in the combination of the reflected and backscattered signals coming from the different optical paths.",1996,0, 931,Implementing fault injection and tolerance mechanisms in multiprocessor systems,"The size and complexity of today's multiprocessor systems require the development of new techniques to measure their dependability. An effective technique allowing one to inject faults in message passing multiprocessor systems is presented. Interrupt messages are used to trigger fault injection routines in the targeted processors. Any fault that can be emulated by a modification of the memory content of processors can be injected. That includes faults that could occur within the processors, memories and even in the communication network. The proposed technique allows one to control the time and location of faults as well as other characteristics. 
It has been used in a prototype multiprocessor system running real applications in order to compare the efficiency of various error detection and correction mechanisms",1996,0, 932,High frequency piezo-composite transducer array designed for ultrasound scanning applications,"A 20 MHz high density linear array transducer is presented in this paper. This array has been developed using an optimized ceramic-polymer composite material. The electro-mechanical behaviour of this composite, especially designed for high frequency applications, is characterised and the results are compared to theoretical predictions. To support this project, a new method of transducer simulation has been implemented. This simulation software takes into account the elementary boundary phenomena and allows prediction of inter-element coupling modes in the array. The model also yields realistic computed impulse responses of transducers. A miniature test device and water tank have been constructed to perform elementary acoustic beam pattern measurements. It is equipped with highly accurate motion controls and a specific needle-shaped target has been developed. The smallest displacement available in the three main axes of this system is 10 microns. The manufacturing of the array transducer has involved high precision dicing and micro interconnection techniques. The flexibility of the material provides us with the possibility of curving and focusing the array transducer. Performance of this experimental array is discussed and compared to the theoretical predictions. The results demonstrate that such array transducers will allow high quality near field imaging. This work presents the efforts to extend the well known advantages of composite piezoelectric transducers to previously unattainable frequencies",1996,0, 933,Availability of real time Internet services,"The Internet has seen dramatic growth over the last few years aided by the increasing popularity of the World Wide Web. Businesses are rushing to provide a presence on the Web and the number of consumers surfing the Web grows daily. This growth in traffic and applications has implications on the expected quality of services. As competition for services increases and customers are asked to pay for these services, performance will become a key differentiator. We consider quality of service issues for real time Internet applications. In particular we define service availability, the probability that a customer will be properly serviced by a given application at a given point in time, as a key measure of service quality. We discuss the network and service factors affecting the performance of new real time applications and then develop a system availability model for the Internet and use it to provide initial estimates of service availability for telephony over the Internet",1996,0, 934,A novel semiconductor pixel device and system for X-ray and gamma ray imaging,"We are presenting clinical images (objects, mammography phantoms, dental phantoms, dead animals) and data from a novel X-ray imaging device and system. The device comprises a pixel semiconductor detector flip-chip joined to an ASIC circuit. CdZnTe and Si pixel detectors with dimensions of the order of 1 cm2 have been implemented with a pixel pitch of 35 μm. Individual detectors comprise, therefore, tens of thousands of pixels. A novel ASIC accumulates charge created from directly absorbed X-rays impinging on the detector.
Each circuit on the ASIC, corresponding to a detector pixel, is capable of accumulating thousands of X-rays in the energy spectrum from a few to hundreds of keV with high efficiency (CdZnTe). Image (X-ray) accumulation times are user controlled and range from just a few to hundreds of ms. Image frame updates are also user controlled and can be provided as fast as every 20 ms, thus offering the possibility of real time imaging. The total thickness of an individual imaging tile including the mounting support does not exceed 4 mm. Individual imaging tiles are combined in a mosaic providing an imaging system with any desired shape and useful active area. The mosaic allows for cost effective replacement of individual tiles. A scanning system allows for elimination, in the final image, of any inactive space between the imaging tiles without use of software interpolation techniques. The Si version of our system has an MTF of 20% at 14 lp/mm and the CdZnTe version an MTF of 15% at 10 lp/mm. Our digital imaging devices and systems are intended for use in X-ray and gamma-ray imaging for medical diagnosis in a variety of applications ranging from conventional projection X-ray imaging and mammography to fluoroscopy and CT scanning. Similarly, the technology is intended for use in non destructive testing, product quality control and real time on-line monitoring. The advantages over existing X-ray digital imaging modalities (such as digital imaging plates, scintillating screens coupled to CCDs etc.) include compactness, direct X-ray conversion to an immediate real time digital display, exquisite image resolution, dose reduction and large continuous imaging areas. Our measurements and images confirm that this new digital imaging system compares favourably to photoluminescence",1996,0, 935,Application of wavelet basis function neural networks to NDE,"This paper presents a novel approach for training a multiresolution, hierarchical wavelet basis function neural network. Such a network can be employed for characterizing defects in gas pipelines which are inspected using the magnetic flux leakage method of nondestructive testing. The results indicate that significant advantages over other neural network based defect characterization schemes could be obtained, in that the accuracy of the predicted defect profile can be controlled by the resolution of the network. The centers of the basis functions are calculated using a dyadic expansion scheme and a hybrid learning method. The performance of the network is demonstrated by predicting defect profiles from experimental magnetic flux leakage signals",1996,0, 936,N-best-based instantaneous speaker adaptation method for speech recognition,"An instantaneous speaker adaptation method is proposed that uses N-best decoding for continuous mixture-density hidden Markov model-based speech recognition systems. An N-best paradigm of multiple-pass search strategies is used that makes this method effective even for speakers whose decodings using speaker-independent models are error-prone. To cope with an insufficient amount of data, our method uses constrained maximum a posteriori estimation, in which the parameter vector space is clustered, and a mixture-mean bias is estimated for each cluster. Moreover, to maintain continuity between clusters, a bias for each mixture-mean is calculated as the weighted sum of the estimated biases.
Performance evaluation using connected-digit (four-digit strings) recognition experiments performed over actual telephone lines showed more than a 20% reduction in the error rates, even for speakers whose decodings using speaker-independent models were error-prone",1996,0, 937,Investigating rare-event failure tolerance: reductions in future uncertainty,"At the 1995 Computer Assurance (COMPASS) conference, Voas and Miller (1995) presented a technique for assessing the failure tolerance of a program when the program was executing in unlikely modes (with respect to the expected operational profile). In that paper, several preliminary algorithms were presented for inverting operational profiles to more easily distinguish the unlikely modes of operation from the likely modes. This paper refines the original algorithms. It then demonstrates the new algorithms being used in conjunction with a failure tolerance assessment technique on two small programs",1996,0, 938,Adaptive recovery for mobile environments,"Mobile computing allows ubiquitous and continuous access to computing resources while the users travel or work at a client's site. The flexibility introduced by mobile computing brings new challenges to the area of fault tolerance. Failures that were rare with fixed hosts become common, and host disconnection makes fault detection and message coordination difficult. This paper describes a new checkpoint protocol that is well adapted to mobile environments. The protocol uses time to indirectly coordinate the creation of new global states, avoiding all message exchanges. The protocol uses two different types of checkpoints to adapt to the current network characteristics, and to trade off performance with recovery time",1996,0, 939,Impact of program transformation on software reliability assessment,"The statistical sampling method is a theoretically sound approach for measuring the reliability of safety critical software, such as control systems for nuclear power plants, aircrafts, space vehicles, etc. It has, however some practical drawbacks, two of which are the large number of test cases needed to attain a reasonable confidence in the reliability estimate and the sensitivity of the reliability estimate to variations in the operational profile. One way of dealing with both of these issues is to combine statistical sampling with formal methods and attempt to verify complete program paths. This combination becomes especially effective if high usage paths are verified. However the verification of complete paths is difficult to perform in practice and viable only when there is a high confidence in the correctness of the specification. We identify program transformations and partial proofs which have a measurable impact on the reliability assessment procedure. These methods reduce the effective size of the input space which can facilitate sampling without replacement, thereby increasing the confidence in the reliability estimate. Furthermore, these techniques increase the probability that the program under test is free of errors if testing reveals no failures",1996,0, 940,Reliability prediction method for electronic systems: a comparative reliability assessment method,"The paper describes a proposed research in defining a new reliability prediction methodology that may be used to evaluate the reliability of computer and electronic systems. The proposed methodology will attempt to minimize the deficiencies of the traditional reliability prediction methods. 
The deficiencies include: the use of generic failure rates for reliability prediction; and the lack of realism of the reliability prediction in various operational environments. The proposed methodology will employ the use of Analytical Hierarchy Process, a decision tool, to incorporate the qualitative and quantitative data that are most prevalent to the reliability performance of the system under study. This methodology will analyze the reliability of the system under study by comparing its performance characteristics against its predecessor system (or a similar system) with known reliability performance. The resultant analysis will yield a reliability ratio between the two systems and the ratio may be used to describe the system's reliability under various operational environments. The key traits of the proposed methodology are its ability to incorporate all relevant failure modes that are prevalent to reliability performance and the use of realistic data that will provide realism of the predicted reliability",1996,0, 941,A tree-based classification model for analysis of a military software system,"Tactical military software is required to have high reliability. Each software function is often considered mission critical, and the lives of military personnel often depend on mission success. The paper presents a tree based modeling method for identifying fault prone software modules, which has been applied to a subsystem of the Joint Surveillance Target Attack Radar System, JSTARS, a large tactical military system. We developed a decision tree model using software product metrics from one iteration of a spiral life cycle to predict whether or not each module in the next iteration would be considered fault prone. Model results could be used to identify those modules that would probably benefit from extra reviews and testing and thus reduce the risk of discovering faults later on. Identifying fault prone modules early in the development can lead to better reliability. High reliability of each iteration translates into a highly reliable final product. A decision tree also facilitates interpretation of software product metrics to characterize the fault prone class. The decision tree was constructed using the TREEDISC algorithm which is a refinement of the CHAID algorithm. This algorithm partitions the ranges of independent variables based on chi squared tests with the dependent variable. In contrast to algorithms used by previous tree based studies of software metric data, there is no restriction to binary trees, and statistically significant relationships with the dependent variable are the basis for branching.",1996,0, 942,Data flow transformations to detect results which are corrupted by hardware faults,"Design diversity, which is generally used to detect software faults, can be used to detect hardware faults without any additional measures. Since design of diverse programs may use hardware parts in the same way, the hardware fault coverage obtained is insufficient. To improve hardware fault coverage, a method is presented that systematically transforms every instruction of a given program into a modified instruction (sequence), keeping the algorithm fixed. This transformation is based on a diverse data representation and accompanying modified instruction sequences, that calculate the original results in the diverse data representation. If original and systematically modified variants of a program are executed sequentially, the results can be compared online to detect hardware faults.
For this method, different diverse data representations have been examined. For the most suitable representation, the accompanying modified instruction sequences have been generated at assembler level and at high language level. The theoretically estimated improvement of the fault coverage of design diversity by additionally using systematically generated diversity has been confirmed by practical examinations",1996,0, 943,Development of a QRS detector algorithm for ECG's measured during centrifuge training,This paper describes a QRS detector algorithm for ECG's measured during centrifuge training of fighter pilots at the Netherlands Aerospace Medical Centre (NAMC). Due to the low quality of the ECG signal during centrifuge training a very robust QRS detector is necessary. An algorithm has been developed which uses lower and upper thresholds in spatial velocities to decide if a detected peak is an R-top. The performance (S=99.41% and PP=99.77%) of this new QRS detector was substantially higher than a frequently used QRS detector in the literature,1996,0, 944,An image quality prediction model for optimization of X-ray system performance,"Describes a software modeling system for predicting image quality of a radiological X-ray system. As opposed to simulation, this model bases the prediction on several quasi-separable performance measures, called system quantities (SQs). Examples of SQs are modulation transfer function, contrast gain, and noise distribution. Use of the SQs provides results which are more direct than simulation with significantly less computation. The modeling structure is created to be insensitive to the nature of its components or the methods of subsystem modeling. This system is designed primarily to facilitate component selection for new X-ray systems. It can also be used to predict the impact of component changes on existing systems, including incorporation of new technology, and to assist in resolution of discrepancies between expected and measured performance in existing systems",1996,0, 945,Performance evaluation of a distributed application performance monitor,"
First Page of the Article
",1996,0, 946,FM-QoS: Real-time Communication using Self-synchronizing Schedules,"FM-QoS employs a novel communication architecture based on network feedback to provide predictable communication performance (e.g. deterministic latencies and guaranteed bandwidths) for high speed cluster interconnects. Network feedback is combined with self-synchronizing communication schedules to achieve synchrony in the network interfaces (NIs). Based on this synchrony, the network can be scheduled to provide predictable performance without special network QoS hardware. We describe the key element of the FM-QoS approach, feedback-based synchronization (FBS), which exploits network feedback to synchronize senders. We use Petri nets to characterize the set of self-synchronizing communication schedules for which FBS is effective and to describe the resulting synchronization overhead as a function of the clock drift across the network nodes. Analytic modeling suggests that for clocks of quality 300 ppm (such as found in the Myrinet NI), a synchronization overhead less than 1% of the total communication traffic is achievable -- significantly better than previous software-based schemes and comparable to hardware-intensive approaches such as virtual circuits (e.g. ATM). We have built a prototype of FBS for Myricom s Myrinet network (a 1.28 Gbps cluster network) which demonstrates the viability of the approach by sharing network resources with predictable performance. The prototype, which implements the local node schedule in software, achieves predictable latencies of 23 µs for a single-switch, 8-node network and 2 KB packets. In comparison, the best-effort scheme achieves 104 µs for the same network without FBS. While this ratio of over four to one already demonstrates the viability of the approach, it includes nearly 10 µs of overhead due to the software implementation. For hardware implementations of local node scheduling, and for networks with cascaded switches, these ratios should be much larger factors.",1997,0, 947,The effective use of automated application development tools,"In this paper we report on the results of a four-year study of how automated tools are used in application development (AD). Drawing on data collected from over 100 projects at 22 sites in 15 Fortune 500 companies, we focus on understanding the relationship between using such automated AD tools and various measures of AD performance—including user satisfaction, labor cost per function point, schedule slippage, and stakeholder-rated effectiveness. Using extensive data from numerous surveys, on-site observations, and field interviews, we found that the direct effects of automated tool use on AD performance were mixed, and that the use of such tools by themselves makes little difference in the results. Further analysis of key intervening factors finds that training, structured methods use, project size, design quality, and focusing on the combined use of AD tools adds a great deal of insight into what contributes to the successful use of automated tools in AD. 
Despite the many grand predictions of the trade press over the past decade, computer-assisted software engineering (CASE) tools failed to emerge as the promised “silver bullet.” The mixed effects of CASE tools use on AD performance that we found, coupled with the complex impact of other key factors such as training, methods, and group interaction, suggest that a cautious approach is appropriate for predicting the impact of similar AD tools (e.g., object-oriented, visual environments, etc.) in the future, and highlight the importance of carefully managing the introduction and use of such tools if they are to be used successfully in the modern enterprise.",1997,0, 948,Modeling manufacturing dependability,"In this paper, an analytical approach for the availability evaluation of cellular manufacturing systems is presented, where a manufacturing system is considered operational as long as its production capacity requirements are satisfied. The advantage of the approach is that constructing a system level Markov chain (a complex task) is not required. A manufacturing system is decomposed into two subsystems, i.e. machining system and material handling system. The machining subsystem is in turn decomposed into machine cells. For each machine cell and material handling subsystem, a Markovian model is derived and solved to find the probability of a subset of working machines in each cell, and a subset of the operating material handling carriers that satisfies the manufacturing capacity requirements. The overall manufacturing system availability is obtained using a procedure presented in the paper. The novelty of the approach is that it incorporates imperfect coverage and imperfect repair factors in the Markovian models. The approach is used to evaluate transient and steady-state performance of three alternative designs based on an industrial example. Detailed discussion of the results and the impact of imperfect coverage and imperfect repair on the availability of the manufacturing system is presented. Possible extensions of the work and software tools available for model analysis are also discussed",1997,0, 949,Requirement metrics-value added,"Summary form only given. Actions in the requirements phase can directly impact the success or failure of a project. It is critical that project management utilize all available tools to identify potential problems and risks as early in the development as possible, especially in the requirements phase. The Software Assurance Technology Center (GSFC) and the Quality Assurance Division at NASA Goddard Space Flight Center are working on developing and applying metrics from the onset of project development, starting in the requirements phase. This talk will discuss the results of a metrics effort on a real, large system development at GSFC and lessons learned from this experience. The development effort for this project uses an automated tool to manage requirements decomposition. This report focuses on the metrics used to assess the requirement decomposition effort and to identify potential risks. The report discusses how metric analysis in the requirements phase can be accomplished on any government or industry project.
The use of an automated tool to manage requirement development, the facilitation of metrics for improved insight into development, and risk assessment will also be expanded",1997,0, 950,Formal methods for V&V of partial specifications: an experience report,"This paper describes our work exploring the suitability of formal specification methods for independent verification and validation (IV&V) of software specifications for large, safety critical systems. An IV&V contractor often has to perform rapid analysis on incomplete specifications, with no control over how those specifications are represented. Lightweight formal methods show significant promise in this context, as they offer a way of uncovering major errors, without the burden of full proofs of correctness. We describe an experiment in the application of the method SCR to testing for consistency properties of a partial model of the requirements for fault detection isolation and recovery on the space station. We conclude that the insights gained from formalizing a specification is valuable, and it is the process of formalization, rather than the end product that is important. It was only necessary to build enough of the formal model to test the properties in which we were interested. Maintenance of fidelity between multiple representations of the same requirements (as they evolve) is still a problem, and deserves further study",1997,0, 951,Selection of equipment to leverage commercial technology (SELECT),"One of the primary effects of acquisition reform has been to inject uncertainty into the procurement process for both the US Department of Defense (DoD) and its contractors. Of particular concern is the quantification of reliability performance and risk assessment for commercial off-the-shelf (COTS) equipment that is targeted for use in environments more severe than that for which it was originally designed. The mismatch between the design and end-use environment is compounded by several COTS-unique issues, including: (1) a potential shift in predominant COTS equipment failure mechanisms versus their military equivalents in identical applications; (2) the effects of commercial quality practices on inherent COTS reliability; (3) the ability of commercial packaging to withstand severe environments; and (4) the reliability risk mitigation options which can make the COTS equipment suitable for the application. Acquisition personnel currently lack the tools necessary to translate these figures-of-merit to the more severe environmental requirements inherent in military field operations, or to assess and manage the risks that will allow them to make informed decisions in selecting between competing designs. The SELECT tool will address these issues",1997,0, 952,The case for architecture-specific common cause failure rates and how they affect system performance,"In redundant systems composed of identical units, common cause failures can significantly affect system performance. We show that the probabilities of common cause failure are architecture specific. We derive the probabilities for generalized dual and triplex systems. We analyze and compare three common architectures, i.e., 1oo2, 1oo2D and 2oo3, to determine the effects of common cause failure on system performance as measured by the indices MTTF, MTTFSafe, MTTFDangerous, Probability of Failure on Demand, and availability. We conclude that the 2oo3 provides marginally better availability than the 1oo2D.
In all other performance indices, the 1oo2D provides superior performance",1997,0, 953,Software reliability-growth test and the software reliability-testing platform,"Software users are interested in the MTTF value of software. The rules of λ Pareto diagrams are in general quite different from one software to another. We cannot get an analytical expression to describe the pattern of λ Pareto diagram and be generally suitable to all software. We have constructed a software reliability testing platform (SRTP) which can automatically produce random input points for software following the rule of the f(P). This platform will check the output. If a failure happens, the software will be examined and the defect will be removed, then the reliability of the software will be increased. If the time interval I of software reliability growth test (SRGT) is not quite long, the failure rates of defects appearing within one interval may have no significant differences. If we assume that these λis are mutually equal to each other we find that the later failure data are closely in compliance with the Duane model. Some authors point out that in some cases they achieve an unreasonable result: the total number of defects N estimated is much less than n (the number of defects appeared). Dr. Song proves that N has at most n roots and there is only one root which lies between n-1 and ∞ in practical cases. The numerical results of SRGT for a software tested by Dr. Gong and Prof. Zhou are taken as an example. Taking the “zero-failure test” as the verification test of software reliability, the zero-failure test time T needed is T=θ1/C, where θ1 is the minimum acceptable MTTF value, C=-1/ln(1-γ), and γ is the s-confidence level",1997,0, 954,Performing fault simulation in large system design,"This paper presents a methodology and supporting set of tools for performing fault simulation throughout the design process for large systems. Most of the previous work in fault simulation has sought efficient methods for simulating faults at a single level design abstraction. This paper has developed a methodology for performing fault simulation of design models at the architectural, algorithmic, functional-block, and gate levels of design abstraction. As a result, fault simulation is supported throughout the design process from system definition through hardware/software implementation. Furthermore, since the fault simulation utilities are provided in an advanced design environment prototype tool, an iterative design/evaluation process is available for system designers at each stage of design refinement. The two key contributions of this paper are: a fault simulation methodology and supporting tools for performing fault simulation throughout the design process of large systems; and a methodology for performing fault simulation concurrently in hardware and software component designs and a proof-of-concept implementation",1997,0, 955,Structural information as a quality metric in software systems organization,"Proposes a metric for expressing the entropy of a software system and for assessing the quality of its organization from the perspective of impact analysis. The metric is called “structural information” and is based on a model dependency descriptor. The metric is characterized by its independence from the method of building the system and the architectural styles which represent it at the various levels of abstraction.
It takes into account both the structure of the components at all levels of abstraction and the structure derived from the links between the different levels of abstraction. The use of this metric makes it possible to monitor changes in the quality levels of the system during its life-cycle and to determine which level of documentation is causing the degradation of the system. This enables adopting timely measures to counter any drop in the quality of a system due to poorly-performed maintenance. In addition, the metric can serve to check that perfective maintenance objectives are attained",1997,0, 956,Assessing the benefits of incorporating function clone detection in a development process,"The aim of the experiment presented in this paper is to present an insight into the evaluation of the potential benefits of introducing a function clone detection technology in an industrial software development process. To take advantage of function clone detection, two modifications to the software development process are presented. Our experiment consists of evaluating the impact that these proposed changes would have had on a specific software system if they had been applied over a 3 year period (involving 10000 person-months), where 6 subsequent versions of the software under study were released. The software under study is a large telecommunication system. In total 89 million lines of code have been analyzed. A first result showed that, against our expectations, a significant number of clones are being removed from the system over time. However, this effort is insufficient to prevent the growth of the overall number of clones in the system. In this context the first process change would have added value. We have also found that the second process change would have provided programmers with a significant number of opportunities for correcting problems before customers experienced them. This result shows a potential for improving the software system quality and customer satisfaction",1997,0, 957,Alpha 21164 testability strategy,A custom DFT strategy solved specific testability and manufacturing issues for this high performance microprocessor. Hardware and software assisted self test and self repair features helped meet aggressive schedule and manufacturing quality and cost goals,1997,0, 958,Assessment of fault-tolerant computing systems at NASA's Langley Research Center,"In the early 1970's while NASA was studying Advanced Technology Transport concepts, researchers at NASA's Langley Research Center (LaRC) recognized that digital computer systems would be controlling civil transport aircraft in the near future and that the technology did not exist to determine if these digital systems would be reliable enough for this role. In addition, although several existing computer system concepts showed promise to meet the civil transport requirements, none had been realized in an operational system. A multi-initiative program was developed to determine how to assess reliability and performance of fault-tolerant digital computer systems for determining if they could meet the requirements of a civil transport. Subsequent research emphasized the application of formal methods, system safety and digital upset. Some results indicated that dissimilar software may not be reliable enough for critical applications, testing alone will not prove the reliability of highly reliable digital systems and formal methods can find design errors missed by other assessment techniques. 
Future research will center around the application of formal mathematical methods, insuring software safety, and determination of digital system upsets due to electromagnetic radiation. The long term goal is to define methods for producing error-free systems for flight crucial civil transport applications",1997,0, 959,An efficient dynamic parallel approach to automatic test pattern generation,"Automatic test pattern generation yielding high fault coverage for CMOS circuits has received wide attention in industry and academic institutions for a long time. Since ATPG is an NP complete problem with complexity exponential to the number of circuit elements, the parallelization of ATPG is an attractive area of research. In this paper we describe a parallel sequential ATPG approach which is either run on a standard network of UNIX workstations or, without any changing of the source code, on one of the most powerful high performance parallel computers, the IBM SP2. The test pattern generation is performed in three phases, two for easy-to-detect faults, using fault parallelism with an adaptive limit for the number of backtracks and a third phase for hard-to-detect faults, using search tree parallelism. The main advantage over existing approaches is a dynamic solution for partitioning the fault list and the search tree resulting in a very small overhead for communication without the need of any broadcasts and an optimal load balancing without idle times for the test pattern generators. Experimental results are shown in comparison with existing approaches and are promising with respect to small overhead and utilization of resources",1996,0, 960,Decomposition of inheritance hierarchy DAGs for object-oriented software metrics,"Software metrics are widely used to measure software complexity and assure software quality. However, research in the field of software complexity measurement of a class hierarchy has not yet been carefully studied. The authors introduce a novel factor called unit repeated inheritance (URI) and an important method called the inheritance level technique (ILT) to realize and measure the object-oriented software complexity of a class hierarchy. The approach is based on the graph-theoretical model for measuring the hierarchical complexity in inheritance relations. The proposed metrics extraction shows that inheritance is closely related to the object-oriented software measurement and reveals that overuse of the repeated (multiple) inheritance will increase software complexity and be prone to implicit software errors",1997,0, 961,Why we should use function points [software metrics],"Function point analysis helps developers and users quantify the size and complexity of software application functions in a way that is useful to software users. Are function points a perfect metric? No. Are they a useful metric? In the author's experience, yes. Function points are technologically independent, consistent, repeatable, and help normalize data, enable comparisons, and set project scope and client expectations. The author addresses these issues from the perspective of a practitioner who uses the International Function Point Users Group's Counting Practices Manual, Release 4.0 rules for counting function points.
Of course, other function point standards exist, including Mark II and Albrecht's original rules",1997,0, 962,Measuring the performance of a software maintenance department,"All studies indicate that over half of an average data processing user staff is committed to maintaining existing applications. However, as opposed to software development where productivity is measured in terms of lines of code, function-points, data-points or object-points per person month and quality is measured in terms of deficiencies and defect rates per test period, there are no established metrics for measuring the productivity and quality of software maintenance. This means that over half of an organization's software budget cannot be accounted for. The costs occur without being able to measure the benefits obtained. A set of metrics is proposed for helping to remedy this situation by measuring the productivity and quality of the maintenance service",1997,0, 963,Object-oriented analysis of COBOL,"The object-oriented paradigm is considered as the one which best guarantees the investments for renewal. It allows one to produce software with high degrees of reusability and maintainability, and satisfying certain quality characteristics. These features are not obviously automatically guaranteed by the simple adoption of an object-oriented programming language, so a process of re-analysis is needed. In this view, several methods for reengineering old applications according to the object-oriented paradigm have been defined and proposed. A method and tool (C2O2, COBOL to Object-Oriented) for analyzing COBOL applications in order to extract its object-oriented analysis is presented. The tool identifies classes and their relationships by means of a process of understanding and refinement in which COBOL data structures are analyzed, converted into classes, aggregated, and simplified semi-automatically. The algorithm is also capable of detecting data structures which may cause problems in the Year 2000, as demonstrated with an example",1997,0, 964,Generalization and decision tree induction: efficient classification in data mining,"Efficiency and scalability are fundamental issues concerning data mining in large databases. Although classification has been studied extensively, few of the known methods take serious consideration of efficient induction in large databases and the analysis of data at multiple abstraction levels. The paper addresses the efficiency and scalability issues by proposing a data classification method which integrates attribute oriented induction, relevance analysis, and the induction of decision trees. Such an integration leads to efficient, high quality, multiple level classification of large amounts of data, the relaxation of the requirement of perfect training sets, and the elegant handling of continuous and noisy data",1997,0, 965,Compressionless routing: a framework for adaptive and fault-tolerant routing,"Compressionless routing (CR) is an adaptive routing framework which provides a unified framework for efficient deadlock free adaptive routing and fault tolerance. CR exploits the tight coupling between wormhole routers for flow control to detect and recover from potential deadlock situations. Fault tolerant compressionless routing (FCR) extends CR to support end to end fault tolerant delivery. Detailed routing algorithms, implementation complexity, and performance simulation results for CR and FCR are presented. These results show that the hardware for CR and FCR networks is modest. 
Further, CR and FCR networks can achieve superior performance to alternatives such as dimension order routing. Compressionless routing has several key advantages: deadlock free adaptive routing in toroidal networks with no virtual channels, simple router designs, order preserving message transmission, applicability to a wide variety of network topologies, and elimination of the need for buffer allocation messages. Fault tolerant compressionless routing has several additional advantages: data integrity in the presence of transient faults (nonstop fault tolerance), permanent fault tolerance, and elimination of the need for software buffering and retry for reliability. The advantages of CR and FCR not only simplify hardware support for adaptive routing and fault tolerance, they also can simplify software communication layers",1997,0, 966,The Silicon Gaming Odyssey slot machine,"The Odyssey is a multi-game video slot machine using standard PC hardware components under the control of an embedded real-time operating system. A PCI-bus based peripheral memory board and parallel port-based general-purpose I/O board adapt the PC for use in the slot machine application. The system software is designed to address the unique requirements of casino gaming machines, including high reliability and security, fault detection and recovery, and responsive performance.",1997,0, 967,A digital-signal-processor-based measurement system for on-line fault detection,"This paper deals with the design, construction, and setting up of a measurement apparatus, based on an architecture using two parallel digital signal processors (DSP's), for on-line fault detection in electric and electronic devices. In the proposed architecture, the first DSP monitors a device output on-line in order to detect faults, whereas the second DSP estimates and updates the system-model parameters in real-time in order to track their eventual drifts. The problems which arose when the proposed apparatus was applied to a single-phase inverter are discussed, and some of the experimental results obtained in fault and nonfault conditions are reported",1997,0, 968,Automated parameter extraction for ultrasonic flaw analysis,"To make a decision on the nature of a defect contained within a weld specimen, it is necessary to reliably detect suspect regions in the ultrasonic inspection images, derive geometrical parameters such as shape, size, position and orientation from each indication and, finally, collate these parameters intelligently by associating each indication with a possible defect type. This procedure is discussed for the case when the segmentation of indications and parameter calculation procedures are performed by the authors' NDT Workbench facility. A real-flaw example, inspected by a series of probes, is used to demonstrate the final defect categorisation decision. The indication parameters derived during this process can be used either to aid the manual interpreter or as part of a knowledge based system (KBS)",1997,0, 969,A 'crystal ball' for software liability,"Software developers are living in a liability grace period, but it won't last. To adequately insure themselves against potential liability, developers need tools to identify worst-case scenarios and help them quantify the risks associated with a piece of software. For assessing such risks associated with software, the authors recommend fault injection, which provides worst-case predictions about how badly a piece of code might behave and how frequently it might behave that way. 
By contrast, software testing states how good software is. But even correct code can have ""bad days"", when external influences keep it from working as desired. Fault injection is arguably the next best thing to having a crystal ball, and it certainly beats facing the future with no predictions at all. It should be a regular part of risk assessment. The greatest benefit from fault injection occurs when a piece of software does not tolerate injected anomalies. False optimism gives way to the only honest claim-that the software presents risks.",1997,0, 970,Concurrent detection of software and hardware data-access faults,"A new approach allows low-cost concurrent detection of two important types of faults, software and hardware data-access faults, using an extension of the existing signature monitoring approach. The proposed approach detects data-access faults using a new type of redundant data structure that contains an embedded signature. Low-cost fault detection is achieved using simple architecture support and compiler support that exploit natural redundancies in the data structures, in the instruction set architecture, and in the data-access mechanism. The software data-access faults that the approach can detect include faults that have been shown to cause a high percentage of system failures. Hardware data-access faults that occur in all levels of the data-memory hierarchy are also detectable, including faults in the register file, the data cache, the data-cache TLB, the memory address and data buses, etc. Benchmark results for the MIPS R3000 processor executing code scheduled by a modified GNU C Compiler show that the new approach can concurrently check a high percentage of data accesses, while causing little performance overhead and little memory overhead",1997,0, 971,Developing the 777 airplane information management system (AIMS): a view from program start to one year of service,"The 777 Airplane Information Management System (AIMS) has successfully completed its first year of airline revenue service. This in-service success follows a development and integration program that was an exceptional challenge, primarily due to the system size, technical innovation, and aggressive program schedule. Honeywell and Boeing had to work closely together to complete the design and development of AIMS in a time frame that supported the 777 early extended twin operations (ETOPS) goal and the initial 777 airplane certification. With teams from the two companies working as one unit, redundant activities were eliminated, technical and program problems were identified and solved rapidly, and schedule time was saved by both teams helping with tasks that were traditionally considered to be the other team's job. In service, airline maintenance personnel are finding that the AIMS Central Maintenance Computer System is a tremendous asset in maintaining the 777, and AIMS reliability is exceeding predictions.",1997,0, 972,"Estimates, uncertainty, and risk","The authors discuss the sources of uncertainty and risk, their implications for software organizations, and how risk and uncertainty can be managed. Specifically, they assert that uncertainty and risk cannot be managed effectively at the individual project level.
These factors must be considered in an organizational context",1997,0, 973,Buffer management for high performance database systems,"This paper presents the development of a buffer algorithm named RESBAL, which exploits parallelism in order to offer high performance for query execution in relational database systems. The algorithm aims to provide both efficient and predictive data buffering by exploiting the use of prior knowledge of query reference behaviour. Designed to offer a high level of flexibility, RESBAL employs a multiple buffering strategy both on page fetch level and page replacement level. The evaluation of RESBAL has been carried out in a parallel database system environment based on a transputer architecture. The results of this evaluation allow comparisons to be made between different buffer algorithms, and demonstrate the feasibility and effectiveness of the RESBAL algorithm. It is hoped that this work will provide some useful input to research on developing high performance database systems. Their new uses such as data mining and data warehousing are presenting the research community with interesting and important challenges",1997,0, 974,Application of neural networks to software quality modeling of a very large telecommunications system,"Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy",1997,0, 975,From software metrics to software measurement methods: a process model,"Presents a process model for software measurement methods. The proposed model details the distinct steps from the design of a measurement method, to its application, then to the analysis of its measurement results, and lastly to the exploitation of these results in subsequent models, such as in quality and estimation models. From this model, a validation framework can be designed for analyzing whether or not a software metric could qualify as a measurement method. The model can also be used for analyzing the coverage of the validation methods proposed for software metrics",1997,0, 976,Predicting how badly “good” software can behave,"Using fault injection and failure-tolerance measurement with ultrarare inputs, the authors create an automated software environment that can supplement traditional testing methods.
Applied to four case studies, their methods promise to make software more robust",1997,0, 977,Supporting dynamic space-sharing on clusters of non-dedicated workstations,"Clusters of workstations are increasingly being viewed as a cost effective alternative to parallel supercomputers. However, resource management and scheduling on workstations clusters is complicated by the fact that the number of idle workstations available for executing parallel applications is constantly fluctuating. We present a case for scheduling parallel applications on non dedicated workstation clusters using dynamic space sharing, a policy under which the number of processors allocated to an application can be changed during its execution. We describe an approach that uses application level checkpointing and data repartitioning for supporting dynamic space sharing and for handling the dynamic reconfiguration triggered when failure or owner activity is detected on a workstation being used by a parallel application. The performance advantages of dynamic space sharing are quantified through a simulation study, and experimental results are presented for the overhead of dynamic reconfiguration of a grid oriented data parallel application using our approach",1997,0, 978,An experimental investigation of the Internet integrated services model using the resource reservation protocol (RSVP),"The Internet Protocol (IP) in use today was designed to offer network applications a best-effort delivery service for network traffic and no guarantees are made as to when or if the packets will be delivered to their final destination. Many new network applications need the network to meet certain delay and packet loss requirements, but the current best-effort delivery service does not have the ability to provide this level of service. The Internet Engineering Task Force has proposed the Internet integrated services (IIS) model to allocate network resources achieving the desired service and the resource ReSerVation Protocol (RSVP) to deliver requests on behalf of the application for these resources. The basic concepts of the IIS model and RSVP are described. An experiment is presented in which the network performance is evaluated, both with and without network resource allocation, under varying degrees of load with the purpose of assessing the benefits of resource allocation for particular data flows",1997,0, 979,Parallel architecture selection for image tracking applications,"Some application-specific algorithms exhibit very high processing demand due to the real-time requirement. Architectural structure adaptations are necessary to meet the computational requirement of such algorithms. An optimized architecture should consider all the performance criteria and their degree of importance. Among these, the degree of parallelism, memory and communication bandwidth, and I/O rate should be considered. A functional analysis of several image processing algorithms is used as an example to find a parallel architecture to meet the requirements. The architecture selection factors described, certainly can help in improving the I/O and processor bound limitations. Their lack of consideration can degrade the system performance in terms of the real time requirement and quality of presentation and future expansion",1997,0, 980,The multimodal multipulse excitation vocoder,"This paper presents a new high-quality, variable-rate vocoder in which the average bit-rate is parametrically controllable. 
The new vocoder is intended for use with data-voice simultaneous channel (DVSC) applications, in which the speech data is transmitted simultaneously with video and other types of data. The vocoder presented in this paper achieves state-of-the-art quality at several different bit-rates between 5.5 kbps and 10 kbps. Further, it achieves this performance at acceptable levels of complexity and delay",1997,0, 981,Performance evaluation of an on-line PGNAA cross-belt analyzer [cement plant monitoring],"Arizona Portland Cement (APC), USA, has operated a prompt gamma neutron activation analyzer (PGNAA) on its 1000 TPH overland belt conveyor system since August of 1995. The belt conveyor passes through the analyzer enclosure where a neutron source under the belt causes gamma ray releases identified by the sodium iodide scintillation detector above the belt. Sophisticated analytical software operating on a personal computer decomposes the gamma ray spectrum into its constituent elements. The result is a real-time one minute update of the complete chemical composition of the crushed rock headed down the beltline. The Crossbelt Analyzer project was a 1995 Capital Budget project. APC ordered the equipment from GammaMetrics in February and completed the installation in August 1995. Training and acceptance testing were conducted in late 1995. This paper summarizes the project after one year of operation",1997,0, 982,Optimization of embedded DSP programs using post-pass data-flow analysis,"We investigate the problem of code generation for DSP systems on a chip. Such systems devote a limited quantity of silicon to program ROM, so application software must be maximally dense. Additionally, the software must be written so as to meet various high-performance constraints, which may include hard real-time constraints. Unfortunately, current compiler technology is unable to generate dense, high-performance code for DSPs, whose architectures are highly irregular. Consequently, designers often resort to programming application software in assembly-a Time-consuming, error-prone, and non-portable task. Thus, DSP compiler technology must be improved substantially. We describe some optimizations that significantly improve the quality of compiler-generated code. Our optimizations are applied globally and even across procedure calls. Additionally, they are applied to the machine-dependent assembly representation of the source program. Our target architecture is the Texas Instruments' TMS320C25 DSP",1997,0, 983,A quality methodology based on statistical analysis of software structure,"Presents a methodology to assess software quality, based on the structural parameters of the software, and to validate it by factorial and regression analysis. This methodology is able to predict every file risk level, or, in other words, how likely it is that a file contains faults. The influence of structural parameters on the conditions which characterise a system can vary enormously. Once the environment has been identified and a reasonable amount of data accumulated, it is possible to construct a model which identifies potentially dangerous programs; moreover, this model is dynamic and can be continually recalibrated to suit the environment in question. 
The conclusions we have reached allow us to assert that both the results obtained by the proposed methodology and the basic assumption upon which the theoretical framework depends have been substantially proven",1997,0, 984,Intelligent ray tracing-a new approach for field strength prediction in microcells,"With micro- and even picocells becoming an ever more important part of modern network design, deterministic wave propagation modeling has become a widely discussed solution. The necessity of acquiring high resolution-high quality data as well as extensive computation time pose severe challenges to practical planning scenarios however. A number of solutions and accelerating techniques are known in literature, all of them performing quite well under certain preconditions, but are unsatisfactory when conditions change. Therefore, the basic principles of accelerating rigorous 3-D ray tracing are presented under the aspect of their impact on prediction accuracy. A software has been developed which is able to make `intelligent' decisions with regard to the application of various accelerating algorithms and by consequence gives optimized performance in all types of propagation scenarios. Comparison with measurements shows the gain in computation efficiency as well as the achieved prediction accuracy",1997,0, 985,Task allocation algorithms for maximizing reliability of distributed computing systems,"We consider the problem of finding an optimal and suboptimal task allocation (i.e., to which processor should each module of a task or program be assigned) in distributed computing systems with the goal of maximizing the system reliability (i.e., the probability that the system can run the entire task successfully). The problem of finding an optimal task allocation is known to be NP-hard in the strong sense. We present an algorithm for this problem, which uses the idea of branch and bound with underestimates for reducing the computations in finding an optimal task allocation. The algorithm reorders the list of modules to allow a subset of modules that do not communicate with one another to be assigned last, for further reduction in the computations of optimal task allocation for maximizing reliability. We also present a heuristic algorithm which obtains suboptimal task allocations in a reasonable amount of computational time. We study the performance of the algorithms over a wide range of parameters such as the number of modules, the number of processors, the ratio of average execution cost to average communication cost, and the connectivity of modules. We demonstrate the effectiveness of our algorithms by comparing them with recent competing task allocation algorithms for maximizing reliability available in the literature",1997,0, 986,Performance-based design of distributed real-time systems,"The paper presents a design method for distributed systems with statistical, end-to-end real-time constraints, and with underlying stochastic resource requirements. A system is modeled as a set of chains, where each chain is a distributed pipeline of tasks, and a task can represent any activity requiring nonzero load from some CPU or network resource. Every chain has two end-to-end performance requirements: its delay constraint denotes the maximum amount of time a computation can take to flow through the pipeline, from input to output. A chain's quality constraint mandates a minimum allowable success rate for outputs that meet their delay constraints.
The design method solves this problem by deriving (1) a fixed proportion of resource load to give each task; and (2) a deterministic processing rate for every chain, in which the objective is to optimize the output success rate (as determined by an analytical approximation). They demonstrate their technique on an example system, and compare the estimated success rates with those derived via simulated on-line behavior",1997,0, 987,A variable-rate CELP coder for fast remote voicemail retrieval using a notebook computer,"Remote retrieval of compressed voicemail data over a telephone line is one of several emerging applications of speech coding. Using a notebook computer equipped with a modem, a user can remotely access a networked desktop unit located at their office or home to retrieve various types of information such as email, FAX, electronic documents as well as voicemail. By compressing the speech data, we reduce the amount of time needed for the transfer of voice data over the telephone line. We have developed a high-quality variable-rate CELP (code-excited linear prediction) coder which can be used in such an application. The coder operates at an average rate of 3 kb/s assuming 80% speech activity and uses several new techniques including a mode decision mechanism based on the use of a peakiness measure and a novel excitation search method called gain-matched analysis-by-synthesis. The coder gives quality comparable to the Microsoft GSM 6.10 Audio Codec at 13 kb/s",1997,0, 988,Layered analytic performance modelling of a distributed database system,"Very few analytic models have been reported for distributed database systems, perhaps because of complex relationships of the different kinds of resources in them. Layered queueing models seem to be a natural framework for these systems, capable of modelling all the different features which are important for performance (e.g. devices, communications, multithreaded processes, locking). To demonstrate the suitability of the layered framework, a previous queueing study of the CARAT distributed testbed has been recast as a layered model. Whereas the queueing model bears no obvious resemblance to the database system, the layered model directly reflects its architecture. The layered model predictions have about the same accuracy as the queueing model",1997,0, 989,Development of finite element design simulation tool for proximity sensor coils with ferrite cores,"In the development of inductive proximity sensors, a great deal of time and effort is spent in the design of the sensor coil, which is typically used with a ferrite core for increased Q (Quality factor). Computer simulation of the sensor coil could shorten this design process by eliminating the need for extensive building and testing of prototypes. Analytical solutions require extensive information about the material properties of the ferrite core, which is not readily available. While there are software packages commercially available that use the finite element method to simulate magnetic fields, these packages require extensive user training, and they involve lengthy run times. In this work, a linear minimal node finite element model of a proximity sensor coil is developed, serving as a rapid design simulation tool that requires little operator training. Using this minimal node model, a rapidly converging Pascal program is created to calculate the Q of a coil vs. frequency and vs. temperature. Measured and simulated data are presented for coils having a 22 mm×6.8 mm core.
Because this model is linear and uses a minimal number of nodes, errors do exist between measured and simulated data. The simulated Q curves are still reasonable representations of the measured Q curves however, indicating that the model can be used as a rapid design tool to reduce the flowtime of new coil designs",1997,0, 990,An automatic measurement system for the characterization of automotive gaskets,"The paper describes an automatic measurement system for the estimation of both geometrical sizes and physical features of automotive gaskets. The system hardware includes a personal computer and a CCD camera connected to a frame grabber. The system software consists of an original image processing procedure. At first, both hardware and software solutions are illustrated. Then, the whole image processing procedure, including original algorithmic strategies, is described in detail. Finally, experimental results on images of different automotive gaskets are reported together with the metrological characterization of the proposed measurement system",1997,0, 991,A VXI signal analyzer based on the wavelet transform,"The paper deals with a signal analyzer for real time detection and analysis of transients. In particular, the analyzer implements a suitable procedure, based on the wavelet transform, for analyzing disturbances affecting the power supply voltage. It mounts a good performance digital signal processor and adopts the VXI standard interface. Hardware and software solutions are described in detail. Experimental results on signals characterized by different disturbances with known characteristics assess the reliability of the proposed analyzer",1997,0, 992,On-line current monitoring to diagnose airgap eccentricity-an industrial case history of a large high-voltage three-phase induction motor,An appraisal of online monitoring techniques to detect airgap eccentricity in three-phase induction motors is presented. Results from an industrial case history show that an improved interpretation of the current spectrum can increase the reliability of diagnosing airgap eccentricity in large HV three-phase induction motors,1997,0, 993,Knowledge based system for faulty components detection in production testing of electronic device,"A knowledge-based system is proposed and described for faulty components detection and identification, in production testing of analog electronic boards. Its main parts are: the guided measuring probe and diagnostic expert system. Results are reported of using inductive machine learning technique for diagnostic rules acquisition",1997,0, 994,System implications of rain and ice depolarisation in Ka-band satellite communications,This paper investigates the impact of rain and ice depolarisation on the performance of satellite communications systems operating in the Ka-band with orthogonal polarisations. Various prediction methods for evaluating the degradation of dual-polarised system availability due to rain and ice are reviewed. Their results are compared using depolarisation data and statistics collected in Belgium at 20 GHz during the ESA's Olympus satellite experiment and exhibit significant discrepancies. A procedure based on the combined use of experimental data and of a simulation software for communications systems confirms the prediction produced by one method. 
The proposed procedure also yields estimation of system quality during particular depolarisation events,1997,0, 995,A new directional relay based on the measurement of fault generated current transients,"This paper presents a new principle in directional relaying in which the fault direction is determined based on a comparison of the spectral energy of the fault generated high frequency current transient components. In this scheme, a current transient detection unit, which is interfaced to the CTs on each of the output lines connecting the busbar, is employed to capture the fault generated transient current signals. The transient signal captured from each line is first integrated (based on a small window of information) to obtain its spectral energy; the level of the energy entering the busbar is then compared with that leaving it, to ascertain the direction of the fault. It is shown that the detector is also able to make correct discrimination when a fault occurs on the busbar. Simulation studies were carried out using the EMTP software. The faulted responses were examined with respect to various system and fault conditions",1997,0, 996,Automatic evaluation of transmission protection using information from digital fault recorders,"Digital fault recorders are increasingly being used by electric utilities throughout the world. The information from these records enables the power system engineer to determine and assess, by analysis, the power system response and the protection performance during fault conditions. Many fault records can be generated following certain incidents, particularly multiple faults caused by adverse weather conditions. In order to substantially reduce the time and effort required for this analysis, there is a need for tools which automate this process. This paper describes software (FR10A and AFRA) that has been developed for this purpose in The National Grid Company plc",1997,0, 997,Increasing signal accuracy of automotive wheel-speed sensors by online learning,"The wheel speed is an important signal for modern automotive control systems. The performance of these systems is closely related to the quality of the processed wheel speed. However, due to manufacturing tolerances or corrosion respectively of the sensor gear wheel, the quality of the signal is affected negatively. In this paper a software-based method to compensate for the mechanical inaccuracy of the sensor wheel is introduced. The approach increases the signal accuracy of conventional automotive wheel-speed sensors significantly. Moreover, with the presented method, the demand for manufacturing precision of the sensor wheel is reduced, which in turn cuts down the costs for mass production considerably. The correction is obtained by online learning of a correction factor for each sensor impulse using a fuzzy model for reconstructing the actual wheel-speed and recursive parameter identification for estimating the correction factors. A method to determine the angular position of the gear wheel without a reference impulse is proposed. The introduced approach is assessed by applying it to real world data",1997,0, 998,A visual search system for video and image databases,"This paper describes a new methodology for image search that is applicable to both image libraries and keyframe search over video libraries. 
In contrast to previous approaches, which require templates or direct manipulation of low-level image parameters, our search system classifies images into a pre-defined subject lexicon, including terms such as trees and flesh tones. The classification is performed off-line using neural network algorithms. Query satisfaction is performed on-line using only the image tags. Because most of the work is done off-line, this methodology answers queries much more quickly than techniques that require direct manipulation of images to answer the query. We also believe that pre-defined subjects are easier for users to understand when searching programmatic video material. Experiments using keyframes extracted by our video library system show that the methodology gives high-quality query results with fast on-line performance",1997,0, 999,The T experiments: errors in scientific software,"Extensive tests showed that many software codes widely used in science and engineering are not as accurate as we would like to think. It is argued that better software engineering practices would help solve this problem, but realizing that the problem exists is an important first step",1997,0, 1000,A Predictive Metric Based on Discriminant Statistical Analysis,"
First Page of the Article
",1997,0, 1001,An Investigation into Coupling Measures for C++,"
First Page of the Article
",1997,0, 1002,Benchmarking of commercial software for fault detection and classification (FDC) of plasma etchers for semiconductor manufacturing equipment,"In order to evaluate the performance of a large number of commercial FDC software, we conducted a benchmarking study at SEMATECH. The purpose of this paper is to report the tool data and the procedures that we were successfully able to use to benchmark the FDC software. These procedures can be used to compare FDC software based on similar or different principals. The results from this benchmarking have allowed the SEMATECH member companies to choose the software for beta-testing in the production environment. We conducted the benchmarking, by using the tool data obtained from three SEMATECH member companies, Texas Instruments, MOTOROLA and Lucent Technologies. Tool data was put on a secured FTP site with the instructions for solving the benchmarking problems. The software suppliers downloaded the data, analyzed the data and returned the results within the specified time-limits. The best results indicated that it may be possible to predict the tool faults in advance",1997,0, 1003,Performance of software-based MPEG-2 video encoder on parallel and distributed systems,"Video encoding due to its high processing requirements has been traditionally done using special-purpose hardware. Software solutions have been explored but are considered to be feasible only for nonreal-time applications requiring low encoding rates. However, a software solution using a general-purpose computing system has numerous advantages: it is more available and flexible and allows experimenting with and hence improving various components of the encoder. In this paper, we present the performance of a software video encoder with MPEG-2 quality on various parallel and distributed platforms. The platforms include an Intel Paragon XP/S and an Intel iPSC/860 hypercube parallel computer as well as various networked clusters of workstations. Our encoder is portable across these platforms and uses a data-parallel approach in which parallelism is achieved by distributing each frame across the processors. The encoder is useful for both real-time and nonreal-time applications, and its performance scales according to the available number of processors. In addition, the encoder provides control over various parameters such as the size of the motion search window, buffer management, and bit rate. The performance results include comparisons of execution times, speedups, and frame encoding rates on various systems",1997,0, 1004,Computation-constrained fast MPEG-2 encoding,"We introduce a computation-constrained predictive motion estimation algorithm for fast Motion Picture Expert Group-2 (MPEG-2) encoding. Motion estimation is restricted to a relatively small search area whose center is a two-dimensional (2D) median predicted motion vector. The number of computations is further reduced by incorporating explicitly a computational constraint during the estimation process. Experimental results show that the median prediction as well as the dynamic nature of the implementation lead to a dramatic increase in motion estimation speed for only a small loss in compression performance. 
A unique feature of our algorithm is that the reconstruction quality of the resulting MPEG-2 coder is proportional to the computer's processing power, providing much needed flexibility in many MPEG-2 software applications.",1997,0, 1005,Experimental control in a visual engineering environment: the use of DT VEETM in building a system for water quality assay,"This paper describes the use of Visual Engineering Environment (VEE) software in the control of a water quality monitoring system. The main purpose of the monitoring project was to assess the effectiveness of a cage culture turbidostat (CCT), relative to other bioassays, as a method for determining marine water quality (Clarkson and Leftley (I)). Bioassaying is a technique whereby living organisms, in this case algae, are used as indicators of toxin concentration",1997,0, 1006,Reusing testing of reusable software components,"A software component that is reused in diverse settings can experience diverse operational environments. Unfortunately, a change in the operating environment can also invalidate past experience about the component's quality of performance. Indeed, most statistical methods for estimating software quality assume that the operating environment remains the same. Specifically, the probability density governing the selection of program inputs is assumed to remain constant. However, intuition suggests that such a stringent requirement is unnecessary. If a component has been executed very many times in one environment without experiencing a failure, one would expect it to be relatively failure-free in other similar environments. This paper seeks to quantify that intuition. The question asked is, “how much can be said about a component's probability of failure in one environment after observing its operation in other environments?” Specifically, we develop bounds on the component's probability of failure in the new environment based on its past behavior",1997,0, 1007,A software metric for logical errors and integration testing effort,"Many software metrics are based on analysis of individual source code modules and do not consider the way that modules are interconnected. This presents a special problem for many current software development project environments that utilize a considerable amount of commercial, off-the-shelf or other reusable software components and devote a considerable amount of time to testing and integrating such components. We describe a new metric called the BVA metric that is based on an assessment of the coupling between program subunits. The metric is based on the testing theory technique known as boundary value analysis. For each parameter or global variable in a program module or subunit, we compute the number of test cases necessary for a “black-box” test of a program subunit based on partitioning that portion of the domain of the subunit that is affected by the parameter. The BVA metric can be computed relatively early in the software life cycle.
Experiments in several different languages and both academic and industrial programming environments suggest a close predictive relationship with the density of logical software errors and also with integration and testing effort",1997,0, 1008,"Graceful degradation in real-time control applications using (m, k)-firm guarantee","Tasks in a real-time control application are usually periodic and they have deadline constraints by which each instance of a task is expected to complete its computation even in the adverse circumstances caused by component failures. Techniques to recover from processor failures often involve a reconfiguration in which all tasks are assigned to fault-free processors. This reconfiguration may result in processor overload where it is no longer possible to meet the deadlines of all tasks. In this paper, we discuss an overload management technique which discards selected task instances in such a way that the performance of the control loops in the system remains satisfactory even after a failure. The technique is based on the rationale that real-time control applications can tolerate occasional misses of the control law updates, especially if the control law is modified to account for these missed updates. The paper devises a scheduling policy which deterministically guarantees when and where the misses will occur and proposes a methodology for modifying the control law to minimize the deterioration in the control system behavior as a result of these missed control law updates.",1997,0, 1009,Robust search algorithms for test pattern generation,"In recent years several highly effective algorithms have been proposed for Automatic Test Pattern Generation (ATPG). Nevertheless, most of these algorithms too often rely on different types of heuristics to achieve good empirical performance. Moreover there has not been significant research work on developing algorithms that are robust, in the sense that they can handle most faults with little heuristic guidance. In this paper we describe an algorithm for ATPG that is robust and still very efficient. In contrast with existing algorithms for ATPG, the proposed algorithm reduces heuristic knowledge to a minimum and relies on an optimized search algorithm for effectively pruning the search space. Even though the experimental results are obtained using an ATPG tool built on top of a Propositional Satisfiability (SAT) algorithm, the same concepts can be integrated on application-specific algorithms.",1997,0, 1010,Fail-awareness: an approach to construct fail-safe applications,"We present a framework for building fail-safe hard real-time applications on top of an asynchronous distributed system subject to communication partitions, i.e. using processors and communication facilities whose real-time delays cannot be guaranteed. The basic assumption behind our approach is that each processor has a local hardware clock that proceeds within a linear envelope of real-time. This allows to compute an upper bound on the actual delays incurred by a particular processing sequence or message transmission. Services and applications can use these computed bounds to detect when they cannot guarantee all their properties because of excessive delays.
This allows an application to detect when to switch to a fail-safe mode.",1997,0, 1011,COFTA: hardware-software co-synthesis of heterogeneous distributed embedded system architectures for low overhead fault tolerance,"Hardware-software co-synthesis is the process of partitioning an embedded system specification into hardware and software modules to meet performance, cost and reliability goals. In this paper, we address the problem of hardware-software co-synthesis of fault-tolerant real-time heterogeneous distributed embedded systems. Fault detection capability is imparted to the embedded system by adding assertion and duplicate-and-compare tasks to the task graph specification prior to cosynthesis. The reliability and availability of the architecture are evaluated during co-synthesis. Our algorithm allows the user to specify multiple types of assertions for each task. It uses the assertion or combination of assertions which achieves the required fault coverage without incurring too much overhead. We propose new methods to: 1) perform fault tolerance based task clustering, 2) derive the best error recovery topology using a small number of extra processing elements, 3) exploit multi-dimensional assertions, and 4) share assertions to reduce the fault tolerance overhead. Our algorithm can tackle multirate systems commonly found in multimedia applications. Application of the proposed algorithm to several real-life telecom transport system examples shows its efficacy.",1997,0, 1012,An estimated method for software testability measurement,"Software testability is becoming an important factor to be considered during software development and assessment, especially for critical software. The paper gives software testability, previously defined by Voas (1991, 1992), a new model and measurement which is done before random black-box testing with respect to a particular input distribution. The authors also compared their measurement results with the one simulated according to Voas's model. It showed that their rough testability estimate provides enough information and will be used as guidelines for software development",1997,0, 1013,An overview of object-oriented design metrics,"In this paper, we examine the current state in the field of object-oriented design metrics. We describe three sets of currently available metrics suites, namely, those of Chidamber and Kemerer (1993), Lorenz and Kidd (1994) and Abreu (1995). We consider the important features of each set, and assess the appropriateness and usefulness of each in evaluating the design of object-oriented systems. As a result, we identify problems common to all three sets of metrics, allowing us to suggest possible improvements in this area",1997,0, 1014,BOOTSTRAP: five years of assessment experience,"The main characteristics of the BOOTSTRAP method are briefly described, i.e. the reference framework, the assessment procedure, the structure of the questionnaires and the rating and scoring mechanisms used. The major part deals with the results of the BOOTSTRAP assessments performed worldwide over the last 5 years. In this part, major experiences from over 40 sites and 180 projects are analysed in detail and summarised. The empirical data support the basic concepts of the Capability Maturity Model (CMM), but demonstrate that the more detailed profiles the BOOTSTRAP method provides are an excellent instrument to identify strengths and weaknesses of an organisation.
BOOTSTRAP proved to be a very efficient and effective means not only to assess the current status of software process quality but also to initiate appropriate improvement actions. The last part deals with future trends in software process assessment approaches, i.e. the development of a common framework for performing assessments",1997,0, 1015,Heterojunction model for Focal Plane Array detector devices,"Night vision systems for military and commercial applications usually use an Infrared Focal Plane Array (IRFPA) for its radiation detector. Existing IRFPA models lack simplicity for setting up the detector's architecture/structure and lack continuity between IR detector material, IR detector processes and detector architecture. This paper will discuss the first version of a new IRFPA computer model which is to be used to simulate heterojunction IRFPAs with enhanced quantitative and visual information that allows the device engineer to assess the impact of material quality, processing procedures and IR detector architecture on IRFPA performance. This new model will be combined with statistical simulation to enable IRFPA design which have high performance and lowest cost",1997,0, 1016,Performability and reliability modeling of N version fault tolerant software in real time systems,"The paper presents a hierarchical modeling approach of the N version programming in a real time environment. The model is constructed in three layers. At the first layer we distinguish the NVP structure from its operational environment. The NVP structure submodel considers both failures of functionality and failures of performance. The operational environment submodel is based on the concept of the operational profile. The second layer consists of a per run reliability and performance submodels. The first considers per run failure probabilities, while the second is responsible for modeling the series of successive runs over a mission. The information contributed by the second layer constitutes third layer models which support the evaluation of a performability and reliability over mission. The work presented, generalizes our previous work as it considers general distributions of the versions' time to failure and execution time. Also, in addition to the performability model, the third layer includes a model aimed at reliability assessment over a mission period.",1997,0, 1017,EvA: a tool for optimization with evolutionary algorithms,"We describe the EvA software package which consists of parallel (and sequential) implementations of genetic algorithms (GAs) and evolution strategies (ESs) and a common graphical user interface. We concentrate on the descriptions of the two distributed implementations of GAs and ESs which are of most interest for the future. We present comparisons of different kinds of genetic algorithms and evolution strategies that include implementations of distributed algorithms on the Intel Paragon, a large MIMD computer and massively parallel algorithms on a 16384 processor MasPar MP-1, a large SIMD computer. The results show that parallelization of evolution strategies not only achieves a speedup in execution time of the algorithm, but also a higher probability of convergence and an increase of quality of the achieved solutions.
In the benchmark functions we tested, the distributed ESs have a better performance than the distributed GAs.",1997,0, 1018,Investigation of printed wiring board testing by using planar coil type ECT probe,"A new application of eddy current testing techniques for investigating trace defects on printed circuit boards is proposed. A test probe consisting of a meander type exciting coil is used to induce eddy currents. The following three experiments are conducted: measuring the induced signal when a circuit trace is cut; measuring the induced signal for a number of traces placed in parallel and with a cut in the centre trace; measuring the induced signal for two back to back right angle traces. The experimental results reveal that it is possible to clearly detect defects and that the signal response obtained is strongly associated with a particular defect pattern. The signals obtained from a high density patterned board have a complicated signal signature and are therefore difficult to interpret. This complexity can be avoided by comparing the signal signature of a known good board with a defective board. The difference signal gives a clear indication of a trace defect",1997,0, 1019,Barrel shifter-a close approximation to the completely connected network in supporting dynamic tree structured computations,"High performance computing requires high quality load distribution of processes of a parallel application over processors in a parallel computer at runtime such that both maximum load and dilation are minimized. The performance of a simple randomized tree growing algorithm on the barrel shifter and the Illiac networks is studied in this paper. The algorithm spreads tree nodes by letting them take random walks to neighboring processors. We develop recurrence relations that characterize expected loads on all processors. We find that the performance ratio of probabilistic dilation-1 tree embedding in the barrel shifter network with N processors (a network with node degree O(log N)) is very close to that in the completely connected network of the same size. However, the hypercube network, which also has node degree log N, does not have such a capability. As a matter of fact, even the Illiac network, which is a subnetwork of the barrel shifter, has an optimal asymptotic performance ratio",1997,0, 1020,A general equipment diagnostic system and its application on photolithographic sequences,"This paper presents a general diagnostic system that can be applied to semiconductor equipment to assist the operator in finding the causes of decreased machine performance. Based on conventional probability theory, the diagnostic system incorporates both shallow and deep level information. From the observed evidence, and from the conditional probabilities of faults initially supplied by machine experts (and subsequently updated by the system), the unconditional fault probabilities and their bounds are calculated. We have implemented a software version of the diagnostic system, and tested it on real photolithography equipment malfunctions and performance drifts. Initial experimental results are encouraging",1997,0, 1021,"A test structure advisor and a coupled, library-based test structure layout and testing environment","A new test chip design environment, based on commercial tools, containing a test structure advisor and a coupled, library-based layout and testing environment has been developed that dramatically increases productivity.
The test structure advisor analyzes cross sections of devices to recommend a comprehensive list of diagnostic and parametric test structures. These test structures can then be retrieved from the libraries of parameterized structures, customized, and placed in a design to rapidly generate customized test chips. Coupling of the layout and test environments enables the automatic generation of the vast majority of the parametric test software. This environment, which has been used to design test chips for a low-power/low-voltage CMOS process, a BiCMOS trench process, and a TFT process, results in an estimated tenfold increase in productivity. In addition, the redesign of five modules of Stanford's existing BiCMOS test chip, using parameterized test structures, showed an 8× improvement in layout time alone",1997,0, 1022,Empirical evaluation of retrieval in case-based reasoning systems using modified cosine matching function,"Case-based reasoning (CBR) supports ill-structured decision making by retrieving previous cases that are useful toward the solution of a new decision problem. The usefulness of previous cases is determined by assessing the similarity of a new case with the previous cases. In this paper, we present a modified form of the cosine matching function that makes it possible to contrast the two cases being matched and to include differences in the importance of features in the new case and the importance of features in the previous case. Our empirical evaluation of a CBR application to a diagnosis and repair task in an electromechanical domain shows that the proposed modified cosine matching function has a superior retrieval performance when compared to the performance of nearest-neighbor and the Tversky's contrast matching functions",1997,0, 1023,An analysis of the post-fault behavior of robotic manipulators,"Operations in hazardous or remote environments are invariably performed by robots. The hostile nature of the environments, however, increase the likelihood of failures for robots used in such applications. The difficulty and delay in the detection and consequent correction of these faults makes the post-fault performance of the robots particularly important. This work investigates the behavior of robots experiencing undetected locked-joint failures in a general class of tasks characterized by point-to-point motion. The robot is considered to have “convergedâ€?to a task position and orientation if all its joints come to rest when the end-effector is at that position. It is seen that the post-fault behavior may be classified into three categories: (1) the robot converges to the task position; (2) the robot converges to a position other than the task position; or (3) the robot does not converge, but keeps moving forever. The specific conditions for convergence are identified, and the different behaviors illustrated with examples of simple planar manipulators",1997,0, 1024,A retargetable table reader,"We describe the architecture of a system for reading machine-printed documents in known predefined tabular-data layout styles. In these tables, textual data are presented in record lines made up of fixed-width fields. Tables often do not rely on line-art (ruled lines) to delimit fields, and in this way differ crucially from fixed forms. Our system performs these steps: copes with multiple tables per page; identifies records within tables; segments records into fields; and recognizes characters within fields, constrained by field-specific contextual knowledge. 
Obstacles to good performance on tables include small print, tight line-spacing, poor-quality text (such as photocopies), and line-art or background patterns that touch the text. Precise skew-correction and pitch-estimation, and high-performance OCR using neural nets proved crucial in overcoming these obstacles. The most significant technical advances in this work appear to be algorithms for identifying and segmenting records with known layout, and integration of these algorithms with a graphical user interface (GUI) for defining new layouts. This GUI has been ergonomically designed to make efficient and intuitive use of exemplary images, so that the skill and manual effort required to retarget the system to new table layouts are held to a minimum. The system has been applied in this way to more than 400 distinct tabular layouts. During the last three years the system has read over fifty million records with high accuracy",1997,0, 1025,Identifying fault-prone function blocks using the neural networks an empirical study,"An empirical study was carried out at ETRI to investigate the relationship between several metrics and the number of change requests associated with function blocks. For this, we propose the software metrics of CHILL language, which are used to develop the telecommunication software. We present the identification method of fault-prone function blocks of telecommunication software based on neural-networks. We focus on investigate the relationship between the proposed metrics and function block's faults which are found during development phase. Using the neural network model, we classify function blocks as either fault-prone or not fault prone based on the proposed metrics. Therefore, for newly added function blocks, we can predict whether the function block is fault-prone or not fault prone",1997,0, 1026,An on-line algorithm for checkpoint placement,"Checkpointing enables us to reduce the time to recover from a fault by saving intermediate states of the program in a reliable storage. The length of the intervals between checkpoints affects the execution time of programs. On one hand, long intervals lead to long reprocessing time, while, on the other hand, too frequent checkpointing leads to high checkpointing overhead. In this paper, we present an on-line algorithm for placement of checkpoints. The algorithm uses knowledge of the current cost of a checkpoint when it decides whether or not to place a checkpoint. The total overhead of the execution time when the proposed algorithm is used is smaller than the overhead when fixed intervals are used. Although the proposed algorithm uses only on-line knowledge about the cost of checkpointing, its behavior is close to the off-line optimal algorithm that uses a complete knowledge of checkpointing cost",1997,0, 1027,A Chinese bank check recognition system based on the fault tolerant technique,"The contradiction between the high recognition accuracy and the low rejection rate in automatic bank check recognition has not been solved successfully. In this paper, a fault-tolerant Chinese bank check recognition system is presented to solve the contradiction between the need for low-error-recognition probability and the need for low-refused-recognition probability. The main idea is to use a dynamic cipher code (which is to be widely applied in China) to lower both of them. This system achieves a high recognition rate and a high reliability simultaneously when automatically processing Chinese bank checks with dynamic cipher codes. 
A practical scheme of fault-tolerant recognition of bank checks is given in this paper, and experiments show the performance of our fault-tolerant technique",1997,0, 1028,Viscosity as a metaphor for measuring modifiability,"An analytic framework termed `cognitive dimensions' is introduced and developed to provide formal definitions of dimensions for assessing the suitability of interactive systems for particular tasks. Cognitive dimensions is a psychological framework that provides broadbrush characterisations of interactions that are relevant to ease of use, and an effective terminology to support a wide range of assessments, including the resistance of languages and notations to modification. It is proposed that software design can benefit from the use of cognitive dimensions as tools for assessing software characteristics such as modifiability. To enable this, formal definitions of specific dimensions are developed. This enables the interpretation of otherwise informal dimensions in a precise and generic way. The authors develop and examine two dimensions associated with the notion of `viscosity' (resistance to local change) and demonstrate their relevance in the context of program modification. Two case studies exploring modifications in alternative programming languages and differing styles of solution are used to illustrate the utility of cognitive dimensions. The authors continue by identifying similarities between the novel notion of cognitive dimensions and conventional notions of program quality, such as coupling and cohesion",1997,0, 1029,Neural networks for engine fault diagnostics,"A dynamic neural network is developed to detect soft failures of sensors and actuators in automobile engines. The network, currently implemented off-line in software, can process multi-dimensional input data in real time. The network is trained to predict one of the variables using others. It learns to use redundant information in the variables such as higher order statistics and temporal relations. The difference between the prediction and the measurement is used to distinguish a normal engine from a faulty one. Using the network, we are able to detect errors in the manifold air pressure sensor and the exhaust gas recirculation valve with a high degree of accuracy",1997,0, 1030,A framework for parallel tree-based scientific simulations,"This paper describes an implementation of a platform-independent parallel C++ N-body framework that can support various scientific simulations that involve tree structures, such as astrophysics, semiconductor device simulation, molecular dynamics, plasma physics, and fluid mechanics. Within the framework the users will be able to concentrate on the computation kernels that differentiate different N-body problems, and let the framework take care of the tedious and error-prone details that are common among N-body applications. This framework was developed based on the techniques we learned from previous CM-5 C implementations, which have been rigorously justified both experimentally and mathematically. This gives us confidence that our framework will allow fast prototyping of different N-body applications, to run on different parallel platforms, and to deliver good performance as well",1997,0, 1031,Automatic classification of wafer defects: status and industry needs,"This paper describes the automatic defect classification (ADC) beta site evaluations performed as part of the SEMATECH ADC project.
Two optical review microscopes equipped with ADC software were independently evaluated in manufacturing environments. Both microscopes were operated in bright-field mode with white light illumination; ADC performance was measured on three process levels of random logic devices: source/drain, polysilicon gate, and metal. ADC performance metrics included classification accuracy, repeatability, and speed. In particular, ADC software was tested using a protocol that included knowledge base tests, gauge studies, and small passive data collections",1997,0, 1032,Wavelet image extension for analysis and classification of infarcted myocardial tissue,"Some computer applications for tissue characterization in medicine and biology, such as analysis of the myocardium or cancer recognition, operate with tissue samples taken from very small areas of interest. In order to perform texture characterization in such an application, only a few texture operators can be employed: the operators should be insensitive to noise and image distortion and yet be reliable in order to estimate texture quality from the small number of image points available. In order to describe the quality of infarcted myocardial tissue, the authors propose a new wavelet-based approach for analysis and classification of texture samples with small dimensions. The main idea of this method is to decompose the given image with a filter bank derived from an orthonormal wavelet basis and to form an image approximation with higher resolution. Texture energy measures calculated at each output of the filter bank as well as energies of synthesized images are used as texture features in a classification procedure. The authors propose an unsupervised classification technique based on a modified statistical t-test. The method is tested with clinical data, and the classification results obtained are very promising. The performance of the new method is compared with the performance of several other transform-based methods. The new algorithm has advantages in classification of small and noisy input samples, and it represents a step toward structural analysis of weak textures.",1997,0, 1033,Validating the defect detection performance advantage of group designs for software reviews: report of a replicated experiment,"It is widely accepted that software development technical reviews (SDTRs) are a useful technique for finding defects in software products. The normative SDTR literature assumes that group reviews are better than individual reviews. Recent debates centre around the need for review meetings. This paper presents the findings of a replicated experiment that was conducted to investigate whether group review meetings are needed and why. We found that an interacting group is the preferred choice over the average individual and artificial (nominal) groups. The source of the performance advantage of interacting groups is not synergy, as was previously thought, but rather the ability to discriminate between true defects and false positives identified by individual reviewers. As a practical implication, nominal groups may be an alternative review design in situations where individuals exhibit a low level of false positives",1997,0, 1034,Supporting hardware trade analysis and cost estimation using design complexity,"Defines and illustrates a hardware design complexity measure (HDCM) and describes its potential applications to trade-off analysis and cost estimation. Specifically, we define a VHDL complexity measure.
We have derived the HDCM from an avionics software design complexity measure (ASDCM) that we have shown to be effective in estimation and optimization of overall software costs. Similar to the ASDCM, we believe that the proposed HDCM could enable more optimal hardware design, implementation and maintenance",1997,0, 1035,A reverse engineering approach to evaluate function point rules,"Function points are generally used for measuring software functional size from a user perspective. The paper is concerned with the problem of counting function points from source code using the function point analysis proposed by the International Function Point User Group (IFPUG) 1994 standards. The paper presents the scope and objective of automated FP counting, an existing semi-formal model, and the required extensions for the definition of four IFPUG rules. Then the authors propose reverse engineering techniques to address those four rules",1997,0, 1036,Implementation and performance assessment of multilevel data structures,The authors discuss the implementation and performance assessment of a multilevel data structure system prototype. They compare the multilevel data structure approach with corresponding fault-intolerant implementations to assess the overhead imposed for incorporating fault tolerance. The simulation shows promising results in using multilevel data structures,1997,0, 1037,A formal approach to software components classification and retrieval,"We propose an approach to reuse-based software development using a formal method. In our approach, each software component is annotated with a set of predicates to formally describe the component and is classified using a faceted scheme. A user may retrieve components from the library using either keywords or predicates. When a component is retrieved, it is checked to determine if it matches the requirements. Then, the component is integrated with the designed system, along with its required functionalities. The integrated component/system is transformed into a Predicate/Transition net (PrT net) to perform consistency checking. If there is no inconsistency, the component may be adapted or incorporated directly. Otherwise, the conditions that cause the inconsistency will be revealed. The user may decide to search the library again by modifying the query specification and restart the whole process, or to terminate the search",1997,0, 1038,Communication middleware and software for QoS control in distributed real-time environments,"There has been an increasing need for highly predictable, timely, and dependable communication services with QoS guarantees on an end-to-end basis either for embedded real-time applications or for multimedia-integrated distributed control. Performance objectives used in conventional networks-such as maximizing the throughput, minimizing the response time, or providing fairness to users-are not of the most important concern for both types of applications. Instead, resources should be appropriately reserved and managed to support multidimensional quality of service (QoS) on an end-to-end basis, as well as application-specific tradeoffs among them. The main intent of the paper is thus to address and demonstrate an environment-an integrated set of network resource management techniques, middleware layers, and network software-for supporting multi-dimensional QoS on an end-to-end basis in distributed real-time environments.
The authors first provide an analytic QoS framework. Then, they elaborate on the algorithms and mechanisms to provide such network services. To empirically analyze the behavior of the proposed components, they are currently implementing them as software layers on top of the Sun Solaris operating system, and comment on the implementation status",1997,0, 1039,The Internal Revenue Service function point analysis program: a brief,"The Internal Revenue Service (IRS) Information Systems has been using function point analysis since 1993 as the foundation of its software metrics program. Using function point analysis, the IRS has significantly improved the quality of its software project management as measured by the SEI Capability Maturity Model. The IRS can determine the size of its software about four times faster than industry averages, can estimate software size and resource requirements for new development projects as soon as the central data model is known, and can accurately size small software work orders and estimate corresponding resources needed without examining the software or meeting with the project team. Each of these methodologies qualifies at least at CMM Level 2",1997,0, 1040,Test order for inter-class integration testing of object-oriented software,"One major problem in inter class integration testing of object oriented software is to determine the order in which classes are tested. This test order, referred to as inter class test order, is important since it affects the order in which classes are developed, the use of test stubs and drivers for classes, and the preparation of test cases. The paper first proposes a number of desirable properties for inter class test order and then presents a new inter class test order strategy. In this new strategy, classes are integrated according to their major and minor level numbers. Major level numbers of classes are determined according to inheritance and aggregation relations between classes, where an aggregation relation refers to a class' inclusion of objects of another class. For classes with the same major level number, their minor level numbers are determined according to association relations between these classes, where an association relation refers to a class' dependency (other than inheritance and aggregation relations) on another class",1997,0, 1041,A DSS for ISO 9000 certification in the health service: a case study,"The European Community market points out the need to overcome the technical obstacles arising from national rules and laws which prevent the free trading of products and services and the cooperation among foreign companies. To make such a process easier and to harmonize the trades, a quality certification was established to assure the technical specifications of the products and the quality of the services offered. These rules have been collected in an E.C. Law called ISO 9000. The conformity to ISO 9000 rules has now become a fundamental requirement and a key element for the competitiveness of manufacturing enterprises. At present the need to extend the quality parameters is also spreading in the services field, where both rules and their directives are rather indefinite. ISO 8402 rules, in fact, define the idea of quality as the whole of the performance and features of a product or service that must satisfy declared or implicit needs. Our work has been carried out together with the Dental Clinic of the University of Genoa, which can be considered a case study for similar structures.
This paper focuses on the importance of clearly defining both the requirements of a customer and the characteristics of the service in order to satisfy such requirements. This paper gives a practical solution for outlining the policy and the objectives of a quality system for the particular health structure. The proposed software structure, written in the Java language, is part of a decision support system which can be used to achieve the goal of ISO 9000 certification in such an environment. The proposed software structure also allows one to define and organize all data to be maintained according to ISO 9000 rules and can detect any deficiency which may compromise the quality standard",1997,0, 1042,Forecasting network performance to support dynamic scheduling using the network weather service,"The Network Weather Service is a generalizable and extensible facility designed to provide dynamic resource performance forecasts in metacomputing environments. In this paper, we outline its design and detail the predictive performance of the forecasts it generates. While the forecasting methods are general, we focus on their ability to predict the TCP/IP end-to-end throughput and latency that is attainable by an application using systems located at different sites. Such network forecasts are needed both to support scheduling, and by the metacomputing software infrastructure to develop quality-of-service guarantees",1997,0, 1043,Architectural composition in the electronic design studio: conceptual design using CAD visualisation and virtual reality modelling,"The paper evaluates the possibilities for the use of computer aided design and desktop virtual reality technologies as tools for architectural composition. An experimental teaching programme involving undergraduate architectural students at the University of Luton, in which aspects of compositional theory are explored through the direct creation of architectural form and space in digital formats, is described. In the programme, principles of architectural composition, based upon the ordering and organisation of typological architectural elements according to established rules of composition, are introduced to the students through the study of recognised works of design theory. CAD and desktop virtual reality are then used to define and manipulate architectural elements, and to make formal and spatial evaluations of the environments created. The paper describes the theoretical context of the work, assesses the suitability of the software used for performing compositional manipulations, and evaluates the qualities of immersion and intuitive feedback which virtual reality based modelling can offer in the design visualisation process. The teaching programme utilises standard software packages, including AutoCAD and 3D Studio, as well as Superscape VRT, a PC based desktop VR package",1997,0, 1044,Design of phase and amplitude comparators for transmission line protection,"This paper describes the design of poly-phase power system relays by using the amplitude and phase comparison techniques. Both techniques are used for designing phase-to-phase, three-phase and phase-to-ground relays. A new poly-phase amplitude method, which uses inputs from the phase comparator, is proposed for protecting power transmission lines during phase-to-phase and three-phase faults. A unique amplitude and phase comparison approach is introduced for use in a poly-phase phase-to-ground relaying unit. The proposed designs are suitable for implementation on microprocessors.
Performance of the comparators was evaluated by using data simulated using EMTDC and some results are included in the paper",1997,0, 1045,Validating fault tolerant designs using laser fault injection (LFI),"Fault-tolerant processors and processing architectures traditionally have relied on higher level (module level and above) techniques for fault detection and recovery. The advent of VLSI and VHSIC technology, with increasing numbers of gates on a chip and increasing numbers of I/O pins available, has provided increasing opportunities for the inclusion of on-chip fault and defect tolerance. On-chip fault tolerance, encompassing hardware concurrent error detection and recovery mechanisms, offers the potential for improving system availability and supporting real-time fault tolerance. This paper describes the successful development and demonstration of a Laser Fault Injection (LFI) technique to inject soft, i.e., transient faults into VLSI circuits in a precisely-controlled, non-destructive, non-intrusive manner for the purpose of validating fault tolerant design and performance. This technique, as well as enabling the validation of fault-tolerant VLSI designs, also offers the potential for performing automated testing of board-level and system-level fault tolerant designs including the operating system and application software. It requires no vacuum test chamber, radioactive source, or additional on-chip fault injection circuitry. The laser fault injection technique has the advantage that it supports in situ testing of circuits and systems operating at speed. It can also emulate transient error conditions that standard diagnostic test patterns and techniques may miss",1997,0, 1046,An IDDQ sensor for concurrent timing error detection,"Error control is a major concern in many computer systems, particularly those deployed in critical applications. Experience shows that most malfunctions are caused by transient faults which often manifest themselves as signal delays or other timing violations. We present a novel CMOS based concurrent error detection circuit that allows a flip flop (or other timing sensitive circuit element) to signal when its data has been potentially corrupted by a timing violation. Our circuit employs on-chip IDDQ evaluation to determine when the input changes in relation to a clock edge. If the input changes too close to clock time, the resulting switching transient current exceeds a reference threshold, and an error is flagged. We have designed, fabricated and evaluated a test chip that shows that such an approach can be used to effectively detect setup and hold time violations in clocked circuit elements",1997,0, 1047,QoS allocation of multicast connections in ATM,"We study the resource allocation problem and call admission problem for multicast applications in ATM networks. Two approaches are proposed for dividing the end-to-end QoS requirement into the local QoS constraint on each link of the multicast tree. The first approach is to apply the greedy method to divide the end-to-end QoS based on the residual capacity on the links. In the second approach we show how to use a genetic algorithm to solve the QoS allocation problem. The effective bandwidth concept is used for estimating the amount of bandwidth required to guarantee a certain level of QoS on each link. If the end-to-end QoS cannot be guaranteed due to insufficient network resources, the multicast request is rejected (as a result of call admission). 
A new performance metric, fractional reward loss, is adopted for evaluating the solutions generated by the two proposed approaches. Our experimental results show that our approaches yield lower fractional reward loss and utilize the network resources more efficiently",1997,0, 1048,User-safe devices for true end-to-end QoS,"User-safe devices are a little smarter than your average device. They implement some of the functionality traditionally found in device drivers within the device itself. This provides a number of performance benefits, particularly for devices attached to I/O networks. User-safe devices are serious about quality of service. They can provide the guarantees that applications require in order to behave predictably on a loaded system. This paper describes user-safe devices, and the role they play in the University of Cambridge's experimental multimedia workstation",1997,0, 1049,Software measurement and formal methods: a case study centered on TRIO+ specifications,"Presents a case study where product measures are defined for a formal specification language (TRIO+) and are validated as quality indicators. To this end, defect and effort data were collected during the development of a monitoring and control system for a power plant. We show that some of the underlying hypotheses of these measures are supported by empirical results and that several measures are significant early indicators of specification change and effort. From a more general perspective, this study exemplifies one important advantage of formal specifications: they are measurable and can thus be better controlled, assessed and managed than informal ones.",1997,0, 1050,Evolutionary neural networks: a robust approach to software reliability problems,"In this empirical study, from a large data set of software metrics for program modules, thirty distinct partitions into training and validation sets are automatically generated with approximately equal distributions of fault prone and not fault prone modules. Thirty classification models are built for each of the two approaches considered-discriminant analysis and the evolutionary neural network (ENN) approach-and their performances on corresponding data sets are compared. The lower error proportions for ENNs on fault prone, not fault prone, and overall classification were found to be statistically significant. The robustness of ENNs follows from their superior performance on the range of data configurations used. It is suggested that ENNs can be effective in other software reliability problem domains, where they have been largely ignored",1997,0, 1051,Predicting fault-prone modules with case-based reasoning,"Software quality classification models seek to predict quality factors such as whether a module will be fault prone, or not. Case based reasoning (CBR) is a modeling technique that seeks to answer new questions by identifying similar "cases" from the past. When applied to software reliability, the working hypothesis of our approach is this: a module currently under development is probably fault prone if a module with similar product and process attributes in an earlier release was fault prone. The contribution of the paper is the application of case based reasoning to software quality modeling. To the best of our knowledge, this is the first time that case based reasoning has been used to identify fault prone modules.
A case study illustrates our approach and provides evidence that case based reasoning can be the basis for useful software quality classification models that are competitive with discriminant models. The case study revisits data from a previously published nonparametric discriminant analysis study. The Type II misclassification rate of the CBR model was substantially better than that of the discriminant model. Although the Type I misclassification rate was slightly greater and the overall misclassification rate was only slightly less, the CBR model was preferred when costs of misclassification were considered",1997,0, 1052,Detection of response time failures of real-time software,"Classical software reliability research has tended to focus on behavioral type failures which typically manifest themselves as incorrect or missing outputs. In real time software, a correct output which is not produced within a specified response time interval may also constitute a failure. As a result, response time failures must also be taken into account when real time software reliability is assessed. The paper considers the case where the detection of response time failures is done by a separate unit which observes the inputs and outputs of the target software. Automatic detection of such failures is complicated by state dependencies which require the unit to track a target's state as well as the elapsed times between specified stimulus and response pairs. A novel black box approach is described for detecting response time failures and quality of service degradations of session oriented, real time software. The behavior of the target software is assumed to be specified in a formalism based on the notion of communicating extended finite state machines. The response time failure detection unit, implemented independently of the target software, interprets a formal model derived directly from the target's requirement specifications. The model is used both to track the state of the target and to determine when to start and stop time interval timing measurements. Measurements of multiple response time intervals may occur simultaneously. The approach was evaluated on the call processing program of a small telephone exchange. Some results of the evaluation are presented and discussed",1997,0, 1053,Hierarchical supervisors for automatic detection of software failures,"Software supervision is an approach to the automatic detection of software failures. A supervisor observes the inputs and outputs of a target system. It uses a model of correct behavior, derived from the target system's requirement specification. Discrepancies between specified and observed behaviors are reported as failures. Applications of the supervisor include online failure detection in real time reactive systems, fault localization and automatic collection of failure data. The paper describes a hierarchical approach to supervision. The approach differs from previous approaches in that supervision is split into two sub problems: tracking the behavior of the target system and detailed behavior checking. The architecture of the hierarchical supervisor has two layers: the path detection layer and the base supervisor layer. The hierarchical approach results in a significant reduction in computational cost arising from specification nondeterminism. The approach was evaluated by supervising the call processing software of a small telephone exchange, executed under random telephone traffic at different loads.
Several thousand failures were individually seeded into the output generated by the exchange. The supervisor was able to detect the presence of all seeded failures. Reductions in computational cost of several orders of magnitude were measured in comparison with the direct, single layer supervisor",1997,0, 1054,Testing strategies for form-based visual programs,"Form based visual programming languages, which include electronic spreadsheets and a variety of research systems, have had a substantial impact on end user computing. Research shows that form based visual programs often contain faults, and that their creators often have unwarranted confidence in the reliability of their programs. Despite this evidence, we find no discussion in the research literature of techniques for testing or assessing the reliability of form based visual programs. The paper addresses this lack. We describe differences between the form based and imperative programming paradigms, and discuss effects these differences have on strategies for testing form based programs. We then present several test adequacy criteria for form based programs, and illustrate their application. We show that an analogue to the traditional "all-uses" dataflow test adequacy criterion is well suited for code based testing of form based visual programs: it provides important error detection ability, and can be applied more easily to form based programs than to imperative programs",1997,0, 1055,Using reversible computing to achieve fail-safety,"This paper describes a fail-safe design approach that can be used to achieve a high level of fail-safety with conventional computing equipment which may contain design flaws. The method is based on the well-established concept of reversible computing. Conventional programs destroy information and hence cannot be reversed. However it is easy to define a virtual machine that preserves sufficient intermediate information to permit reversal. Any program implemented on this virtual machine is inherently reversible. The integrity of a calculation can therefore be checked by reversing back from the output values and checking for the equivalence of intermediate values and original input values. By using different machine instructions on the forward and reverse paths, errors in any single instruction execution can be revealed. Random corruptions in data values are also detected. An assessment of the performance of the reversible computer design for a simple reactor trip application indicates that it runs about ten times slower than a conventional software implementation and requires about 20 kilobytes of additional storage. The trials also show a fail-safe bias of better than 99.998% for random data corruptions, and it is argued that failures due to systematic flaws could achieve similar levels of fail-safe bias. Potential extensions and applications of the technique are discussed",1997,0, 1056,"Building software quality classification trees: approach, experimentation, evaluation","A methodology for generating an optimum software quality classification tree using software complexity metrics to discriminate between high-quality modules and low-quality modules is proposed. The process of tree generation is an application of the AIC (Akaike Information Criterion) procedures to the binomial distribution. AIC procedures are based on maximum likelihood estimation and the least number of complexity metrics.
It is an improvement on the software quality classification tree generation method proposed by Porter and Selby (1990) in that the number of complexity metrics is minimized. The problems of their method are that the software quality prediction model is unstable because it reflects observational errors in real data too much, and there is no objective criterion for determining whether the discrimination is appropriate or not at a deep nesting level of the classification tree when the number of sample modules gets smaller. To solve these problems, a new metric is introduced and its validity is theoretically and experimentally verified. In our examples, complexity metrics for programs written in the C language, such as lines of source code, Halstead's (1977) software science, McCabe's (1976) cyclomatic number, Henry and Kafura's (1981) fan-in/out and Howatt and Baker's (1989) scope number, are investigated. Our experiments with a medium-sized piece of software (85 thousand lines of source code; 562 samples) show that the software quality classification tree generated by our new metric identifies the target class of the observed modules more efficiently than the conventional classification tree, using the minimum number of complexity metrics and without any significant decrease in the correct classification ratio (76%->72%)",1997,0, 1057,Confidence-based reliability and statistical coverage estimation,"In confidence-based reliability measurement, we determine that we are at least C confident that the probability of a program failing is less than or equal to a bound B. The basic results of this approach are reviewed and several additional results are introduced, including the adaptive sampling theorem which shows how confidence can be computed when faults are corrected as they appear in the testing process. Another result shows how to carry out testing in parallel. Some of the problems of statistical testing are discussed and an alternative method for establishing reliability called statistical coverage is introduced. At the cost of making reliability estimates that are relative to a fault model, statistical coverage eliminates the need for output validation during reliability estimation and allows the incorporation of non-statistical testing results into the statistical reliability estimation process. Statistical testing and statistical coverage are compared, and their relationship with traditional reliability growth modeling approaches is briefly discussed",1997,0, 1058,Analysis of a software reliability growth model with logistic testing-effort function,"We investigate a software reliability growth model (SRGM) based on the Non Homogeneous Poisson Process (NHPP) which incorporates a logistic testing effort function. Software reliability growth models proposed in the literature incorporate the amount of testing effort spent on software testing which can be described by an exponential curve, a Rayleigh curve, or a Weibull curve. However it may not be reasonable to represent the consumption curve for testing effort only by an exponential, a Rayleigh or a Weibull curve in various software development environments. Therefore, we show that a logistic testing effort function can be expressed as a software development/test effort curve and gives a reasonable predictive capability for the real failure data. Parameters are estimated and experiments on three actual test/debug data sets are illustrated.
The results show that the software reliability growth model with logistic testing effort function can estimate the number of initial faults better than the model with Weibull type consumption curve. In addition, the optimal release policy of this model based on cost reliability criterion is discussed",1997,0, 1059,Software metrics model for integrating quality control and prediction,"A model is developed that is used to validate and apply metrics for quality control and quality prediction, with the objective of using metrics as early indicators of software quality problems. Metrics and quality factor data from the Space Shuttle flight software are used as an example. Our approach is to integrate quality control and prediction in a single model and to validate metrics with respect to a quality factor. Boolean discriminant functions (BDFs) were developed for use in the quality control and quality prediction process. BDFs provide good accuracy for classifying low quality software because they include additional information for discriminating quality: critical values. Critical values are threshold values of metrics that are used to either accept or reject modules when the modules are inspected during the quality control process. A series of nonparametric statistical methods is also used in the method presented. It is important to perform a marginal analysis when making a decision about how many metrics to use in the quality control and prediction process. We found that certain metrics are dominant in their effects on classifying quality and that additional metrics are not needed to accurately classify quality. This effect is called dominance. Related to the property of dominance is the property of concordance, which is the degree to which a set of metrics produces the same result in classifying software quality. A high value of concordance implies that additional metrics will not make a significant contribution to accurately classifying quality; hence, these metrics are redundant",1997,0, 1060,Advances in debugging high performance embedded systems,"The development of application software has quickly become the most expensive and people-intensive area of systems design. Depending on the nature of the software and with the advent of software quality controls, it has been estimated that each line of code can cost upwards of $50 to develop and debug (this rationale appears to hold true for both assembly language and high level language based algorithms). This being the case, it should be no surprise that debugging techniques have been subject to much rethinking recently and that debugging aids and systems are now being implemented in embedded controllers",1997,0, 1061,Availability analysis of transaction processing systems based on user-perceived performance,"Transaction processing systems are judged by users to be correctly functioning not only if their transactions are executed correctly, but also if most of them are completed within an acceptable time limit. Therefore, we propose a definition of availability for systems for whom there is a notion of system failure due to frequent violation of response time constraints. We define the system to be available at a certain time if at that time the fraction of transactions meeting a deadline is above a certain user requirement. This definition lends to very different estimates of availability measures such as system downtimes as compared with more traditional measures. 
We conclude that for transaction processing systems, where the user's perception is important, our definition more correctly quantifies the availability of the system",1997,0, 1062,Applying simulation to the design and performance evaluation of fault-tolerant systems,"The paper illustrates how the CESIUM simulation tool can be used for design and performance evaluation of fault tolerant and real time systems, in addition to testing the correctness of protocol implementations. We calibrate three increasingly accurate simulation models of a network of workstations using independently obtained data. For a sample group membership protocol, the predictions of the simulator are very close to the actual performance measured in the real system. We also apply CESIUM to the evaluation of two potential improvements for the protocol, performing experiments that would have been difficult to implement in the real system. The results of the simulations give us valuable insight on how to tune configuration parameters, as well as on the performance gains of the improved versions. Our experience shows that CESIUM can be used to develop best effort services which adapt their quality of service according to the failures that occur during operation",1997,0, 1063,Incorporating code coverage in the reliability estimation for fault-tolerant software,"Presents a technique that uses coverage measures in reliability estimation for fault-tolerant programs, particularly N-version software. This technique exploits both coverage and time measures collected during testing phases for the individual program versions and the N-version software system for reliability prediction. The application of this technique to single-version software was presented in our previous research (IEEE 3rd Int. Symp. on Software Metrics, Berlin, Germany, March 1996). In this paper, we extend this technique and apply it on the N-version programs. The results obtained from the experiment conducted on an industrial project demonstrate that our technique significantly reduces the hazard of reliability overestimation for both single-version and multi-version fault-tolerant software systems",1997,0, 1064,Predicting dependability properties on-line,"Consider a system put into operation at time t0. During the design phase, i.e. before time t0, a model ℳ is used to tune up certain system parameters in order to satisfy some dependability constraints. Once in operation, some aspects of the system's behavior are observed during the interval [t0,t]. In this paper, we show how to integrate into model ℳ the observations made during [t0,t] in the operational phase, in order to improve the predictions on the behavior of the system after time t",1997,0, 1065,Fast replicated state machines over partitionable networks,The paper presents an implementation of replicated state machines in asynchronous distributed environments prone to node failures and network partitions. This implementation has several appealing properties: it guarantees that progress will be made whenever a majority of replicas can communicate with each other; it allows minority partitions to continue providing service for idempotent requests; it offers the application the choice between optimistic or safe message delivery. 
Performance measurements have shown that our implementation incurs low latency and achieves high throughput while providing globally consistent replicated state machine semantics,1997,0, 1066,A supervisor-based semi-centralized network surveillance scheme and the fault detection latency bound,"Network surveillance (NS) schemes facilitate fast learning by each interested fault free node in the system of the faults or repair completion events occurring in other parts of the system. Currently concrete real time NS schemes effective in distributed computer systems based on point to point network architectures are scarce. We present a semi centralized real time NS scheme effective in a variety of point to point networks, called the supervisor based NS (SNS) scheme. This scheme is highly scalable and can be implemented entirely in software using commercial off the shelf (COTS) components without requiring any special purpose hardware support. An efficient execution support for the scheme has been designed as a new extension of the DREAM kernel, a timeliness guaranteed operating system kernel model developed in the authors' laboratory. This design can be viewed as an implementation model which can be easily adapted to various commercial operating system kernels. The paper also presents an analysis of the SNS scheme on the basis of the implementation model to obtain some tight bounds on the fault detection latency",1997,0, 1067,A fail-aware membership service,We propose a new protocol that can be used to implement a partitionable membership service for timed asynchronous systems. The protocol is fail-aware in the sense that a process p knows at all times if its approximation of the set of processes in its partition is up-to-date or out-of-date. The protocol minimizes wrong suspicions of processes by giving processes a second chance to stay in the membership before they are removed. Our measurements show that the exclusion of live processes is rare and the crash detection times are good. The protocol guarantees that the memberships of two partitions never overlap,1997,0, 1068,Probabilistic verification of a synchronous round-based consensus protocol,"Consensus protocols are used in a variety of reliable distributed systems, including both safety-critical and business-critical applications. The correctness of a consensus protocol is usually shown, by making assumptions about the environment in which it executes, and then proving properties about the protocol. But proofs about a protocol's behavior are only as good as the assumptions which were made to obtain them, and violation of these assumptions can lead to unpredicted and serious consequences. We present a new approach for the probabilistic verification of synchronous round based consensus protocols. In doing so, we make stochastic assumptions about the environment in which a protocol operates, and derive probabilities of proper and non proper behavior. We thus can account for the violation of assumptions made in traditional proof techniques. To obtain the desired probabilities, the approach enumerates possible states that can be reached during an execution of the protocol, and computes the probability of achieving the desired properties for a given fault and network environment. 
We illustrate the use of this approach via the evaluation of a simple consensus protocol operating under a realistic environment which includes performance, omission, and crash failures",1997,0, 1069,Transaction reordering in replicated databases,"The paper presents a fault tolerant lazy replication protocol that ensures 1-copy serializability at a relatively low cost. Unlike eager replication approaches, our protocol enables local transaction execution and does not lead to any deadlock situation. Compared to previous lazy replication approaches, we significantly reduce the abort rate of transactions and we do not require any reconciliation procedure. Our protocol first executes transactions locally, then broadcasts a transaction certification message to all replica managers, and finally employs a certification procedure to ensure 1-copy serializability. Certification messages are broadcast using a non blocking atomic broadcast primitive, which alleviates the need for a more expensive non blocking atomic commitment algorithm. The certification procedure uses a reordering technique to reduce the probability of transaction aborts",1997,0, 1070,Preventing useless checkpoints in distributed computations,"A useless checkpoint is a local checkpoint that cannot be part of a consistent global checkpoint. The paper addresses the following important problem. Given a set of processes that take (basic) local checkpoints in an independent and unknown way, the problem is to design a communication induced checkpointing protocol that directs processes to take additional local (forced) checkpoints to ensure that no local checkpoint is useless. A general and efficient protocol answering this problem is proposed. It is shown that several existing protocols that solve the same problem are particular instances of it. The design of this general protocol is motivated by the use of communication induced checkpointing protocols in "consistent global checkpoint"-based distributed applications. Detection of stable or unstable properties, rollback recovery and determination of distributed breakpoints are examples of such applications",1997,0, 1071,Modular flow analysis for concurrent software,"Modern software systems are designed and implemented in a modular fashion by composing individual components. The advantages of early validation are widely accepted in this context, i.e., that defects in individual module designs and implementations may be detected and corrected prior to system-level validation. This is particularly true for errors related to interactions between system components. In this paper, we describe how a whole-program automated static analysis technique can be adapted to the validation of individual components, or groups of components, of sequential or concurrent software systems. This work builds off of an existing approach, FLAVERS, that uses program flow analysis to verify explicitly stated correctness properties of software systems. We illustrate our modular analysis approach and some of its benefits by describing part of a case-study with a realistic concurrent multi-component system",1997,0, 1072,Designing and verifying embedded microprocessors,"Motorola's ColdFire products are a line of microprocessors targeting embedded-system applications such as computer peripherals (disk drives, laser printers, scanners). They originated from the same design group that produced the 680X0 general purpose microprocessors, whose target market was desktop computing applications.
The ColdFire microprocessors, however, target the highly competitive embedded market, whose time-to-market and cost requirements are much more stringent. The ColdFire design team received a set of challenges quite different from those associated with the 680X0 line. They had to minimize test costs since the target selling price was an order of magnitude less than that of a desktop microprocessor. With no room for design errors, they had to put processes in place that detect errors early in the design flow and provide feedback for continuous improvement. A new methodology reduced new product cycle time to less than a year for the ColdFire products and provided improved techniques for integrating cores in new applications. In addition, it increased quality measurement capability and reduced test cost",1997,0, 1073,Economics of diagnosis,"Detecting the existence of a fault in complex systems is neither sufficient nor economical without diagnostics assisting in fault isolation and cost-effective repairs. This work attempts to put in economic terms the technical decisions involving diagnostics. It looks at the cost factors of poor diagnostics in terms of the accuracy and completeness of fault identification and the time and effort it takes to come to a final (accurate) repair decision. No Problems Found (NPF), Retest OK (RTOK), False Alarms, Cannot Duplicates (CND) and other diagnostic deficiencies can range from 30% to 60% of all repair actions. According to a 1995 survey run by the IEEE Reliability Society, the Air Transport Association (ATA) has determined that 4500 NPF events cost ATA member airlines $100 million annually. A U.S. Army study has shown that maintenance costs can be reduced by 25% if 70-80% of the items it had been repairing were to be discarded. Many of these situations can be overcome by investing in emerging technologies, such as Built-in (Self) Test (BIST) and expert diagnostic tools. The role of these tools is to minimize dependence on the skills, knowledge and experience of individuals, and thus overcome costs of inaccurate, inefficient, and incomplete diagnostics. Use of BIST can also directly reduce costs. This paper presents technical solutions and economic analyses showing to what extent such solutions provide a sufficient return on investment",1997,0, 1074,The impact of test definition standards on TPS quality,"One of the chronic problems associated with Test Program Set (TPS) development has been the lack of unambiguous standards for test definition. Elements of the definition include, but are not necessarily limited to, the questions of what constitutes a test, what aspect or failure mode of the UUT is being addressed by the test, why the test is being performed at that particular point in the overall test flow, and how the test relates to other tests within the main performance verification path or any of the diagnostic branches. Moreover, test definition standards are very closely related to the way in which unambiguous fault detection and fault isolation metrics are implemented, as well as the way in which overall (TPS) test strategy is documented.
Definitive standards for test definition will support not only demonstrable TPS quality but also TPS documentation that is both descriptive and indicative of the levels of quality that are being achieved",1997,0, 1075,Dynamic scheduling of parallelizable tasks and resource reclaiming in real-time multiprocessor systems,"Many time critical applications require predictable performance and tasks in these applications have deadlines to be met despite the presence of faults. We propose a new dynamic non preemptive scheduling algorithm for a relatively new task model called the parallelizable task model, where real time tasks can be executed concurrently on multiple processors. We use this parallelism in tasks to meet their deadlines and thus obtain better processor utilization compared to nonparallelizable task scheduling algorithms. We assume that tasks are aperiodic. Further, each task is characterized by its deadline, resource requirements, and worst case computation time on p processors, where p is the degree of task parallelization. To study the effectiveness of our algorithm, we have conducted extensive simulation studies and compared its performance with the myopic scheduling algorithm (K. Ramamritham et al., 1990). We found that the success ratio offered by our algorithm is always higher than that of the myopic algorithm for a wide variety of task parameters. Also, we propose a resource reclaiming algorithm to reclaim resources from parallelizable real time tasks when their actual computation times are less than their worst case computation times. Our parallelizable task scheduling together with its associated reclaiming offers the best guarantee ratio compared to the other algorithmic combinations",1997,0, 1076,Design and implementation of network services that provide end-to-end QoS in ATM-based real-time environments,"There has been an increasing need for highly predictable, timely, and dependable communication services with QoS guarantees on an end-to-end basis for embedded real-time applications and for multimedia-integrated distributed control. Performance objectives used in conventional networks-such as maximizing the throughput, minimizing the response time, or providing fairness to users-are not of the most important concern for both types of applications. Instead, resources should be appropriately reserved and managed to support quality of service (QoS) on an end-to-end basis, as well as application-specific tradeoffs among them. The main intent of this paper is thus to develop and demonstrate an environment-an integrated set of network resource management techniques, middleware layers, and network software-for supporting QoS on an end-to-end basis in ATM-based distributed real-time environments. We present an analytic QoS framework that incorporates a flow/QoS specification, a QoS translation/negotiation policy, an admission control facility, a route construction scheme, a resource reservation manager, a run-time message scheduler, and a traffic regulator and QoS monitor. Then, we elaborate on the algorithms and mechanisms to implement these network components.
To empirically analyze the interplay among different components, and the performance that results from careful coordination of all the components, we are currently implementing them as middleware layers in the Sun Solaris environment, and will comment in the paper on our implementation strategies",1997,0, 1077,Development of expertise in complex domains,"It is necessary to understand how human planners' performance evolves over time with increasing expertise in order to develop effective computer critics, tutors, knowledge acquisition systems, and training strategies. In this paper we present two studies of the development of expertise in complex domains: manufacturing planning and software development management planning. We had experts in each domain rank order the plans created by practitioners at various levels of experience from best to worst quality. We did this to assess whether practitioners really did gain skill with increased experience in both of these fields or whether experts were "self proclaimed". Next, we analyzed the spoken statements of the practitioners to identify the knowledge and problem solving strategies they used or lacked. We used these data to model the skill development phases in each domain. These models can be used to develop computer tools and training strategies to help practitioners achieve higher levels of competence",1997,0, 1078,Coupled fluid-thermo-structures simulation methodology for MEMS applications,"A coupled fluid-thermo-structures solution methodology is being developed for application to MEMS. State-of-the-art computational methods are used for simulation of a wide variety of MEMS types and configurations. The proposed software is intended to be powerful and accurate, yet simple and quick enough to be included in the design cycles of microfluidic devices for performance predictions and device optimization. A distinct feature of the present computational methodology is the fully implicit numerical coupling between all disciplines. A description of the various solver modules is presented. The accuracy of the current software is validated on benchmark-quality experiments of flow in a microchannel and a fluid-structure interaction case. The versatility is demonstrated on two practical microfluidic devices",1997,0, 1079,What makes inspections work?,"Software inspections are considered a cheap and effective way to detect and remove defects from software. Still, many people believe that the process can be improved and have thus proposed alternative methods. We reviewed many of these proposals and detected a disturbing pattern: too often competing methods are based on conflicting arguments. The existence of these competing views indicates a serious problem: we have yet to identify the fundamental drivers of inspection costs and benefits. We first built a taxonomy of the potential drivers of inspection costs and benefits. Then we conducted a family of experiments to evaluate their effects. We chose effort and interval, measured as calendar time to completion, as our primary measures of cost. Defects discovered per thousand lines of code served as our primary measure of inspection benefits",1997,0, 1080,The impact of costs of misclassification on software quality modeling,"A software quality model can make timely predictions of the class of a module, such as not fault prone or fault prone. These enable one to improve software development processes by targeting reliability improvement techniques more effectively and efficiently.
Published software quality classification models generally minimize the number of misclassifications. The contribution of the paper is empirical evidence, supported by theoretical considerations, that such models can significantly benefit from minimizing the expected cost of misclassifications, rather than just the number of misclassifications. This is necessary when misclassification costs for not fault prone modules are quite different from those of fault prone modules. We illustrate the principles with a case study using nonparametric discriminant analysis. The case study examined a large subsystem of the Joint Surveillance Target Attack Radar System, JS-TARS, which is an embedded, real time, military application. Measures of the process history of each module were independent variables. Models with equal costs of misclassification were unacceptable, due to high misclassification rates for fault prone modules, but cost weighted models had acceptable, balanced misclassification rates",1997,0, 1081,Predicting fault detection effectiveness,"Regression methods are used to model software fault detection effectiveness in terms of several product and testing process measures. The relative importance of these product/process measures for predicting fault detection effectiveness is assessed for a specific data set. A substantial family of models is considered, specifically, the family of quadratic response surface models with two way interaction. Model selection is based on "leave one out at a time" cross validation using the predicted residual sum of squares (PRESS) criterion. Prediction intervals for fault detection effectiveness are used to generate prediction intervals for the number of residual faults conditioned on the observed number of discovered faults. High levels of assurance about measures like fault detection effectiveness (residual faults) require more than just high (low) predicted values; they also require that the prediction intervals have high lower (low upper) bounds",1997,0, 1082,Testability measurements for data flow designs,The paper focuses on data flow designs. It presents a testability measurement based on the controllability/observability pair of attributes. A case study provided by AEROSPATIALE illustrates the testability analysis of an embedded data flow design. Applying such an analysis during the specification stage allows detection of weaknesses and appraisal of improvements in terms of testability,1997,0, 1083,Some misconceptions about lines of code,"Source lines of code (SLOC) is perhaps the oldest of software metrics, and still a benchmark for evaluating new ones. Despite the extensive experience with the SLOC metric, there are still a number of misconceptions about it. The paper addresses three of them: (1) that the format of SLOC is relevant to how to properly count it (a simple experiment shows that, in fact, it does not matter), (2) that SLOC is most useful as a predictor of software quality (in fact it is most useful as a covariate of other predictors), and (3) that there is an important inverse relationship between defect density and code size (in fact, this is an arithmetic artifact of plotting bugs-per-SLOC against SLOC)",1997,0, 1084,Assessing feedback of measurement data: relating Schlumberger RPS practice to learning theory,"Schlumberger RPS successfully applies software measurement to support their software development projects.
It is proposed that the success of their measurement practices is mainly based on the organization of the interpretation process. This interpretation of the measurement data by the project team members is performed in so-called `feedback sessions'. Many researchers identify the feedback process of measurement data as crucial to the success of a quality improvement program. However, few guidelines exist about the organization of feedback sessions. For instance, with what frequency should feedback sessions be held, how much information should be presented in a single session, and what amount of user involvement is advisable? Within the Schlumberger RPS search to improve feedback sessions, the authors explored learning theories to provide guidelines for these types of questions. After all, what is feedback more than learning?",1997,0, 1085,Some conservative stopping rules for the operational testing of safety critical software,"Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of the reliability of software. In the case of safety critical software it is common to demand that all known faults are removed. This means that if there is a failure during the operational testing, the offending fault must be identified and removed. Thus an operational test for safety critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the numbers of test cases (or time periods) required for a test, when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel, and be required to pass exactly the same test as originally specified. The reasoning here claims to be conservative, in as much as no credit is given for any previous failure-free operation prior to the failure that terminated the test. We show that, in fact, this is not a conservative approach in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We define a particular form of conservatism that seems desirable on intuitive grounds, and show that the stopping rules that exhibit this conservatism are also precisely the ones that seem preferable on other grounds",1997,0, 1086,Comparison of conventional approaches and soft-computing approaches for software quality prediction,"Managing software development and maintenance projects requires early knowledge about quality and effort needed for achieving a necessary quality level. Quality prediction models can identify outlying software components that might cause potential quality problems. Quality prediction is based on experience with similar predecessor projects constructing a relationship between the output-usually the number of errors-and some kind of input-here we use complexity metrics-to the quality of a software development project. Two approaches are presented to build quality prediction models: multilinear discriminant analysis as one example for conventional approaches and fuzzy expert-systems generated by genetic algorithms.
Using the capability of genetic algorithms, the fuzzy rules can be automatically generated from example data to reduce the cost and improve the accuracy. The generated quality model-with respect to changes-provides both quality of fit (according to past data) and predictive accuracy (according to ongoing projects). The comparison of the approaches gives an answer on the effectiveness and the efficiency of a soft-computing approach",1997,0, 1087,Fault analysis and performance monitoring in prototyping machine vision systems,"This paper introduces an innovative approach in the development process for machine vision systems. We take all system requirements into consideration, not only digital image processing but also process synchronization and process control. Strict implementation conventions for hardware and software design allow integrated performance monitoring and fault analysis. Even in a prototype stage, various algorithms and hardware components can be assessed directly in the industrial environment. After presenting the system concept in general, the implementation process is discussed for a sample application",1997,0, 1088,RF induction and analog junction techniques for finding opens,"In response to the need for fast and simple methods for detecting and diagnosing open pins on SMT devices, board test manufacturers have introduced power-off, vectorless test techniques that provide reasonable fault coverage with little programming effort. This paper describes two techniques: the RF induction and the analog junction technique. The RF technique applies a 200-500kHz signal to a spiral loop antenna located over the device. This antenna or "inducer" produces an AC signal at the device pin under test which the in-circuit tester measures. The technique requires one inducer for each device to be tested. The test system software automatically learns the characteristics of the device being tested. The users need only describe the connections between the device and the tester, identify the device's power and ground pins, and which inducer is mounted over the part. The analog junction technique uses the device protection diodes. It requires no fixture hardware, relying only on the bed-of-nails contact with the device pins. The analog junction technique applies voltage to one pin of the device, causing current to flow between the pin and the device ground lead. Voltage is then applied to another pin on the device, causing current to flow between this pin and the ground lead. Both the analog junction and RF induction techniques are effective means for detecting opens on digital devices",1997,0, 1089,Application and analysis of IDDQ diagnostic software,"A current disadvantage of IDDq testing is the lack of software-based diagnostic tools that enable IC vendors to create a large database of defects uniquely detected with this test method. We present a methodology for performing defect localization based upon IDDq test information (only). Using this technique, fault localization can be completed within minutes (e.g. <5 minutes) after IC testing is complete. This technique supports multiple fault models and has been successfully applied to a large number of samples-including ones that have been verified through failure analysis.
Data is presented related to key issues such as diagnostic resolution, hardware-to-fault model correlation, diagnostic current thresholds, and the diagnosability of various defect types",1997,0, 1090,Fault handling mechanisms in the RETHER protocol,"RETHER is a software-driven token-passing protocol designed to provide bandwidth guarantee for real-time multimedia applications over off-the-shelf Ethernet hardware. To our knowledge, it is the first all-software and fully-implemented real-time protocol on top of commodity Ethernet hardware. Because token passing is used to regulate network accesses, node crashes and/or packet corruption may lead to token loss, thus potentially shutting down the network completely. This paper describes the fault handling mechanisms built into RETHER to address this problem in both a single-segment and multi-segment Ethernet environment. The emphasis of the paper is on the uniqueness of the target application context and the rationale of chosen solutions. We present the performance tradeoffs of the fault detection/recovery schemes and the implementation experiences of the prototype",1997,0, 1091,Quality assurance certification: adoption by Australian software developers and its association with capability maturity,"Many Australian developers have committed resources to achieve certification in response to government purchasing policies favouring standards-certified-suppliers, but cynics suggest the `piece of paper' does little to improve the processes and subsequent product. This study details a research project undertaken to assess the adoption of QA certification by Australian software developers. Primary data for the study were gathered from a survey of 1,000 Australian software developers, and secondary data from the JAS-ANZ register of certified organisations. The survey design was used to determine the extent of adoption of QA certification by Australian developers, their organisational characteristics, capability maturity and perceptions regarding the value of QA certification. The SEI CMM questionnaire was used as the basis of the capability maturity measurement instrument, enabling an international comparison of CMM levels of developers in Australia, Hong Kong and Singapore",1997,0, 1092,Criticality prediction models using SDL metrics set,"This paper focuses on the experiences gained from defining design metrics for SDL and comparing three prediction models for identifying the most fault-prone entities using the defined metrics. Three sets of design complexity metrics for SDL are defined according to two design phases and SDL entity types. Two neural net based prediction models and a model using the hybrid metrics are implemented and compared by a simulation. Though the backpropagation model shows the best prediction results, the selection method in hybrid complexity order is expected to have similar performance with some supports. Also two hybrid metric forms (weighted sum and weighted multiplication) are compared and it is shown that two metric forms can be used interchangeably for ordinal purpose",1997,0, 1093,Fault injection for the masses,"The key technology that the author would like to see adopted by the masses is a family of software fault injection algorithms that can predict where to concentrate testing. From a novelty standpoint, these algorithms were (and still are) unique among other methods of performing fault injection.
The author concedes that the algorithms are computational, but the results can provide unequaled information about how "bad things" propagate through systems. Because of that, he thinks fault injection methods are valuable to anyone responsible for software quality, including those working in one-person independent software vendors (ISVs) or even the largest corporations",1997,0, 1094,Identification of high frequency transformer equivalent circuit using Matlab from frequency domain data,"High frequency modelling is essential in the design of power transformers, to study impulse voltage distribution, winding integrity and insulation diagnosis. In this paper, a PC based, fully automated technique for identifying parameters of a high frequency transformer equivalent circuit is proposed. At each discrete measurement point, voltage ratio and phase between input and output is calculated to obtain the frequency response. A parametric system identification technique is utilised to determine the coefficients of an appropriate transfer function to model the measured frequency response from 50 Hz to 1 MHz, divided into low, medium and high frequency ranges. The proposed technique is simple to implement, fully computerised and avoids time consuming measurements reported earlier. Test results on several transformers indicate that the method is highly reliable, consistent and is sensitive to transformer winding faults",1997,0, 1095,Sequential test generation based on circuit pseudo-transformation,"The test generation problem for a sequential circuit capable of generating tests with combinational test generation complexity can be reduced to that for the combinational circuit formed by replacing each FF in the sequential circuit by a wire. In this paper, we consider an application of this approach to general sequential circuits. We propose a test generation method using circuit pseudo-transformation technique: given a sequential circuit, we extract a subcircuit with balanced structure which is capable of generating tests with combinational test generation complexity, replace each FF in the subcircuit by wire, generate test sequences for the transformed sequential circuit, and finally obtain test sequences for the original sequential circuit. We also estimate the effectiveness of the proposed method by experiment with ISCAS'89 benchmark circuits",1997,0, 1096,TEMPLATES: a test generation procedure for synchronous sequential circuits,"We develop the basic definitions and procedures for a test generation concept referred to as templates that magnifies the effectiveness of test generation by taking advantage of the fact that many faults have "similar" test sequences. Once a template is generated, several test sequences to detect different faults are derived from it at a reduced complexity compared to the complexity of test generation",1997,0, 1097,Estimating the number of faults using simulator based on generalized stochastic Petri-net model,"In order to manage software projects quantitatively, we have presented a new model for software projects based on a generalized stochastic Petri-net model which can take the influence of human factors into account, and we have already developed a software project simulator based on the GSPN model. This paper proposes methods for calculating model parameters in the new model and estimating the number of faults in the design and debug phases of the software process.
Then we present an experimental evaluation of the proposed method using data from an actual software development project at a certain company. As a result of the case study, we confirmed its effectiveness with respect to estimating the number of faults in the software process",1997,0, 1098,Can we rely on COTS microkernels for building fault-tolerant systems?,"This paper addresses the use of COTS microkernels in fault-tolerant, and, to some extent, safety-critical systems. The main issue is to assess the behavior of such components, upon which application software relies, in the presence of faults. Using fault injection, it is possible to classify the behavior of the functional primitives. From the results obtained fault containment mechanisms can be provided as a new API to complement the basic detection mechanisms of the microkernel. Some preliminary experiments with the Chorus microkernel are also reported",1997,0, 1099,Automated network fault management,"Future military communication networks will have a mixture of backbone terrestrial, satellite and wireless terrestrial networks. The speeds of these networks vary and they are very heterogeneous. As networks become faster, it is not enough to do reactive fault management. Our approach combines proactive and reactive fault management. Proactive fault management is implemented by dynamic and adaptive routing. Reactive fault management is implemented by a combination of a neural network and an expert system. The system has been developed for the X.25 protocol. Several fault scenarios were modeled and included in the study: reduced switch capacity, increased packet generation rate of a certain application, disabled switch in the X.25 cloud, and disabled links. We also modeled the occurrence of alarms including the severity of the problem, location of the event and a threshold. To detect and identify faults we use both numerical data associated with the performance objects (attributes) in the MIB as well as SNMP traps (alarms). Simulation experiments have been performed in order to understand the convergence of the algorithms, the timing of the neural network involved and the G2/NeurOn-Line software environment and MIB design",1997,0, 1100,Framework of a software reliability engineering tool,"The usage of commercial off-the-shelf (COTS) software modules in a large, complex software development project has well-known advantages (e.g. reduced development time and reduced cost). However, many such designs remain ill-understood in terms of end-to-end, overall reliability and assurance of the software system. Since the COTS components may not have been designed with assurance attributes in mind, a COTS module integrated system may fail to provide high end-to-end assurance. In applications that require timing, reliability and security guarantees, the usage of COTS software components can therefore mandate an analysis of the assurance attributes. The users of such systems may desire to predict the effect of the occurrence of an event on the reliability of other events in the system, and in general carry out an end-to-end analysis of the software system assurance. In this paper, we evaluate the causality, reliability and the overall performance aspects of large-scale software using a reverse engineering approach. The proposed code analysis approach can evaluate whether the COTS software meets the user-specified individual/group reliability constraints or not.
In the case of reliability violation, our proposed approach can identify the modules of the software that may require re-design. The underlying idea is to merge event probabilities, event dependencies and fault propagation to calculate the occurrence probabilities of every event in the system",1997,0, 1101,Process measures for predicting software quality,"Software reliability is essential for tactical military systems, such as the Joint Surveillance Target Attack Radar System (JSTARS). It is an embedded, real-time military application, which performs real-time detection, location, classification and tracking of moving and fixed objects on the ground. A software quality model can make timely predictions of reliability indicators. These enable one to improve software development processes by targeting reliability improvement techniques more effectively and efficiently. This paper presents a case study of a large subsystem of JSTARS to improve integration and testing. The dependent variable of a logistic regression model was the class of a module: either fault-prone or not. Measures of the process history of each module were the independent variables. The case study supports our hypothesis that the likelihood of discovering additional faults during integration and testing can be usefully modeled as a function of the module history prior to integration. This history is readily available by combining data from the project's configuration management system and problem-reporting system",1997,0, 1102,Biometry: the characterisation of chestnut-tree leaves using computer vision,"The Department of Biology of the University of Tras-os-Montes e Alto Douro analyses every year a large number of chestnut-tree leaves, in order to measure their biometric characteristics, namely the leaf area, dimensions of the enclosing rectangle, number of teeth and secondary veins. Because for a human operator this is a time consuming and error prone task, a computer vision system has been set up to improve the process. The task of measuring the leaf presents no major problems, while counting the number of teeth and secondary veins has proved to be complex at the resolutions used. This paper describes the state of the project, going into some detail on the algorithms. A complete system has been assembled, based on a PC connected to an imaging system. A windows-based application has been developed, which integrates the control of the operations to grab, store and analyse images of different varieties of chestnut-tree leaves in an organised way. Because the accuracy of the computer vision algorithms used is not sufficient for the system to be completely autonomous, a semi-automatic solution has been adopted. The operator validates or corrects the results of the automatic analysis. This solution leads to a significant improvement in the performance of the human operator, both in terms of speed and quality of the results",1997,0, 1103,Measurements and feature extraction in high-speed sewing,"This paper presents the development of signal acquisition and analysis equipment for the measurement of sewing parameters on a high-speed overlock sewing machine. The objective of the work was to provide investigators of the textile area with hardware and software to ease the investigation on the dynamical behaviour of the following sewing parameters: force on needle bar; and presser-foot and thread tensions. 
It should also have enough flexibility to incorporate further signal entries, and provide the user with tools to ease the gathering and analysis of tests on different materials, sewing speeds and machine configurations and settings. Outputs for actuators to implement closed-loop control strategies are also available. The paper presents an overview of the system, which is a development of earlier hardware and software and focuses on the results concerning the measurement of the force on the needle-bar; this parameter is important to investigate needle penetration force in fabrics during sewing. Several signal-processing algorithms, that aim to automate the detection of some characteristics, are described. The purpose of this system, that implements some novel strategies, is to develop an add-on kit to apply to different sewing machines, but presently it has been implemented on a PC as a quality assessment system which will be used by textile technicians to build a quality database",1997,0, 1104,QUEM: an achievement test for knowledge-based systems,"This paper describes the Quality and Experience Metric (QUEM), a method for estimating the skill level of a knowledge based system based on the quality of the solutions it produces. It allows one to assess how many years of experience the system would be judged to have if it were a human by providing a quantitative measure of the system's overall competence. QUEM can be viewed as a type of achievement or job placement test administered to knowledge based systems to help system designers determine how the system should be used and by what level of user. To apply QUEM, a set of subjects, experienced judges, and problems must be identified. The subjects should have a broad range of experience levels. Subjects and the knowledge based system are asked to solve the problems; and judges are asked to rank order all solutions, from worst quality to best. The data from the subjects is used to construct a skill function relating experience to solution quality, and confidence bands showing the variability in performance. The system's quality ranking is then plugged into the skill function to produce an estimate of the system's experience level. QUEM can be used to gauge the experience level of an individual system, to compare two systems, or to compare a system to its intended users. This represents an important advance in providing quantitative measures of overall performance that can be applied to a broad range of systems",1997,0, 1105,Parametric optimization of measuring systems according to the joint error criterion,"This paper presents a method and exemplary results of the modeling and design of measuring systems. The minimum of designed system errors is achieved using parametric optimization methods of a measuring system model. A "structural method" of measuring system modeling is used. The properties of the equipment used, as well as data processing algorithms, are taken into account",1997,0, 1106,The influence of process physics on the MOS device performance the case of the reverse short channel effect,"The goal of the present review is to show how material experiments using simple structures combined with process simulation can give sufficient insight to complex device phenomena that are critical for the deep submicron MOS device performance. Specifically we shall first present experimental and simulation results on the 2-D distribution of silicon interstitial both in Si and Silicon-On-Insulator.
The conclusion drawn from these results will then drive our device experiments and simulations. We shall show that as predicted by the above experiments, NMOS SOI devices exhibit a reduction of the Reverse Short Channel effect (RSCE). Coupled process-device simulation reveals the influence of the fundamental point defect properties on MOS device performance",1997,0, 1107,Network enabled solvers for scientific computing using the NetSolve system,"Agent-based computing is increasingly regarded as an elegant and efficient way of providing access to computational resources. Several metacomputing research projects are using intelligent agents to manage a resource space and to map user computation to these resources in an optimal fashion. Such a project is NetSolve, developed at the University of Tennessee and Oak Ridge National Laboratory. NetSolve provides the user with a variety of interfaces that afford direct access to preinstalled, freely available numerical libraries. These libraries are embedded in computational servers. New numerical functionalities can be integrated easily into the servers by a specific framework. The NetSolve agent manages the coherency of the computational servers. It also uses predictions about the network and processor performances to assign user requests to the most suitable servers. This article reviews some of the basic concepts in agent-based design, discusses the NetSolve project and how its agent enhances flexibility and performance, and provides examples of other research efforts. Also discussed are future directions in agent-based computing in general and in NetSolve in particular",1997,0, 1108,An automatic approach to object-oriented software testing and metrics for C++ inheritance hierarchies,"In this paper, we propose a concept named unit repeated inheritance (URI) to realize object-oriented testing and object-oriented metrics. The approach describes an inheritance level technique (ILT) as a guide to detect the software errors of the inheritance hierarchy and measure the software complexity of the inheritance hierarchy. The measurement of inheritance metrics and some testing criteria are formed based on the proposed mechanism. Thus, we use Lex and Yacc to construct a windowing tool which is used in conjunction with a conventional C++ programming environment to assist a programmer to analyze, test, and measure his/her C++ programs",1997,0, 1109,Translating process changes from one development environment to another,Summary form only given. Translating the results from one development environment to another is not an easy task. The authors present practical examples of how translating process improvement results might be done and the considerations involved. They discuss how tools may be applied and how the differences between the two environments and their interaction with the process change may be assessed,1997,0, 1110,A software fault detection and recovery in CDMA systems,"This paper describes an approach to software fault detection and recovery in CDMA systems, which is based on the sanity monitoring and auditing techniques. Sanity monitoring is a fault tolerance technique in a distributed system. It detects and recovers from software faults. We discuss how to identify performance classes and fault types for processes, and how to locate and isolate faults. An audit program detects and recovers from data errors in communication systems such as telephone switches. It is adapted to distributed systems. 
We discuss data error detection and recovery methods for CDMA systems that also use OS primitives",1997,0, 1111,SEMATECH projects in advanced process control,Scatterometer measurements of critical dimensions paralleled those of atomic force microscopy down to 0.14 μm. Application of a run to run controller to chemical mechanical processes demonstrated control to target for patterned wafers and improvements in CpK of 150% for epitaxial processes. Benchmarking of commercial software for fault detection of plasma etchers demonstrated feasibility in identifying faults during operation,1997,0, 1112,Statistical study of lightning induced overvoltages on distribution power lines and risk analysis,"Owing to the increasing sensitivity of low voltage electrical equipment, it is necessary to be able to assess the magnitude of overvoltages caused by lightning. These are the most problematic overvoltages and, for the most part, they travel through power distribution lines. A risk analysis must be conducted to assist the decision to install a protective device on a given installation. In the French approach, the first stage of this analysis is the evaluation of the level of exposure to lightning induced overvoltages. The second stage consists in allowing for a criterion to evaluate the consequences of these electrical disturbances (destruction or unavailability of equipment) for a given level of exposure and in a given situation. This decision aid method is empirical. It is based on the main characteristics of the electric power supply and on the level of lightning strike density in the relevant area. An analysis of the same type is proposed on a complementary basis by two IEC committees. Finally, a calculation tool (Anastasia) was developed in order to allow the accurate quantification of the level of overvoltages on an installation. It is based on modelling of the coupling of the lightning channel and a power distribution line, allowing overvoltages to be calculated with the EMTP program. The Monte Carlo method is then used, in combination with simulations, to obtain a statistical distribution of the stresses at a given point. A comparison was made between the results obtained with Anastasia and the measurements conducted on an installation in the context of a campaign organized for several years by France Telecom.",1997,0, 1113,Exploiting delayed synchronization arrivals in light-weight data parallelism,"SPMD (single program multiple data) models and other traditional models of data parallelism provide parallelism at the processor-level. Barrier synchronization is defined at the level of processors where, when a processor arrives at the barrier point early and waits for others to arrive, no other useful work is done on that processor. Program restructuring is one way of minimizing such latencies. However, such programs tend to be error-prone and less portable. The author discusses how multithreading can be used in data parallelism to mask delays due to application irregularity or processor load imbalance. The discussion is in the context of Coir, the object-oriented runtime system for parallelism. The discussion concentrates on shared memory systems. The sample application is an LU factorization algorithm for skyline sparse matrices. 
The author discusses performance results on the IBM PowerPC-based symmetric multiprocessor system",1997,0, 1114,Knowledge based technique to enhance the performance of neural network based motor fault detectors,"The monitoring and fault detection of motors is a very important and difficult topic. Neural networks can often be trained to recognize motor faults by examining the performance of certain motor measurements. Unfortunately, several weaknesses exist for neural networks when used in this application. Examples of these shortcomings are that they can take a considerable time to train, often have less than desirable accuracy and generally are very dependent on the choice of training data. Although neural networks can recognize the nonlinear relationships that exist between motor measurements and motor faults, all aspects of the neural network fault detector performance can be improved if appropriate heuristics can be used to preprocess the input-output training relationship. This paper presents a novel approach of applying knowledge based modeling techniques to preprocess the training data and significantly improve the overall performance of the neural network based motor fault detector",1997,0, 1115,An intelligence diagnostic system for reciprocating machine,"A vibration diagnostic system for a reciprocating machine has been developed. Its main function is to monitor operation state and to diagnose faults for the reciprocating machine. The software has two parts, one in a diagnostic machine and the other in a PC. It can perform diagnosis on the spot, realize self learning and adapt to many types of machine. It takes main factor autoregressive model as a character extractor and the ANN model as classifier. With the combination of the two technologies, the system can perform data sampling, signal analysis, character extracting and fault recognition automatically. It reduces factors involved by human beings in a diagnosis process, so the accuracy of the diagnosis and the level of intelligence and automation has been raised",1997,0, 1116,Dynamic quality of session control of real-time video multicast,"The paper presents a framework for dynamic control of the quality of real time video multicast applications. In such applications, there is a need for introducing the concept of "Quality of Session (Qoss)", beyond the Qos received by individual receivers. The Qoss can be best determined by the end application, depending on the application semantics and the actual Qos seen by each receiver. The control of Qoss is achieved by employing a Qoss monitoring mechanism at the application level and a sender receiver combination control mechanism to react to bottlenecks in the network or end systems. Each receiver performs local control, measures the stream quality offered to the end user and feeds back the measurement to the sender using extended RTP receiver reports. According to the feedbacks and multiviewer synchronization requirement, the sender assesses the overall Qoss and adjusts the encoding and/or sending rate. The mechanism can be further enhanced if layered codecs are adopted.
Policies of the Qos measurement and Qoss assessment were defined, algorithms of the source rate control were proposed, and the mechanisms of adaptive video encoding are described",1997,0, 1117,Software configuration management for the 21st century,"The increasing complexity of both software systems and the environments in which they are produced is pressuring projects to improve the development process using innovative methods. The role of software configuration management (SCM) systems, policies, and procedures that help control and manage software development environments is being stretched beyond the conceptual boundaries it has had for the last decade. One of the key enablers of producing higher quality software is a better software development process. The SCM system must instantiate a quality process, allow tracking and monitoring of process metrics, and provide mechanisms for tailoring and continual improvement of the software development process. More than a dozen SCM systems are now available, each one having a distinct architecture and set of core functionalities. Currently, no single system provides all the key SCM functions in the best form. Thus, a project must assess its real needs and choose the right SCM system to meet its software development challenges. This paper focuses on the characteristics of SCM systems, the SCM challenges for Lucent Technologies, the principal SCM systems being used within the company, and the issues of choosing and successfully implementing the best SCM systems.",1997,0, 1118,Fault simulation with PLDs,"One measure of the effectiveness of fault tolerance in a circuit is fault coverage, the ratio of the number of faults detectable or correctable to the number of possible faults. Fault simulation can provide an estimate of the fault coverage, but is often prohibitively slow if performed in software. Simulating a faulty circuit using programmable logic devices can speed up the process. Two basic approaches to fault simulation using PLDs are discussed. Fault simulation for ripple carry adders is carried out in both a PLD and wholly in software and the performance compared.",1997,0, 1119,Requirements Modeling,"
",1997,0, 1120,Validation of the coupling dependency metric as a predictor of run-time failures and maintenance measures,The coupling dependency metric (CDM) is a successful design quality metric. Here we apply it to four case studies: run-time failure data for a COBOL registration system; maintenance data for a C text-processing utility; maintenance data for a C++ patient collaborative care system; and maintenance data for a Java electronic file transfer facility. CDM outperformed a wide variety of competing metrics in predicting run-time failures and a number of different maintenance measures. These results imply that coupling metrics may be good predictors of levels of interaction within a software product,1998,1, 1121,Predicting fault-prone classes with design measures in object-oriented systems,"The paper aims at empirically exploring the relationships between existing object oriented coupling, cohesion, and inheritance measures and the probability of fault detection in system classes during testing. The underlying goal of such a study is to better understand the relationship between existing product measurement in OO systems and the quality of the software developed. It is shown that by using a subset of existing measures, accurate models can be built to predict in which classes most of the faults are likely to lie in. By inspecting 48% of the classes, it is possible to find 95% of the faults. Besides the size of classes, the frequency of method invocations and the depth of inheritance hierarchies seem to be the main driving factors of fault proneness",1998,1, 1122,A comprehensive empirical validation of design measures for object-oriented systems,"This paper aims at empirically exploring the relationships between existing object-oriented coupling, cohesion, and inheritance measures and the probability of fault detection in system classes during testing. The underlying goal of such a study is to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. Results show that many of the measures capture similar dimensions in the data set, thus reflecting the fact that many of them are based on similar principles and hypotheses. Besides the size of classes, the frequency of method invocations and the depth of inheritance hierarchies seem to be the main driving factors of fault-proneness",1998,1, 1123,Code churn: a measure for estimating the impact of code change,"This study presents a methodology that will produce a viable fault surrogate. The focus of the effort is on the precise measurement of software development process and product outcomes. Tools and processes for the static measurement of the source code have been installed and made operational in a large embedded software system. Source code measurements have been gathered unobtrusively for each build in the software evolution process. The measurements are synthesized to obtain the fault surrogate. The complexity of sequential builds is compared and a new measure, code churn, is calculated. This paper demonstrates the effectiveness of code complexity churn by validating it against the testing problem reports",1998,1, 1124,Software development: Processes and performance,"This paper presents data that describe the effects on software development performance due to both the production methods of software development and the social processes of how software developers work together. 
Data from 40 software development teams at one site that produces commercial software are used to assess the effects of production methods and social processes on both software product quality and team performance. Findings indicate that production methods, such as the use of software methodologies and automated development tools, provide no explanation for the variance in either software product quality or team performance. Social processes, such as the level of informal coordination and communication, the ability to resolve intragroup conflicts, and the degree of supportiveness among the team members, can account for 25 percent of the variations in software product quality. These findings suggest two paradoxes for practice: (1) that teams of software developers are brought together to create variability and production methods are used to reduce variability, and (2) that team-level social processes may be a better predictor of software development team performance than are production methods. These findings also suggest that factors such as other social actions or individual-level differences must account for the large and unexplained variations in team performance.",1998,0, 1125,An empirical study on identifying fault-prone module in large switching system,Software complexity metrics have been shown to be very closely related to the distribution of faults in software. This paper focuses on identification of fault-prone software modules based on discriminant analysis and classification for software developed in the CHILL language. We define software complexity metrics for the CHILL language. The technique is successful in classifying software modules with a relatively low error rate. This procedure proves to be a very useful method for detecting software modules with high fault potential,1998,0, 1126,A study on the design of large-scale mobile recording and tracking systems,"A mobile inventory control system involves a large number of highly mobile and dispersed databases in the form of RFID (radio frequency ID) tag devices. Radio tags have limited communication range, bandwidth, computing power and storage. We examine some critical impacts of these characteristics on design of facilities and techniques required for supporting integrated mobile distributed databases and computing environments. We analyze the performance of RF tag protocols and database mechanisms using process oriented discrete event simulation tools. We present the results of experiments on three simulation models for RF tag protocols: slotted ALOHA/TDMA, Id arbitration and CDMA. The performance results show that packet direct sequence (DS) CDMA gives superior performance compared to slotted ALOHA/TDMA and ID Arbitration. The main performance result of experiments on mobile databases shows that optimistic concurrency control performs better for situations where tag transaction probability and write percentage is high. This scenario is found in many active in-transit visibility recording and tracking systems",1998,0, 1127,Using GSSs to support error detection in software specifications,"Fagan inspections can be used to find errors in documents used during systems development. In the practice of Fagan inspections it has been found that Group Support Systems (GSSs) can significantly enhance error detection. This paper describes our findings on the use of a GSS by Fagan inspection teams in an experimental set-up.
In this study, 24 students and 24 managers participated; they looked for defects in a standardized four-page document. In the preparation phase the participants, searching individually without GSS support, found within one hour between 12 and 40 defects. Prior to the second (or `logging') phase of the inspection we formed 16 groups, each consisting of three students or three managers. Eight groups were selected to work with GSS support, and eight groups without GSS but with a facilitator. We found that only 3 to 9 new defects were found in the second phase. The performance of the GSS groups did not differ significantly from the non-GSS groups, but the GSS participants evaluated their sessions significantly lower than the non-GSS participants",1998,0, 1128,Virtual disassembly-a software tool for developing product dismantling and maintenance systems,"A product is typically designed in the assembled configuration. However, CAD systems do not analyze the product design for disassemblability, a significant part of maintenance. This paper presents a new methodology for performing disassembly evaluation of CAD models of designs for maintenance. This methodology involves analyzing the product for disassemblability utilizing several disassembly techniques, such as selective disassembly and destructive disassembly, to assess: (i) the ease of product disassembly for maintenance, (ii) disassembly sequencing to maintain a selected set of components and (iii) disassembly cost for maintenance. The methodology utilizes a metric for quantitative assessment of the product in order to arrive at an acceptable design and to compare alternate designs for maintenance. Moreover, the disassembly methodology presented in this paper can serve as a basis for identifying an efficient product design for maintenance",1998,0, 1129,"Sample implementation of the Littlewood holistic model for assessing software quality, safety and reliability","Software quality, safety and reliability metrics should be collected, integrated, and analyzed throughout the development lifecycle so that corrective and preventive action can be taken in a timely and cost effective manner. It is too late to wait until the testing phase to collect and assess software quality information, particularly for mission critical systems. It is inadequate and can be misleading to only use the results obtained from testing to make a software safety or reliability assessment. To remedy this situation a holistic model which captures, integrates and analyzes product, process, and people/resource (P3R) metrics, as recommended by B. Littlewood (1993), is needed. This paper defines one such possible implementation",1998,0, 1130,Neural-network techniques for software-quality evaluation,"Software quality modeling involves identifying fault-prone modules and predicting the number of errors in the early stages of the software development life cycle. This paper investigates the viability of several neural network techniques for software quality evaluation (SQE). We have implemented a principal component analysis technique (used in SQE) with two different neural network training rules, and have classified software modules as fault-prone or nonfault-prone using software complexity metric data. 
Our results reveal that neural network techniques provide a good management tool in a software engineering environment",1998,0, 1131,MEADEP and its applications in evaluating dependability for air traffic control systems,"MEADEP (measure dependability) is a user-friendly dependability evaluation tool for measurement-based analysis of computing systems including both hardware and software. Features of MEADEP are: a data processor for converting data in various formats (records with a number of fields stored in a commercial database format) to the MEADEP format, a statistical analysis module for graphical data presentation and parameter estimation, a graphical modeling interface for constructing reliability block and Markov diagrams, and a model solution module for availability/reliability calculation with graphical parametric analysis. Use of the tool on failure data from measurements can provide quantitative assessments of dependability for critical systems, while greatly reducing requirements for specialized skills in data processing, analysis, and modeling from the user. MEADEP has been applied to evaluate dependability for several air traffic control systems (ATC) and results produced by MEADEP have provided valuable feedback to the program management of these critical systems",1998,0, 1132,High-availability transaction processing: practical experience in availability modeling and analysis,"High availability is driven by two types of factors: customer site factors such as the frequency of software and hardware upgrades, and system factors such as failure and repair rates, most often associated with mathematical models of reliability and availability. In this paper we describe several tools to assess the effects of these factors on the availability of high availability transaction processing (HATP) systems and make the expected level of performance more understandable to customers who purchase such systems, sales people who sell them, and managers who must make decisions based on the system availability. We employ a survey methodology to identify the key customer site factors that drive availability; we illustrate an accurate but greatly simplified technique for modeling system availability based on the internal system factors; and we apply statistical design of experiments and control chart methodologies to better understand the variability inherent in the system performance",1998,0, 1133,Integrated design method for probabilistic design,"This paper presents a rational design approach known as the integrated design method (IDM). By employing cost relationships within a probabilistic methodology, as is done in IDM, engineers have a new tool to objectively assess product cost, performance and reliability. The benefits of the method are: lower product cost; superior product performance; reduced product development time; and reduced product warranty and liability costs. To harness the full potential of IDM requires that users have an easy to use tool that is capable of performing all required analysis quickly, accurately, and with minimal user interaction. ProFORM is computational software, which focuses on integrating all the elements of the probabilistic analysis into a comprehensive package that is easily used and understood. Unlike other packages, ProFORM is integrated with the existing design tools in a seamless fashion. 
The numerical example at the end of the paper clearly demonstrates the advantages of IDM over the conventional design methods",1998,0, 1134,Measuring the effectiveness of a structured methodology: a comparative analysis,This study evaluates a vendor supplied structured methodology which was implemented in a large manufacturing organization. Thirty projects using the methodology are measured for efficiency and effectiveness of software maintenance performance. The performance of these projects is evaluated using both objective metrics and subjective measures taken from stakeholders of the software applications. The performance results of these thirty systems are then contrasted to the performance results of thirty five applications in the same organization that do not use this structured methodology and to one hundred and sixteen applications across eleven other organizations. All of these applications had been in operation at least six months when they were studied. The software applications developed and maintained using the structured methodology were found to have some significant cost and quality performance gains over the applications that do not use this methodology,1998,0, 1135,Towards more efficient TMN model development. TMN information modal-quality evaluation,"There has been a dramatic growth in the demand for telecommunications management systems since the advent of a large volume of new technologies for telecommunications service delivery (SDH, ATM, FR, etc.). TMN is an approach favoured by many management systems users and providers. Development of TMNs is a complex issue. Among the tasks to be performed during TMN development is the modelling of information to be exchanged across TMN interfaces. Quality is an important matter in all system development activities and this is also the case when producing the TMN information models. It is preferable to detect and resolve errors in all software artifacts as soon as possible in the system life-cycle as the cost of making corrections grows as development progresses",1998,0, 1136,Analysis of preventive maintenance in transactions based software systems,"Preventive maintenance of operational software systems, a novel technique for software fault tolerance, is used specifically to counteract the phenomenon of software "aging". However, it incurs some overhead. The necessity to do preventive maintenance, not only in general purpose software systems of mass use, but also in safety-critical and highly available systems, clearly indicates the need to follow an analysis based approach to determine the optimal times to perform preventive maintenance. In this paper, we present an analytical model of a software system which serves transactions. Due to aging, not only the service rate of the software decreases with time, but also the software itself experiences crash/hang failures which result in its unavailability. Two policies for preventive maintenance are modeled and expressions for resulting steady state availability, probability that an arriving transaction is lost and an upper bound on the expected response time of a transaction are derived. Numerical examples are presented to illustrate the applicability of the models",1997,0, 1137,"Complexities in DSP software compilation: performance, code size, power, retargetability","The paper presents a new methodology for software compilation for embedded DSP systems.
Although it is well known that conventional compilation techniques do not produce high quality DSP code, few researchers have addressed this area. Performance, estimated power dissipation, and code size are important design constraints in embedded DSP design. New techniques for code generation targeting DSP processors are introduced and employed to show improvements and applicability to different popular fixed point and floating point DSP architectures. Code is generated in fast CPU times and is optimized for minimum code size, energy dissipation, or maximum performance. Code generated for realistic DSP applications provides performance and code size improvements of up to 118% and measured power improvements of up to 49% for popular DSP processors compared to previous research and a commercial compiler. This research is important for industry since DSP software can be efficiently generated, with constraints on code size, performance, and energy dissipation",1998,0, 1138,A fault-tolerant distributed sorting algorithm in tree networks,"The paper presents a distributed sorting algorithm for a network with a tree topology. The distributed sorting problem can be informally described as follows: nodes cooperate to reach a global configuration where every node, depending on its identifier, is assigned a specific final value taken from a set of input values distributed across all nodes; the input values may change in time. In our solution, the system reaches its final configuration in a finite time after the input values are stable and the faults cease. The fault tolerance and adaptation to changing input are achieved using Dijkstra's paradigm of self stabilization. Our solution is based on continuous broadcast with acknowledgment along the tree network to achieve synchronization among processes in the system; it has O(nh) time complexity and only O(log(n)deg) memory requirement where deg is the degree of the tree and h is the height of the tree",1998,0, 1139,Enhancing multimedia protocol performance through knowledge based modeling,"Multimedia transmission protocols take advantage of video compression coders. The core of video sequence compression coders is the motion estimation task. Unfortunately video sequence coders cannot be implemented effectively on desktop workstations because of the high computing requirements of the motion estimation task. It is possible to reduce the computing power required by penalizing the compression ratio (i.e. increasing output bandwidth) and/or video quality, but in the case of standard software coders running on desktop computers the penalty of compression and quality becomes unacceptable. In order to improve both the compression ratio and processing time we propose to enhance multimedia protocol design through the modeling of the specific application domain. We show the technique in a case study, namely the design of an MPEG coding algorithm targeted to compress video sequences generated by highway surveillance cameras. In addition we show the effectiveness of the technique with experimental results",1998,0, 1140,New network QoS measures for FEC-based audio applications on the Internet,"New network QoS (quality of service) measures for interactive audio applications using FEC (forward error control) are proposed. Applications such as an Internet phone require both low data loss and a short delay for the underlying transport layer.
The FEC-based error control has become popular as a way of meeting such requirements; new application data (or highly compressed data) are copied onto successive packets, so that random packet losses in networks can be concealed to some extent. From the viewpoint of FEC-based applications, actual QoS depends not only on the loss and delay of each packet, but also on the loss and delay of successive packets. Conventional network QoS measures such as loss rate and delay distribution, however, only focus on each packet. Therefore, the probability of long successive losses, for example, cannot be monitored, even though they strongly affect FEC-based application QoS. We propose a new concept named "loss window size" for measuring the QoS of successive packets. Definitions of loss and delay are generalized using this concept. These definitions take the FEC-based error concealment into account. Therefore, these measures enable more precise estimation of FEC-based application-level QoS than conventional measures. In order to show the effectiveness of the proposed measures, we have built an experimental monitoring system on working networks. The actual data show that network QoS may vary from time to time in terms of newly defined measures, even though QoS variation using conventional measures is not so apparent. We also model FEC-based application QoS, and show that application QoS and proposed network measures correspond well",1997,0, 1141,A study on the application of wavelet analysis to power quality,"The quality of electric power is a major concern, since poor electric power quality results in malfunctions, instabilities and shorter lifetime of the load. The poor quality of electric power can be attributed to power line disturbances such as waveshape faults, overvoltages, capacitor switching transients, harmonic distortion and impulse transients. For diagnosing power quality problems, the causes of the disturbances should be understood before appropriate actions can be taken. Monitoring the voltages and currents is the first requirement in this process. In order to determine the causes, one should be able to classify the type of disturbances from the monitored data. Several approaches such as point-to-point comparison of adjacent systems, neural networks and expert systems have been proposed and implemented in the past. Wavelet analysis is gaining greater importance for the recognition of disturbances. A study about the performance of wavelets in detecting and classifying the power quality disturbances is reported in this paper",1997,0, 1142,Application of a fuzzy classification technique in computer grading of fish products,"This work presents the enhancement and application of a fuzzy classification technique for automated grading of fish products. Common features inherent in grading-type data and their specific requirements in processing for classification are identified. A fuzzy classifier with a four-level hierarchy is developed based on the "generalized K-nearest neighbor rules". Both conventional and fuzzy classifiers are examined using a realistic set of herring roe data (collected from the fish processing industry) to compare the classification performance in terms of accuracy and computational cost. The classification results show that the generalized fuzzy classifier provides the best accuracy at 89%. The grading system can be tuned through two parameters-the threshold of fuzziness and the cost weighting of error types-to achieve higher classification accuracy.
An optimization scheme is also incorporated into the system for automatic determination of these parameter values with respect to a specific optimization function that is based on process renditions, including the product price and labor cost. Since the primary common features are accommodated in the classification algorithm, the method presented here provides a general capability for both grading and sorting-type problems in food processing",1998,0, 1143,Globally convergent edge-preserving regularized reconstruction: an application to limited-angle tomography,"We introduce a generalization of a deterministic relaxation algorithm for edge-preserving regularization in linear inverse problems. This algorithm transforms the original (possibly nonconvex) optimization problem into a sequence of quadratic optimization problems, and has been shown to converge under certain conditions when the original cost functional being minimized is strictly convex. We prove that our more general algorithm is globally convergent (i.e., converges to a local minimum from any initialization) under less restrictive conditions, even when the original cost functional is nonconvex. We apply this algorithm to tomographic reconstruction from limited-angle data by formulating the problem as one of regularized least-squares optimization. The results demonstrate that the constraint of piecewise smoothness, applied through the use of edge-preserving regularization, can provide excellent limited-angle tomographic reconstructions. Two edge-preserving regularizers, one convex and the other nonconvex, are used in numerous simulations to demonstrate the effectiveness of the algorithm under various limited-angle scenarios, and to explore how factors, such as the choice of error norm, angular sampling rate and amount of noise, affect the reconstruction quality and algorithm performance. These simulation results show that for this application, the nonconvex regularizer produces consistently superior results",1998,0, 1144,Analysis of checkpointing schemes with task duplication,"The paper suggests a technique for analyzing the performance of checkpointing schemes with task duplication. We show how this technique can be used to derive the average execution time of a task and other important parameters related to the performance of checkpointing schemes. The analysis results are used to study and compare the performance of four existing checkpointing schemes. Our comparison results show that, in general, the number of processors used, not the complexity of the scheme, has the most effect on the scheme performance",1998,0, 1145,Lessons from using Z to specify a software tool,"The authors were recently involved in the development of a COBOL parser (G. Ostrolenk et al., 1994), specified formally in Z. The type of problem tackled was well suited to a formal language. The specification process was part of a life cycle characterized by the front loading of effort in the specification stage and the inclusion of a statistical testing stage. The specification was found to be error-dense and difficult to comprehend. Z was inappropriately used to specify procedural rather than declarative detail. Modularity and style problems in the Z specification made it difficult to review. In this sense, the application of formal methods was not successful. Despite these problems the estimated fault density for the product was 1.3 faults per KLOC, before delivery, which compares favorably with IBM's Cleanroom method.
This was achieved, despite the low quality of the Z specification, through meticulous and effort-intensive reviews. However, because the faults were in critical locations, the reliability of the product was assessed to be unacceptably low. This demonstrates the necessity of assessing reliability as well as “correctness” during system testing. Overall, the experiences reported in the paper suggest a range of important lessons for anyone contemplating the practical application of formal methods",1998,0, 1146,OOA metrics for the Unified Modeling Language,"UML is the emerging standard for expressing OOA/OOD models. New metrics for object oriented analysis models are introduced, and existing ones are adapted to the entities and concepts of UML. In particular, these metrics concern UML use case diagrams and class diagrams used during the OOA phase. The proposed metrics are intended to allow an early estimate of development efforts, implementation time and cost of the system under development, and to measure its object-orientedness and quality since the beginning of the analysis phase. The proposed metric suite is described in detail, and its relations with proposed metrics found in the literature are highlighted. Some measurements on three software projects are given",1998,0, 1147,Software testability measurements derived from data flow analysis,"The purpose of the research is to develop formulations to measure the testability of a program. Testability is a program's property which is introduced with the intention of predicting efforts required for testing the program. A program with a high degree of testability indicates that a selected testing criterion could be achieved with less effort and the existing faults can be revealed more easily during testing. We propose a new program normalization strategy that makes the measurement of testability more precise and reasonable. If the program testability metric derived from data flow analysis could be applied at the beginning of a software testing phase, much more effective testing resource allocation and prioritizing is possible",1998,0, 1148,Modeling maintenance effort by means of dynamic systems,"The dynamic evolution of ecological systems in which predators and prey compete for survival has been investigated by applying suitable mathematical models. Dynamic systems theory provides a useful way to model interspecies competition and thus the evolution of predators and prey populations. This kind of mathematical framework has been shown to be well suited to describe the evolution of economic systems as well, where instead of predators and prey there are consumers and resources. This paper suggests how dynamic systems could be usefully applied to the maintenance context, namely to model the dynamic evolution of the maintenance effort. When maintainers start trying to recognize and correct code defects, while the number of residual defects decreases, the effort spent to find out any new defect has an initial increase, followed by a decline, in a similar way to prey and predator populations. The feasibility of this approach is supported by the experimental data about a 67-month maintenance task of a software project and its successive releases",1998,0, 1149,Identifying fault prone modules: an empirical study in telecommunication system,"Telecommunication systems are becoming more dependent on software-intensive products and systems. Poor software quality can threaten safety, risk a company's business, or alienate potential customers.
It is no longer acceptable to ignore software quality until just prior to a product's release. This study identifies troublesome modules in a large telecommunication system. For this, we propose software metrics for the CHILL language, which is used to develop the telecommunication software. We present a method for identifying fault-prone modules of telecommunication software using neural networks. We investigate the relationship between the proposed metrics and the change request frequency of software modules which are found during the development phase. Using the neural network model, we classify software modules as either fault prone or not fault prone based on the proposed metrics. The experimental results show that the total fitting rate over 52 testing data sets was 96.2%. Therefore, for newly added software modules, we can predict whether the software module is fault prone or not",1998,0, 1150,Measuring design-level cohesion,"Cohesion was first introduced as a software attribute that, when measured, could be used to predict properties of implementations that would be created from a given design. Unfortunately, cohesion, as originally defined, could not be objectively assessed, while more recently developed objective cohesion measures depend on code-level information. We show that association-based and slice-based approaches can be used to measure cohesion using only design-level information. An analytical and empirical analysis shows that the design-level measures correspond closely with code-level cohesion measures. They can be used as predictors of or surrogates for the code-level measures. The design-level cohesion measures are formally defined, have been implemented, and can support software design, maintenance and restructuring",1998,0, 1151,Using process history to predict software quality,"Many software quality models use only software product metrics to predict module reliability. For evolving systems, however, software process measures are also important. In this case study, the authors use module history data to predict module reliability in a subsystem of JStars, a real time military system",1998,0, 1152,Adaptive thresholding for proactive network problem detection,"The detection of network fault scenarios has been achieved using the statistical information contained in the Management Information Base (MIB) variables. An appropriate subset of MIB variables was chosen in order to adequately describe the function of the node. The time series data obtained from these variables was analyzed using a sequential generalized likelihood ratio (GLR) test. The GLR test was used to detect the change points in the behavior of the variables. Using a binary hypothesis test, variable level alarms were generated based on the magnitude of the detected changes as compared to the normal situation. These alarms were combined using a duration filter resulting in a set of node level alarms, which correlated with the experimentally observed network faults and performance problems. The algorithm has been tested on real network data. The applicability of our algorithm to a heterogeneous node was confirmed by using the MIB data from a second node.
Interestingly, for most of the faults studied, detection occurs in advance of the fault (at least 5 min) and the algorithm is simple enough for potential online implementation, thus allowing the possibility of prediction and recovery in the future",1998,0, 1153,Learning visually guided grasping: a test case in sensorimotor learning,"We present a general scheme for learning sensorimotor tasks, which allows rapid online learning and generalization of the learned knowledge to unfamiliar objects. The scheme consists of two modules, the first generating candidate actions and the second estimating their quality. Both modules work in an alternating fashion until an action which is expected to provide satisfactory performance is generated, at which point the system executes the action. We developed a method for off-line selection of heuristic strategies and quality predicting features, based on statistical analysis. The usefulness of the scheme was demonstrated in the context of learning visually guided grasping. We consider a system that coordinates a parallel-jaw gripper and a fixed camera. The system learns to estimate grasp quality by learning a function from relevant visual features to the quality. An experimental setup using an AdeptOne manipulator was developed to test the scheme",1998,0, 1154,The application of a 3-D empirical ionosphere model to WAAS simulation,"The Wide Area Augmentation System (WAAS) is designed to augment the Global Positioning System (GPS). One of the services provided by WAAS is a broadcast ionospheric correction based upon dual-frequency measurements. These observations are used to construct a grid of expected vertical ionospheric delays. To date, the broadcast single frequency model has been used to project vertical delay observations onto the predefined grid points. The performance of the WAAS ionospheric grid could be improved by incorporating a more realistic description of the ionosphere than this nominal transport model. In this paper we explore the use of IRI90 for this purpose by means of a software-based WAAS simulator. The quality of the broadcast WAAS ionospheric corrections is tested by comparing the ionospheric delays predicted by the WAAS grid algorithm with those derived from a truth source. Results to date indicate that using a transport model derived from IRI90 yields a small improvement over the broadcast model",1998,0, 1155,VHDL behavioral ATPG and fault simulation of digital systems,"Due to the increasing level of integration achieved by Very Large Scale Integrated (VLSI) technology, traditional gate-level fault simulation is becoming more complex, difficult, and costly. Furthermore, circuit designs are increasingly being developed through the use of powerful VLSI computer-aided design (CAD) synthesis tools which emphasize circuit descriptions using high-level representations of functional behavior, rather than physical architectures and layout. Behavioral fault simulation applied to top functional-level models described using a hardware description language offers a very attractive alternative to these problems. A new way to simulate behavioral fault models using a hardware description language (HDL) such as VHDL is presented. Tests were generated by carrying out the behavioral fault simulation for a few circuit models. For comparison, a gate-level fault simulation on the equivalent circuit, produced via a synthesis tool, was used.
The performance analysis shows that a very small number of test patterns generated by the behavioral automatic test pattern generation (ATPG)/fault simulation system detected around 98 percent of the testable gate-level faults that were detected by random test",1998,0, 1156,Performance test case generation for microprocessors,"We describe a systematic methodology for generating performance test cases for current generation microprocessors. Such test cases are used for: (a) validating the expected pipeline flow behavior and timing; and, (b) detecting and diagnosing performance bugs in the design. We cite examples of application to a real, superscalar processor in pre- and post-silicon stages of development",1998,0, 1157,Estimation of error detection probability and latency of checking methods for a given circuit under check,"A technique of sampling of input vectors (SIV) with statistical measurements is used for the estimation of error detection probability and fault latency of different checking methods. Application of the technique for Berger code, mod3 and parity checking for combinational circuits is considered. The experimental results obtained by a Pilot Software System are presented. The technique may be implemented as an overhead to an already existing fault simulator",1998,0, 1158,An empirical study of regression test selection techniques,"Regression testing is an expensive maintenance process directed at validating modified software. Regression test selection techniques attempt to reduce the cost of regression testing by selecting tests from a program's existing test suite. Many regression test selection techniques have been proposed. Although there have been some analytical and empirical evaluations of individual techniques, to our knowledge only one comparative study, focusing on one aspect of two of these techniques, has been performed. We conducted an experiment to examine the relative costs and benefits of several regression test selection techniques. The experiment examined five techniques for reusing tests, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs. Our results highlight several differences between the techniques, and expose essential tradeoffs that should be considered when choosing a technique for practical application",1998,0, 1159,Defect content estimations from review data,Reviews are essential for defect detection and they provide an opportunity to control the software development process. This paper focuses upon methods for estimating the defect content after a review and hence to provide support for process control. Two new estimation methods are introduced as the assumptions of the existing statistical methods are not fulfilled. The new methods are compared with a maximum-likelihood approach. Data from several reviews are used to evaluate the different methods. It is concluded that the new estimation methods provide new opportunities to estimate the defect content,1998,0, 1160,Analyzing effects of cost estimation accuracy on quality and productivity,"This paper discusses the effects of estimation accuracy for software development cost on both the quality of the delivered code and the productivity of the development team. The estimation accuracy is measured by metric RE (relative error). The quality and productivity are measured by metrics FQ (field quality) and TP (team productivity).
Using actual project data on thirty-one projects at a certain company, the following are verified by correlation analysis and testing of statistical hypotheses. There is a high correlation between the faithfulness of the development plan to standards and the value of RE (a coefficient of correlation between them is -0.60). Both FQ and TP are significantly different between projects with -10%2 value of 0.57 for the full sample",1998,0, 1166,PC based monitoring and fault prediction for small hydroelectric plants,"Financial constraints often mean that small hydroelectric plants run unattended and hence the recording and monitoring of the performance of the plant relies on, sometimes infrequent, visits by appropriate personnel. The LabView package has appeared on the market and this enables the creation of “Virtual Instruments” which can display, on the screen of a PC, measured quantities such as are required at a typical small hydroelectric power station. The use of a PC based metering scheme gives rise to numerous added benefits; the measured data can be stored in the memory of the PC to allow subsequent downloading and analysis; the data can be analysed by other software to detect and predict fault conditions, i.e. to perform a condition monitoring function; and the data can be interrogated remotely by a suitable communications link to enable analysis and monitoring, effectively from anywhere in the world. During the 1996/97 session, two final year Honours projects were running at Napier University which related to the use of affordable PC based systems, the first performing a metering function, and the second performing a fault prediction function. This paper reviews maintenance strategies and condition monitoring techniques and then describes the development and outcomes of the two projects (a generator monitoring system and a condition monitoring and fault prediction program)",1998,0, 1167,An automated approach for identifying potential vulnerabilities in software,"The paper presents results from analyzing the vulnerability of security-critical software applications to malicious threats and anomalous events using an automated fault injection analysis approach. The work is based on the well understood premise that a large proportion of security violations result from errors in software source code and configuration. The methodology employs software fault injection to force anomalous program states during the execution of software and observes their corresponding effects on system security. If insecure behaviour is detected, the perturbed location that resulted in the violation is isolated for further analysis and possibly retrofitting with fault tolerant mechanisms",1998,0, 1168,Uncertainty model for product environmental performance scoring,"As product designers begin to make design decisions motivated by environmental performance, it is critical for environmental experts to present designers with useful information about the quality of both performance measures and the data on which the measures are based. In this paper, we discuss sources of uncertainty in environmental performance measurement using Motorola's Green Design Advisor environmental scoring software. We present a probabilistic method for measuring both data and model uncertainty in Green Design Advisor scores. Using Monte Carlo simulation, this model was applied to two Motorola product designs to generate probability distributions for the scores.
The products have the same functionality, but one was designed with environmental performance in mind. The results of this study indicate that the uncertainty model can distinguish between Green Design Advisor scores which differ by 10% or more. In addition, approximation formulas were developed which give results similar to the simulation results, with a computation time short enough to be useful for environmental scoring software",1998,0, 1169,Performance and interface buffer size driven behavioral partitioning for embedded systems,"One of the major differences in partitioning for co-design is in the way the communication cost is evaluated. Generally, the size of the edge cut-set is used. When communication between components is through buffered channels, the size of the edge cut-set is not adequate to estimate the buffer size. A second important factor to measure the quality of partitioning is the system delay. Most partitioning approaches use the number of nodes/functions in each partition as constraints and attempt to minimize the communication cost. The data dependencies among nodes/functions and their delays are not considered. In this paper, we present partitioning with two objectives: (1) buffer size, which is estimated by analyzing the data flow patterns of the control data flow graph (CDFG) and solved as a clique partitioning problem, and (2) the system delay that is estimated using list scheduling. We pose the problem as a combinatorial optimization problem and use an efficient non-deterministic search algorithm, called the problem-space genetic algorithm, to search for the optimum. Experimental results indicate that, according to a proposed quality metric, our approach can attain an average 87% of the optimum for two-way partitioning",1998,0, 1170,Performance evaluation tool for rapid prototyping of hardware-software codesigns,"Performance evaluation is essential for tradeoff analysis during rapid prototyping. Existing performance evaluation strategies based on co-simulation and static analysis are either too slow or error prone. We therefore present an intermediate approach based on profiling and scheduling for rapid prototyping of hardware-software codesigns. Our performance evaluation tool obtains representative task timings by profiling, which is done simultaneously with system specification. During design space exploration the tool obtains performance estimates by using well known scheduling and novel retiming heuristics. It is capable of obtaining both nonpipelined and pipelined schedules. The tool includes an area estimator which calculates the amount of hardware area required by the design by taking resource sharing between different hardware tasks into account. The tool also allows the user to evaluate the performance of a particular schedule with different task timings. In contrast to co-simulation and static analysis, the tool is able to provide fast and accurate performance estimates. The effectiveness of the tool in a rapid prototyping environment is demonstrated by a case study",1998,0, 1171,A controlled experiment to assess the benefits of procedure argument type checking,"Type checking is considered an important mechanism for detecting programming errors, especially interface errors. This report describes an experiment to assess the defect-detection capabilities of static, intermodule type checking. The experiment uses ANSI C and Kernighan & Ritchie (K&R) C.
The relevant difference is that the ANSI C compiler checks module interfaces (i.e., the parameter lists in calls to external functions), whereas K&R C does not. The experiment employs a counterbalanced design in which each of the 40 subjects, most of them CS PhD students, writes two nontrivial programs that interface with a complex library (Motif). Each subject writes one program in ANSI C and one in K&R C. The input to each compiler run is saved and manually analyzed for defects. Results indicate that delivered ANSI C programs contain significantly fewer interface defects than delivered K&R C programs. Furthermore, after subjects have gained some familiarity with the interface they are using, ANSI C programmers remove defects faster and are more productive (measured in both delivery time and functionality implemented)",1998,0, 1172,Finding a suitable wavelet for image compression applications,"In this paper we assess the relative merits of various types of wavelet functions for use in a wide range of image compression scenarios. We have delineated different algorithmic criteria that can be used for wavelet evaluation. The assessment undertaken includes both algorithmic aspects (fidelity, perceptual quality) as well as suitability for real-time implementation in hardware. The results obtained indicate that of the wavelets studied the biorthogonal 9&7 taps wavelet is the most suitable from a compression perspective and that the Daubechies 8 taps gives best performance when assessed solely in terms of statistical measures",1998,0, 1173,An ANN fault detection procedure applied in virtual measurement systems case,"The prompt detection of anomalous conditions of the Virtual Measurement Systems (VMS) elements, such as sensors or transducers, involves a specialised implementation of the VMS software part. One solution for this software implementation is based on the neural processing structures. The implemented Artificial Neural Networks (ANN) are supplied with the voltage signals delivered by the conditioning circuits of the VMS sensors. The signal acquisition was performed using a data acquisition board or a programmable voltmeter. For the acquired signals the ANN delivers the values which can be used for fault detection and localisation of faulty elements. Referring to ANN architectures a study concerning the number of layers, the number of processing neurons, the type of neuron activation functions and the possibilities to optimise those parameters is included in this paper. The performance of the proposed ANN fault detection solution was experimentally evaluated in the particular case of a VMS based on a data acquisition board (DAQ) and on a GPIB controller",1998,0, 1174,Fault-tolerant software voters based on fuzzy equivalence relations,"Redundancy based fault-tolerant software strategies frequently use some form of voting to decide which of the answers their functionally equivalent versions produce is “correct.” When voting involves comparisons of floating-point outputs, it is necessary to use a tolerance value in order to declare two outputs equal. Comparing more than two floating-point outputs for equivalence is potentially problematic since, in general, floating-point comparisons based on fixed tolerance may not form an equivalence relation, i.e., the comparisons may not be transitive. The more versions are involved, the more acute this problem becomes.
This paper discusses an approach that in some situations alleviates this problem by forming a fuzzy equivalence relation (that preserves transitivity). We applied the method to different voting techniques such as consensus voting and maximum likelihood voting. Our analyses, based on simulations, show that voting that uses fuzzy equivalence relations performs better than “classical” voting provided some criteria are met",1998,0, 1175,Evaluation of fault-tolerance latency from real-time application's perspectives,"Information on Fault-Tolerance Latency (FTL), which is defined as the total time required by all sequential steps taken to recover from an error, is important to the design and evaluation of fault-tolerant computers used in safety-critical real-time control systems with deadline information. In this paper we evaluate FTL in terms of several random and deterministic variables accounting for fault behaviors and/or the capability and performance of error-handling mechanisms, while considering various fault-tolerance mechanisms based on the tradeoff between temporal and spatial redundancy, and use the evaluated FTL to check if an error-handling policy can meet the Control System Deadline (CSD) for a given real-time application",1998,0, 1176,Error propagation analysis studies in a nuclear research code,"Software fault injection has recently started showing great promise as a means for measuring the safety of software programs. This paper shows the results from applying one method for implementing software fault injection, data state error corruption, to the Departure from Nucleate Boiling Ratio (DNBR) code, a research code from the Halden Reactor Project, located in Halden, Norway",1998,0, 1177,An integrated approach to achieving high software reliability,"In this paper we address the development, testing, and evaluation schemes for software reliability, and the integration of these schemes into a unified and consistent paradigm. Specifically, techniques and tools for the three software reliability engineering phases are described. The three phases are (1) modeling and analysis, (2) design and implementation, and (3) testing and measurement. In the modeling and analysis phase we describe Markov modeling and fault-tree analysis techniques. We present system-level reliability models based on these techniques, and provide modeling examples for reliability analysis and study. We describe how reliability block diagrams can be constructed for a real-world system for reliability prediction, and how critical components can be identified. We also apply fault tree models to fault tolerant system architectures, and formulate the resulting reliability quantity. Finally, we describe two software tools, SHARPE and UltraSAN, which are available for reliability modeling and analysis purposes",1998,0, 1178,Application of metrics to object-oriented designs,"The benefits of object-oriented software development are now widely recognized. However, methodologies that are used for the object-oriented software development process are still in their infancy. There is a lack of methods available to assess the quality of the various components that are derived during the development process. In this paper, we describe a method to assess the quality of object-oriented designs. We utilize a basic set of object-oriented metrics that is proposed by Shyam Chidamber et al. (1991 and 1994). We perform experimental tests on a set of object-oriented designs using the NOC metric.
Also, we refer to our ongoing work to provide automated assistance to help restructure the design based on the metric findings",1998,0, 1179,ICC '98. 1998 IEEE International Conference on Communications. Conference Record. Affiliated with SUPERCOMM'98 (Cat. No.98CH36220),Presents the front cover of the proceedings.,1998,0, 1180,Worst case signalling traffic for a multi-service access protocol,"Modern multi-service medium access protocols use a collision based capacity request signalling channel. Such signalling channels may be based on the slotted Aloha multiaccess principle. This paper studies the performance of slotted Aloha subject to extreme inter-station correlation by means of a discrete-time Markov chain analysis. We study conditions whereby the time to collision resolution becomes unacceptably high (defined as deadlock). Three signalling channel management schemes for alleviating the deadlock problem are evaluated. Of these, the cyclic contention mini-slot (CMS) sharing technique employing multiple CMSs per data slot is the one that extends the protocol's useable load region the furthest. We find that implementation of a scheme, which dynamically adjusts the p-persistence parameter towards its optimal value, is desirable. Both error free and error prone conditions are studied. The results highlight the fact that the critical signalling load is largely unaffected by the presence of errors, so that even in extremely error prone environments, the limiting performance factor is still the collision rate",1998,0, 1181,Design and analysis of a fair scheduling algorithm for QoS guarantees in high-speed packet-switched networks,"B-ISDNs are required to support a variety of services such as audio, data, and video, so that the guarantee of quality-of-service (QoS) has become an increasingly important problem. An effective fair scheduling algorithm permits high-speed switches to divide link bandwidth fairly among competing connections. Together with the connection admission control, it can guarantee the QoS of connections. We propose a novel fair scheduling algorithm, called “virtual-time-based round robin (VTRR)”. Our scheme maps the priorities of packets into classes and provides service to the first non-empty class in each round. Also, it uses an estimation method of the virtual time necessary for this service discipline. To find the first non-empty class, the VTRR adopts a priority queueing system of O(loglog c) which decreases the number of instructions which need to be carried out in one packet transmission time segment. These policies help the VTRR implementation in software, which presents flexibility for upgrades. Our analysis has demonstrated that the VTRR provides bounded unfairness and its performance is close to that of weighted fair queuing. Therefore, the VTRR has a good performance as well as simplicity, so that it is suitable for high-speed B-ISDN",1998,0, 1182,Verification of the redundancy management system for space launch vehicle: a case study,"Formal methods have been widely recognized as effective techniques to uncover design errors that could be missed by a conventional software engineering process. The paper describes the authors' experience with using formal methods in analyzing the redundancy management system (RMS) for a space launch vehicle. RMS is developed by AlliedSignal Inc. for the avionics of NASA's new space shuttle, called VentureStar, that meets the expectations for space missions in the 21st century.
A process algebraic formalism is used to construct a formal specification based on the actual RMS design specifications. Analysis is performed using PARAGON, a toolset for formal specification and verification of distributed real-time systems. A number of real-time and fault-tolerance properties were verified, allowing some errors in the RMS pseudocode to be detected. The paper discusses the translation of the RMS specification into process algebra formal notation and results of the formal verification",1998,0, 1183,A framework based approach to the development of network aware applications,"Modern networks provide a QoS (quality of service) model to go beyond best-effort services, but current QoS models are oriented towards low-level network parameters (e.g., bandwidth, latency, jitter). Application developers, on the other hand, are interested in quality models that are meaningful to the end-user and, therefore, struggle to bridge the gap between network and application QoS models. Examples of application quality models are response time, predictability or a budget (for transmission costs). Applications that can deal with changes in the network environment are called network-aware. A network-aware application attempts to adjust its resource demands in response to network performance variations. This paper presents a framework-based approach to the construction of network-aware programs. At the core of the framework is a feedback loop that controls the adjustment of the application to network properties. The framework provides the skeleton to address two fundamental challenges for the construction of network-aware applications: how to find out about dynamic changes in network service quality; and how to map application-centric quality measures (e.g., predictability) to network-centric quality measures (e.g., QoS models that focus on bandwidth or latency). Our preliminary experience with a prototype network-aware image retrieval system demonstrates the feasibility of our approach. The prototype illustrates that there is more to network-awareness than just taking network resources and protocols into account and raises questions that need to be addressed (from a software engineering point of view) to make a general approach to network-aware applications useful",1998,0, 1184,Design of a software quality decision system: a computational intelligence approach,"This paper introduces an approximate reasoning system for assessing software quality and introduces the application of two computational intelligence methods in designing a software quality decision system, namely, granulation from fuzzy sets and rule-derivation from rough sets. This research is part of a computational intelligent systems approach to software quality evaluation, which includes a fuzzy-neural software quality factor-criteria selection model with learning and a rough-fuzzy-neural software quality decision system. Overall, computational intelligence results from a synergy of various combinations of genetic, fuzzy, rough and neural computing in designing engineering systems. Based on observations concerning software quality and the granulations of measurements in an extended form of the McCall software quality measurement framework, an approach to deriving rules about software quality is given. Quality decision rules express relationships between evaluations of software quality criteria measurements. 
A quality decision table is constructed relative to the degree of membership of each software quality measurement in particular granules. Decision-tables themselves act as a collection of sensors, which “sense” inputs and output conditions for rules. Rosetta is used to generate quality decision rules. The approach described in this paper illustrates the combined application of fuzzy sets and rough sets in developing a software quality decision system",1998,0, 1185,Metrics for software architecture: a case study in the telecommunication domain,"Due to the ever increasing size and complexity of industrial software products, the issue of design and architecture level metrics has received considerable attention. We propose some new metrics, in addition to employing some existing metrics, to understand their effects on four important quality attributes of software architectures: interoperability, reusability, reliability and maintainability. We also propose a simple `concept selection' methodology to assess these quality attributes using the raw metrics. Data from a very large-scale telecommunications software product and its changing software qualities are measured as the software architecture evolves during two releases of the product. Interesting questions arise as to how to determine the overall quality of an evolving product",1998,0, 1186,Is software quality visible in the code,"Finding and fixing software code defects is crucial to product quality. It also, however, often proves difficult and time-consuming, especially late in the development cycle. While some believe that code analysis provides a simple way to detect quality defects, the authors argue otherwise. To prove their point, they analyzed data from error reports, and their results show that code analysis detects only a small percentage of meaningful defects",1998,0, 1187,Return on investment of software quality predictions,"Software quality classification models can be used to target reliability enhancement efforts toward high risk modules. We summarize a generalized classification rule which we have proposed. Cost aspects of a software quality classification model are discussed. The contribution of this paper is a demonstration of how to assess the return on investment of model accuracy, in the context of a software quality classification model. An industrial case study of a very large telecommunications system illustrates the method. The dependent variable of the model was the probability that a module will have faults discovered by customers. The independent variables were software product and process metrics. The model is compared to random selection of modules for reliability enhancement. Calculation of return on investment can guide selection of the generalized classification rule's parameter so that the model is well-suited to the project",1998,0, 1188,Empirical studies of a safe regression test selection technique,"Regression testing is an expensive testing procedure utilized to validate modified software. Regression test selection techniques attempt to reduce the cost of regression testing by selecting a subset of a program's existing test suite. Safe regression test selection techniques select subsets that, under certain well-defined conditions, exclude no tests (from the original test suite) that if executed would reveal faults in the modified software. Many regression test selection techniques, including several safe techniques, have been proposed, but few have been subjected to empirical validation.
This paper reports empirical studies on a particular safe regression test selection technique, in which the technique is compared to the alternative regression testing strategy of running all tests. The results indicate that safe regression test selection can be cost-effective, but that its costs and benefits vary widely based on a number of factors. In particular, test suite design can significantly affect the effectiveness of test selection, and coverage-based test suites may provide test selection results superior to those provided by test suites that are not coverage-based",1998,0, 1189,"Design, implementation, and evaluation of highly available distributed call processing systems","This paper presents the design of a highly available distributed call processing system and its implementation on a local area network of commercial, off-the-shelf workstations. A major challenge of using off-the-shelf components is meeting the strict performance and availability requirements in place for existing public telecommunications systems in a cost-effective manner. Traditional checkpointing and message logging schemes for general distributed applications are not directly applicable since call processing applications built using these schemes suffer from high failure-free overhead and long recovery delays. We propose an application-level fault-tolerance scheme that takes advantage of general properties of distributed call processing systems to avoid message logging and to limit checkpointing overhead. The proposed scheme, applied to a call processing system for wireless networks, shows average call setup latencies of 180 ms, failover times of less than three seconds, and recovery times of less than seventeen seconds. System availability is estimated to be 0.99995. The results indicate that using our proposed scheme meets the above challenge.",1998,0, 1190,A technique for automated validation of fault tolerant designs using laser fault injection (LFI),"This paper describes the successful development and demonstration of a Laser Fault Injection (LFI) technique to inject soft, i.e., transient, faults into VLSI circuits in a precisely-controlled, non-destructive, non-intrusive manner for the purpose of validating fault tolerant design and performance. The technique described in this paper not only enables the validation of fault-tolerant VLSI designs, but it also offers the potential for performing automated testing of board-level and system-level fault tolerant designs including fault tolerant operating system and application software. The paper describes the results of LFI testing performed to date on test metal circuit structures, i.e., ring oscillators, flip-flops, and multiplier chains, and on an advanced RISC processor, with comprehensive on-chip concurrent error detection and instruction retry, in a working single board computer. Relative to rapid, low cost testing and validation of complex fault tolerant designs, with the automated laser system at the Laser Restructuring Facility at the University of South Florida Center for Microelectronics Research (USF/CMR), a design with 10000 test points could be tested and validated in under 17 minutes. 
In addition to describing the successful demonstration of the technique to date, the paper discusses some of the challenges that still need to be addressed to make the technique a truly practical fault tolerant design validation tool.",1998,0, 1191,Validation of the fault/error handling mechanisms of the Teraflops supercomputer,"The Teraflops system, the world's most powerful supercomputer, was developed by Intel Corporation for the US Department of Energy (DOE) as part of the Accelerated Strategic Computing Initiative (ASCI). The machine contains more than 9000 Intel Pentium (R) Pro processors and performs over one trillion floating point operations per second. Complex hardware and software mechanisms were devised for complying with DOE's reliability requirements. This paper gives a brief description of the Teraflops system architecture and presents the validation of the fault/error handling mechanisms. The validation process was based on an enhanced version of physical fault injection at the IC pin level. An original approach was developed for assessing signal sensitivity to transient faults and the effectiveness of the fault tolerance mechanisms. Several malfunctions were unveiled by the fault injection experiments. After corrective actions had been undertaken, the supercomputer performed according to the specification.",1998,0, 1192,Fault-tolerant computer for the Automated Transfer Vehicle,"Matra Marconi Space is developing their fourth generation fault-tolerant majority voting computers for use on future spacecraft such as the European Automated Transfer Vehicle, a servicing vehicle for the International Space Station. This paper presents how the computer participates in the spacecraft safety concept with emphasis on the critical on-orbit rendezvous phase. The functional architecture of the computer pool is described addressing fault containment layers, inter-channel synchronisation, communication, failed channel detection and passivation. Lastly, the hardware and software design of the test bench, built up to validate this new computer pool, is presented together with some performance measurements.",1998,0, 1193,Applying independent verification and validation to the automatic test equipment life cycle,"Automatic Test Equipment (ATE) is becoming increasingly more sophisticated and complex, with an even greater dependency on software. With the onset of Versa Modular Eurocard (VME) Extensions for Instrumentation (VXI) technology into ATE, which is supposed to bring about the promise of interchangeable instrument components, ATE customers are forced to contend with system integration and software issues. One way the ATE customer can combat these problems is to have a separate Independent Verification and Validation (IV&V) organization employ rigorous methodologies to evaluate the correctness and quality of the ATE product throughout its life cycle. IV&V is a systems engineering process where verification determines if the ATE meets its specifications, and validation determines if the ATE performs to the customer's expectations. IV&V has the highest potential payoff of assuring a safe and reliable system if it is initiated at the beginning of the acquisition life cycle and continued throughout the acceptance of the system. It is illustrated that the effects of IV&V are more pronounced when IV&V activities begin early in the software development life cycle, but later application of IV&V is still deemed to have a significant impact.
IV&V is an effective technique for reducing cost, schedule, and performance risks in the development of complex ATE, and a “tool” to efficiently and effectively manage ATE development risks. The IV&V organization has the ability to perform a focused and systematic technical evaluation of hardware and software processes and products. When performed in parallel with the ATE development life cycle, IV&V provides for early detection",1998,0, 1194,A temperature sensor fault detector as an artificial neural network application,The paper presents a fault detection method based on artificial neural network (ANN) capabilities applied in a particular case: a temperature virtual measurement system (VMS). A study concerning the ANN architectures and the ANN training rules involved in a fault detection procedure is also presented. The results obtained for the VMS based on a data acquisition board or on GPIB controlled instrumentation highlight the general applicability of neural processing,1998,0, 1195,An efficient strategy for developing a simulator for a novel concurrent multithreaded processor architecture,"In developing a simulator for a new processor architecture, it often is not clear whether it is more efficient to write a new simulator or to modify an existing simulator. Writing a new simulator forces the processor architect to develop or adapt all of the related software tools. However, modifying an existing simulator and related tools, which are usually not well-documented, can be time-consuming and error-prone. We describe the SImulator for Multithreaded Computer Architectures (SIMCA) that was developed with the primary goal of obtaining a functional simulator as quickly as possible to begin evaluating the superthreaded architecture. The performance of the simulator itself was important, but secondary. We achieved our goal using a technique called process-pipelining that exploits the unique features of this new architecture to hide the details of the underlying simulator. This approach allowed us to quickly produce a functional simulator whose performance is only 3.8-4.9 times slower than the base simulator",1998,0, 1196,Prediction of distribution transformer no-load losses using the learning vector quantization neural network,"This paper presents an artificial neural network (ANN) approach to classification of distribution transformer no-load losses. The learning vector quantization (LVQ) neural network architecture is applied for this purpose. The ANN is trained to learn the relationship among data obtained from previously completed transformer constructions. For the creation of the training and testing set actual industrial measurements are used. Data comprise grain oriented steel electrical characteristics, cores constructional parameters, quality control measurements of cores production line and transformers assembly line measurements. It is shown that ANNs are very suitable for this application since they present classification success rates between 78% and 96% for all the situations examined",1998,0, 1197,Monitoring and fault diagnosis of a polymerization reactor by interfacing knowledge-based and multivariate SPM tools,"An intelligent process monitoring and fault diagnosis environment is developed by interfacing multivariate statistical process monitoring (MSPM) techniques and knowledge-based systems (KBS) for monitoring continuous multivariable process operation.
The software is tested by monitoring the performance of a continuous stirred tank reactor for polymerization of vinyl acetate. The real-time KBS G2 and its diagnostic assistant (GDA) tool are integrated with MSPM methods based on canonical variate state space (CVSS) process models. Fault detection is based on T2 of state variables and squared prediction errors (SPE) charts. Contribution plots in G2 are used for determining the process variables that have contributed to the out-of-control signal indicated by large T2 and/or SPE values, and GDA is used to diagnose the source cause of the abnormal process behavior. The MSPM modules developed in Matlab are linked with G2 and GDA, permitting the use of MSPM tools for multivariable processes with autocorrelated data. The presentation will focus on the structure and performance of the integrated system. On-line SPM of the multivariable polymerization process is illustrated by simulation studies",1998,0, 1198,An overview of the fault protection design for the attitude control subsystem of the Cassini spacecraft,"This paper describes Cassini's fault tolerance objectives, and how those objectives have influenced the design of its attitude control subsystem (ACS). Particular emphasis is placed on the architecture of the software algorithms that are used to detect, locate, and recover from ACS faults, and the integration of these algorithms into the rest of the object-oriented ACS flight software. The interactions of these ACS “fault protection” algorithms with Cassini's system-level fault protection algorithms are also described. We believe that the architecture of the ACS fault protection algorithms provides significant performance and operability improvements over the fault protection algorithms that have flown on previous interplanetary spacecraft",1998,0, 1199,Geophysical signatures from precise altimetric height measurements in the Arctic Ocean,"The use of altimeter data in the polar regions has previously been limited by the presence of permanent and seasonal ice cover. Changes in the radar echo shape received by the altimeter over sea ice, as compared with the open ocean, cause problems in the on-board estimates of surface height, making the data unusable. The majority of noise on the signal can be reduced by retracking the full waveform data set (WAP). Careful quality control is applied to ensure that only those return echoes from which accurate height measurements can be obtained are retained. Consideration of possible backscattering mechanisms from ice and water, and comparisons with imagery, suggest that the specular waveforms typically found in altimeter data over sea ice originate from regions of open water or new thin ice exposed within the altimeter footprint. However, diffuse waveforms similar to those found in ice free seas have been observed in areas of consolidated ice, and may be used to measure ice freeboard. Until recently, even retracked heights contained substantial residual errors due to the interaction of the on-board tracking system with the complex return echoes over sea ice. Software simulation of the tracking system has led to the development of new ground processing algorithms, which further reduce the short wavelength (~26 km) noise, from 30-50 cm to around 7 cm. This provides, for the first time in ice covered seas, the capability for accurate mean sea surface generation, measurement of tidal and oceanographic signals and determination of sea ice freeboard.
The authors present the results of comparisons of sea surface height variability from ERS-2 radar altimetry in the Arctic with the output from a high resolution Arctic Ocean circulation model",1998,0, 1200,Teaching black box testing,"Historically, software testing received relatively less attention compared with other activities (e.g. systems analysis and design) of the software life cycle in an undergraduate computer science/information systems curriculum. Nevertheless, it is a common and important technique used to detect errors in software. This paper reports our recent experience of using a new approach to teaching software testing (particularly black box testing) at the University of Melbourne. Through this paper, we aim to increase the profile of software testing as well as to foster discussion of effective teaching methods of software testing",1998,0, 1201,A preliminary testability model for object-oriented software,"In software quality assurance, two approaches have been used to improve the effectiveness of fault detection. One attempts to find more efficient testing strategies; the other tries to use the design characteristics of the software itself to increase the probability of a fault revealing itself if it does exist. The latter, which combines the best features of fault prevention and fault detection, is also known as the testability approach to software quality assurance. This paper examines the factors that affect software testability in object-oriented software and proposes a preliminary framework for the evaluation of software testability metrics. The ultimate aim is to formulate a set of guidelines in object-oriented design to improve software quality by increasing its testability",1998,0, 1202,Action models: a reliability modeling formalism for fault-tolerant distributed computing systems,"Modern-day computing system design and development is characterized by increasing system complexity and ever shortening time to market. For modeling techniques to be deployed successfully, they must conveniently deal with complex system models, and must be quick and easy to use by non-specialists. In this paper we introduce “action models”, a modeling formalism that tries to achieve the above goals for reliability evaluation of fault-tolerant distributed computing systems, including both software and hardware in the analysis. The metric of interest in action models is the job success probability, and we will argue why the traditional availability metric is insufficient for the evaluation of fault-tolerant distributed systems. We formally specify action models, and introduce path-based solution algorithms to deal with the potential solution complexity of created models. In addition, we show several examples of action models, and use a preliminary tool implementation to obtain reliability results for a reliable clustered computing platform",1998,0, 1203,"Design, testing, and evaluation techniques for software reliability engineering","Software reliability is closely influenced by the creation, manifestation and impact of software faults. Consequently, software reliability can be improved by treating software faults properly, using techniques of fault tolerance, fault removal, and fault prediction. Fault tolerance techniques achieve the design for reliability, fault removal techniques achieve the testing for reliability, and fault prediction techniques achieve the evaluation for reliability.
We present best current practices in software reliability engineering for the design, testing, and evaluation purposes. We describe how fault tolerant components can be applied in software applications, how software testing can be conducted to show improvement of software reliability, and how software reliability can be modeled and measured for complex systems. We also examine the associated tools for applying these techniques",1998,0, 1204,Managing change in software development using a process improvement approach,"Describes the DESIRE (DESigned Improvements in Requirements Evolution) project, which is investigating novel ways to better manage change during software development. Our work has identified five key process improvement practices: (1) adopting a change improvement framework; (2) establishing an organisation-wide change process; (3) classifying change; (4) change estimation; and (5) measuring change improvement. In this paper, we describe each practice in detail and propose a change maturity model that reflects an organisation's capability to manage change",1998,0, 1205,Performance and quality aspects of virtual software enterprises,"High performance organization concepts are applicable to software development and necessary to meet the business goals in a globalized software market. Therefore it is necessary for individual software companies to cooperate with other organizations to stay competitive. In this case, these companies interact in a holonic network which is the basic structure of a Virtual Software Enterprise and have to contribute their core competencies. The companies have to be able to trust in the ability of the processes of other companies to meet the predicted performance and quality goals. As a consequence, different quality management processes have to be established. As the individual companies are locally distributed they need good information and communication technologies to work together. Groupware systems can help to implement a Virtual Software Enterprise",1998,0, 1206,Practical aspects on the assessment of a review process,"The article presents results and lessons learned from a case study done to the Fixed Switching Product Line of Nokia Telecommunications. The study was initiated on the fact, that no regular monitoring was done on the efficiency or profitability of the reviews at the time. Two consecutive CMM self assessments showed that Peer Reviews of CMM Level 3 KPA was fully compatible. The results of this case study suggest that the software review process is clearly established, but not optimised or used to the full potential. The metrics used in this case study can be used in other organisations as well, to assess some aspects of the review process",1998,0, 1207,A product-process dependency definition method,"The success of most software companies depends largely on software product quality. High product quality is usually a result of advanced software development processes. Hence, improvement actions should be selected based on sound knowledge about the dependencies between software product quality attributes and software development processes. The article describes a method for developing product/process dependency models (PPDMs) for product driven software process improvement. The basic idea of the PPDM approach is that there are dependencies between product quality attributes, which are examined according to ISO 9126, and the software processes, which are assessed with the BOOTSTRAP methodology for example. 
The Goal-Question-Metric approach is used for product/process dependency hypothesis generation, analysis, and validation. We claim that after finding and using these dependencies, it is possible to focus improvement activities precisely and use resources more efficiently. The approach is currently being applied in three industrial applications in the ESPRIT project PROFES",1998,0, 1208,Computational graceful degradation for video decoding,"The implementation of software based video decoders is an advantageous solution over traditional dedicated hardware real-time systems. However, guaranteeing real-time performance, needed when processing video/audio bit-streams, by scheduling processing tasks of variable load and that might exceed the available resources, is an extremely difficult and in many cases an impossible challenge. In this paper we introduce the motivations and ideas of the Computational Graceful Degradation (CGD) approach for the implementation of video decoding. The goal is to provide results of possibly lower visual quality, but respecting the real-time constraints, when the required processing needs exceed the available processing resources. This approach is very attractive since it enables a more efficient usage of the hardware processing power without needing to implement resources for worst case complexity decoding. We show how such technique can be applied to the decoding of compressed video sequences. Simulation results for MPEG-4 and H.263 video compression standards showing some of the interesting achievements and potentialities of CGD approach are also reported. The presented approach can be virtually applied to any video compression standard and processor based platform, although the best performances are achieved when complexity prediction information provided as side information by the video standard is available. This option has been included in the new MPEG-4 standard.",1998,0, 1209,Prediction of decoding time for MPEG-4 video,"This paper presents a new technique capable of predicting the processing time of video decoding tasks. Such technique is based on the collection and transmission in the compressed video bit-stream, of a suitable statistic information relative to the decoding process. Simulation results show that very accurate decoding time predictions can be obtained without the need of the actual decoding. These predictions are useful for the fundamental problem of guaranteeing, at the same time, realtime performance and an efficient use of the processing power in processor based video decoders. This new approach gives the possibility of implementing optimal resource allocation strategies and new interaction schemes between the decoder and the real-time OS. Moreover, it is the key for providing a high quality of services when the required decoding resources exceed the available ones. The described method, which is of general application on any processor platform, has been included in the new video MPEG-4 ISO standard.",1998,0, 1210,A new technique for motion estimation and compensation of the wavelet detail images,"This work proposes a new block based motion estimation and compensation technique applied on the detail images of the wavelet pyramidal decomposition. The algorithm implements two block matching criteria, namely the absolute difference (AD) and the absolute sum (AS). To assess the coding performance of this method, we have implemented a software simulation of a wavelet video encoder based on our square partitioning coder. 
Coding results indicate a considerable image quality gain, expressed by PSNR values, of up to 3.4 dB compared to intra wavelet coding for the same bit rate. The quality gain, obtained by using two matching criteria (AS and AD) instead of one (AD), varies between 0.3 and 0.5 dB.",1998,0, 1211,A fault detection service for wide area distributed computations,"The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to tradeoff timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications, monitoring the status of system components of the GUSTO computational grid testbed and as part of the NetSolve network-enabled numerical solver",1998,0, 1212,Diagnosis and monitoring of high-voltage insulation using computerized analysis of partial discharge measurements,"Transmission of electrical power requires various types of apparatus. All the apparatus and transmission-line conductors must be insulated from the high and extra-high voltages arising from the need of transmitting a large quantum of energy. As the need grows, the quality and reliability of the insulator must also grow to handle extra stresses. Hence an attempt is made to evaluate insulation in power apparatus based on the computerized analysis of partial discharge (PD) testing measurements and observations, leading to the identification of PD fault-sources of electrical insulation degradation. The database offers a decision-making aid for product development and supports condition monitoring, assessing the state of insulation. By conducting suitable experiments on two-electrode models, one does partial discharge characterization by considering one PD defect at a time. Using the database extracted from the various PD distributions attained from physical models fabricated, PD sources or defects in simple power apparatus such as surge arrestors, current transformers and artificially created defects such as slot models and rigid terminal connectors are identified successfully and evaluated. Successful recognition rate is achieved with the developed software package even in the presence of two PD sources; say, surface and internal discharges contributing to electrical insulation degradation.",1998,0, 1213,VHDL-based distributed fault simulation using SAVANT,"There is a need for simulator-independent, VHDL-based fault simulation. Existing techniques for VHDL-based fault simulation are reviewed. Robust, a simulator-independent fault simulator tool, is described. Slow simulation speed is identified as one limitation of the current Robust tool and distributed simulation on a network of workstations is identified as a feasible way to improve its performance. Previous network-of-workstations fault simulation experiments are reviewed. Current efforts to enhance the Robust tool using SAVANT are described. 
A system using Robust (with SAVANT extensions) for fault simulation on a network of workstations is proposed, using the TyVIS VHDL simulation kernel and the Legion distributed processing system",1998,0, 1214,Managing the real-time behaviour of a particle beam factory: the CERN Proton Synchrotron complex and its timing system principles,"In the CERN 26 GeV Proton Synchrotron (PS) accelerator network, super-cycles are defined as sequences of different kinds of beams produced repetitively. Each of these beams is characterised by attributes such as particle type, beam energy, its route through the accelerator network, and the final end user. The super-cycle is programmed by means of an editor through which the operational requirements of the physics programme can be described. Each beam in the normal sequence may later be replaced by a set of spare beams automatically depending on software and hardware interlocks and requests presented to the Master Timing Generator. The MTG calculates at run time how each beam is to be manufactured, and sends a telegram message to each accelerator, just before each cycle, describing what it should be doing now and during the next cycle. These messages, together with key machine timing events and clocks are encoded onto a timing distribution drop net where they are distributed around the PS complex to VME-standard timing reception TG8 modules which generate output pulses and VME bus interrupts for task synchronisation. The TG8 modules are able to use accelerator-related clocks such as the incremental/decremental magnetic field trains, or the beam revolution and radio frequencies to produce high precision beam synchronous timing. Timing Surveillance Modules (TSM) monitor these timings, which give high precision interval measurements used for the machine tuning, beam diagnostics, and fault detection systems",1998,0, 1215,Process scheduling for performance estimation and synthesis of hardware/software systems,The paper presents an approach to process scheduling for embedded systems. Target architectures consist of several processors and ASICs connected by shared busses. We have developed algorithms for process graph scheduling based on list scheduling and branch and bound strategies. One essential contribution is in the manner in which information on process allocation is used in order to efficiently derive a good quality or optimal schedule. Experiments show the superiority of these algorithms compared to previous approaches like critical path heuristics and ILP based optimal scheduling. An extension of our approach allows the scheduling of conditional process graphs capturing both data and control flow. In this case a schedule table has to be generated so that the worst case delay is minimized,1998,0, 1216,How faults can be simulated in self-testable VLSI digital circuits?,"Computer based simulation of self testable VLSI digital circuits is a time consuming process. This is why new methods are still being developed to optimise the simulation process and to reduce its duration. The paper presents a new method of fault simulation, intended for self testable digital circuits. In this method, fault masking performed by an in-circuit tester is estimated, based on only the signature itself which is stored in compressor. It is not necessary to carry out a time consuming analysis of the digital circuit's responses and compare them with stored model responses. 
Based on performed simulations, an observation was made that the developed method brings a substantial reduction of the duration of fault simulation processes performed for self testable digital circuits. It means the research laboratory needs considerably less time to verify the projects carried out on digital circuits",1998,0, 1217,Compression based refinement of spatial interpolation of missing pixels in grey scale images,"We present a method for refining interpolated values for pixels in grey scale images that were corrupted during transmission. The method uses an adaptive image model developed in the context of image compression, allowing it to exploit prior knowledge of image characteristics that can either be derived from examples coming from the same class of images or alternatively be determined by the sender and transmitted separately. The proposed method has been tested in a number of experiments and has been shown to improve both objective quality measures as well as subjective quality over a simpler non-adaptive method",1998,0, 1218,"IMAGE: a low cost, low power video processor for high quality motion estimation in MPEG-2 encoding","A low cost low power architecture dedicated to perform high quality motion estimation on MPEG-2 CCIR 601 sequences is presented. The chip implements a new high performance motion estimation algorithm based on a modified genetic search strategy that can be enhanced by tracking motion vectors through successive frames. When tested on several sequences at different bit rates, the algorithm delivers nearly full search quality (less than -0.1 dB for a ±75 H/V search range) while decreasing the processing power to 0.5%. The proposed MIMD processor exposes software programmability at different levels allowing the user to define his own searching strategy and combine multiple chips in Master-Slave configuration to meet the required processing power. Moreover post-processing of motion vectors, computation of dual prime vectors for low delay coding and selection of macroblock prediction mode can be programmed on this architecture. A low power strategy was also developed that mainly relies on an adaptive clocking scheme and low power caches implementation. The chip has been integrated in a 0.35 μm 5L CMOS technology on an area of 50 mm2",1998,0, 1219,An extensible system for source code analysis,"Constructing code analyzers may be costly and error prone if inadequate technologies and tools are used. If they are written in a conventional programming language, for instance, several thousand lines of code may be required even for relatively simple analyses. One way of facilitating the development of code analyzers is to define a very high-level domain-oriented language and implement an application generator that creates the analyzers from the specification of the analyses they are intended to perform. This paper presents a system for developing code analyzers that uses a database to store both a no-loss fine-grained intermediate representation and the results of the analyses. The system uses an algebraic representation, called F(p), as the user-visible intermediate representation. Analyzers are specified in a declarative language, called F(p)-l, which enables an analysis to be specified in the form of a traversal of an algebraic expression, with access to, and storage of, the database information the algebraic expression indices. A foreign language interface allows the analyzers to be embedded in C programs. 
This is useful for implementing the user interface of an analyzer, for example, or to facilitate interoperation of the generated analyzers with pre-existing tools. The paper evaluates the strengths and limitations of the proposed system, and compares it to other related approaches",1998,0, 1220,A new dependency model based testability analyzer,"This paper describes the development of a testability analysis tool called ADMA. The approach used in ADMA is based on use of dependency models. Dependency models are the basis of several important products for testability analysis. What makes ADMA different from existing testability analysis tools is its capability to use multiple analysis algorithms and its user-friendly report format. The rationale behind the decision to support multiple algorithms is that an algorithm may perform better for certain circuits than other algorithms. ADMA allows users to compare the results of different analyses side by side. ADMA provides concise reports of different formats for different types of users, such as executives, system designers, test engineers, and dependency modelers. Thus, users can focus on the aspects that are of interest to them. The subject paper gives an overview of the ADMA, emphasizing the fundamental advances represented by ADMA. The paper also describes how this tool integrates with the Automatic Dependency Model Generator, and with the Automatic Built-in Test generator. The paper does not focus on ADMA as a product, but more on how it generally extends current dependency model-based tools",1998,0, 1221,A virtual test-bench for analog circuit testability analysis and fault diagnosis,"Are fault simulation techniques feasible and effective for fault diagnosis of analog circuits? In this paper, we investigate these issues via a software tool which can generate testability metrics and diagnostic information for analog circuits represented by SPICE descriptions. This tool, termed the virtual test bench (VTB), incorporates three different simulation-based techniques for fault detection and isolation. The first method is based on the creation of fault-test dependency models, while the other two techniques employ machine learning principles based on the concepts of: (1) Restricted Coulomb Energy (RCE) Neural Networks, and (2) Learning Vector Quantization (LVQ). Whereas the output of the first method can be used for the traditional off-line diagnosis, the RCE and LVQ models render themselves more naturally to on-line monitoring, where measurement data from various sensors is continuously available. Since it is well known that analog faults and test measurements are affected by component parameter variations, we have also addressed the issues of robustness of our fault diagnosis schemes. Specifically, we have attempted to answer the questions regarding fixing of test measurement thresholds, obtaining the minimum number of Monte-Carlo runs required to stabilize the measurements and their deviations, and the effect of different thresholding schemes on the robustness of fault models. Although fault-simulation is a powerful technique for analog circuit testability analysis, its main shortcomings are the long simulation time, large volume of data and the fidelity of simulators in accurately modeling faults. 
We have plotted the simulation time and volume of data required for a range of circuit sizes to provide guidance on the feasibility and efficacy of this approach",1998,0, 1222,Using infrared thermography to detect age degradation in EPROM chips,"Dozens of circuit cards (about 5% of the total) failed to function after a recent upgrade of the digital Flight Control Computer (DFLCC), even though all of these cards operated correctly before the modifications. The shop called for the use of the infrared camera to assist in diagnosing and repairing these cards. What the Neural Radiant Energy Detection (NREDS) found was faulty and marginal chips. Of particular interest was the presence of degraded EPROM chips on the Program Memory (PM) cards. While it is known that EPROMs have a limited life cycle (in terms of number of total recordings), the failure has been further characterized. Thermography provides a quantification of the degradation in thermal performance as the EPROMs are reused. When the heat rates exceed a given value, the EPROM chips will not accept a program. Some of the failed chips exhibited enormous heat rates. What is clear from these results is that infrared thermography can be used to identify degrading EPROM chips for replacement before failures become imminent",1998,0, 1223,Can model-based and case-based expert systems operate together?,"In discussing diagnostic expert systems, there is an ongoing debate as to whether model-based systems are superior to case-based systems, or vice versa. Our experience has shown that there is no real need for debate because the two are not mutually exclusive and, to the contrary, complement each other. Current expert system technology is capable of combining two reasoning mechanisms, in addition to other mechanisms, into one integrated system. Depending on the knowledge available, and time and cost considerations, expert systems allow the user to decide the relative proportion of case-based to model-based reasoning he/she wishes to employ in any given situation. Diagnostic support software should be evaluated by two critical factor groups, Ben-Bassat, et al., 1992: (a) cost and time to deployment; and (b) accuracy, completeness and efficiency of the diagnostic process. The question, therefore, is: for a given budget of time and money, which approach will result in the best diagnostic performance: case-based, model-based, or a combination of the two. In this paper we will discuss the role of expert systems in combining model based and case-based reasoning to effect the most efficient user defined solution to diagnostic performance",1998,0, 1224,Downsizing the estimation of software quality: a small object-oriented case study,"It would be beneficial if the quality of a software system could be estimated early in its lifecycle. Although many estimation methods exist, it is questionable whether the experiences gained in large software projects can successfully be transferred to small scale software development. The paper presents results obtained from a number of projects developed in a small company with limited resources. The projects were analyzed in order to find out metrics which would be suitable for early estimation of quality. A number of possible models were evaluated and discussed",1998,0, 1225,Time triggered protocol-fault tolerant serial communications for real-time embedded systems,"This article discusses an advanced serial communications protocol which has been developed for applications which require highly dependable, or fault-tolerant operation. 
A typical such application would be automotive brake-by-wire or steer-by-wire systems where the system must be `fail-operational' as it is safety critical. `By-wire' systems transfer electrical signals down a wire instead of using a medium such as hydraulic fluid to transfer muscular energy. A conventional Antilock Braking System (ABS) is considered `fail-silent'; if a fault in the electronic control system is detected, the control system is switched off, leaving the manual hydraulic back-up still operational. If no such hydraulic back-up is available (as in the case of a `by-wire' system), the system must continue to function in the event of a fault occurring. The automotive industry has identified many good reasons to develop `by-wire' systems; reduction in parts count, removal of hydraulic system, improved maintenance, increased performance and functionality, increased passive safety by removal of mechanical linkages to passenger compartment, fuel economy, etc. Although there are several non-trivial challenges which must be overcome before `by-wire' systems become the mainstream, there are many compelling reasons for the technology to be introduced and so it is expected that they'll be overcome relatively quickly. The Time Triggered Protocol (TTP) overcomes the challenge of fault-tolerant distributed embedded processing",1998,0, 1226,Reengineering the class-an object oriented maintenance activity,"When an Incremental Approach is used to develop an object-oriented system, there is a risk that the class design will deteriorate in quality with each increment. This paper presents a technique for detecting classes that may be prone to deteriorate, or if deterioration has occurred assists with reengineering those classes. Experience with applying this technique to an industrial software development project is also discussed",1998,0, 1227,Software products quality improvement with a programming style guide and a measurement process,"Quality requirements of industrial software are very high and the correctness of the source code is of great importance for this purpose. For this reason, the objective of QUALIMET (ESSI Project 23982) is to improve the software development process, introducing a set of style norms to be used when coding in C++, defining a set of metrics to be used on the source code in order to check its quality and testing this approach in a baseline project. The paper presents the work that is being carried out under this project. It is expected that at the end of QUALIMET, the incorporation of these quality assurance techniques into the current methodology for developing software, will allow to have a complete methodology that guarantees software product quality, minimising the complexity of the code earlier in the programming process, yielding more maintainable and less error-prone software and improving the quality of the software and the satisfaction of customers",1998,0, 1228,Early measurement and improvement of software quality,"The paper combines relevant research in software reliability engineering and software measurement to develop an integrated approach for the early measurement and improvement of software quality. Recent research in these areas is extended 1) to select appropriate software measures based on a formal model, 2) to construct tree-based reliability models for early problem identification and quality improvement, and 3) to develop tools to support industrial applications. 
Initial results applying this approach to several IBM products demonstrated the applicability and effectiveness of this approach",1998,0, 1229,An IDDQ sensor for concurrent timing error detection,"Error control is a major concern in many computer systems, particularly those deployed in critical applications. Experience shows that most malfunctions during system operation are caused by transient faults, which often manifest themselves as abnormal signal delays that may result in violations of circuit element timing constraints. We present a novel complementary metal-oxide-semiconductor-based concurrent error-detection circuit that allows a flip-flop (or other timing-sensitive circuit element) to sense and signal when its data has been potentially corrupted by a setup or hold timing violation. Our circuit employs on-chip quiescent supply current evaluation to determine when the input changes in relation to a clock edge. Current through the detection circuit should be negligible while the input is stable. If the input changes too close to the clock time, the resulting switching transient current in the detection circuit exceeds a reference threshold at the time of the clock transition, and an error is flagged. We have designed, fabricated, and evaluated a test chip that shows that such an approach can be used to detect setup and hold time violations effectively in clocked circuit elements",1998,0,1046 1230,Adaptive connection admission control for mission critical real-time communication networks,"We report our work on adaptive connection admission control in real-time communication networks. Much of the existing work on connection admission control (CAC) specifies the QoS parameters as fixed values and does not exploit the dynamic fluctuations in resource availability. We take an innovative approach. First, we allow an application to specify QoS over a range, rather than fixed values. Second and more importantly, we design, analyze, and implement CAC modules that, based on QoS specified over a range, adaptively allocate system resources to connections. Delay analysis is an integral part of connection admission control. Our adaptive CAC uses an efficient delay analysis method to derive a closed form solution for end-to-end delay of messages in a connection. Our adaptive CAC improves the system's performance in terms of the probability of admitting connections and the QoS offered to the connections",1998,0, 1231,Intelligent AUV on-board health monitoring software (INDOS),"Two aspects of situation analysis are of special importance in the operation of autonomous underwater vehicles: maintenance of system integrity; and identification of danger situations and appropriate reactions, or more generally early detection of potential failures and provision of intelligent countermeasures. The efforts with regard to fault avoidance are centred on the two functions: system health and mission management. It is mainly in these domains that fault detection, diagnosis and recovery are necessary. Any degradation in the performance of the system has to be detected as soon as possible, otherwise the vehicle controller would operate on the basis of abnormal and non-realistic measurement data. Consequently, software development for AUVs at STN ATLAS is currently focused on intelligent behaviour. In the case of the STN ATLAS AUV development “C-Cat”
this complex is handled by the intelligent software INDOS",1998,0, 1232,"Instruction selection, resource allocation, and scheduling in the AVIV retargetable code generator","The AVIV retargetable code generator produces optimized machine code for target processors with different instruction set architectures. AVIV optimizes for minimum code size. Retargetable code generation requires the development of heuristic algorithms for instruction selection, resource allocation, and scheduling. AVIV addresses these code generation subproblems concurrently, whereas most current code generation systems address them sequentially. It accomplishes this by converting the input application to a graphical (Split-Node DAG) representation that specifies all possible ways of implementing the application on the target processor. The information embedded in this representation is then used to set up a heuristic branch-and-bound step that performs functional unit assignment, operation grouping, register bank allocation, and scheduling concurrently. While detailed register allocation is carried out as a second step, estimates of register requirements are generated during the first step to ensure high quality of the final assembly code. We show that near-optimal code can be generated for basic blocks for different architectures within reasonable amounts of CPU time. Our framework thus allows us to accurately evaluate the performance of different architectures on application code.",1998,0, 1233,TBRIM: decision support for validation/verification of requirements,"A decision support system has been developed that provides a structured approach to aid the validation/verification of requirement sets and enhance the quality of the resulting design by reducing risk. Additionally, an automated implementation of this approach has been completed utilizing the Advanced Integrated Requirements Engineering System (AIRES) software. Corroboration for the application of this decision support system has been attained from automated and manual testing performed on the system specification of a large-scale software development and integration project. The decision support approach, called the Test-Based Risk Identification Methodology (TBRIM), has been used to detect four major types of requirements risk potential: ambiguity, conflict, complexity, and technical factors. The TBRIM is based on principles of evidence testing and eliminative logic developed by Bacon and Cohen for making decisions under uncertainty. New techniques for detection of complexity and technical risks have been added to existing methods for identification of risk from ambiguous and conflicting requirements. Comparisons of the automated and manual test results on a risk category-by-category basis showed good correlation, and category-independent comparisons showed improvements that were consistent with expectations. Benefits from use of this decision support system include higher accuracy, consistency, and efficiency in the validation/verification of requirements. Knowledge of senior personnel can be captured to provide an expert system for less-experienced personnel",1998,0, 1234,Evaluation of fourteen desktop data mining tools,"Fourteen desktop data mining tools (or tool modules) ranging in price from US$75 to $25,000 (median <$1,000) were evaluated by four undergraduates inexperienced at data mining, a relatively experienced graduate student, and a professional data mining consultant. 
The tools ran under the Microsoft Windows 95, Microsoft Windows NT, or Macintosh System 7.5 operating systems, and employed decision trees, rule induction, neural networks, or polynomial networks to solve two binary classification problems, a multi-class classification problem, and a noiseless estimation problem. Twenty evaluation criteria and a standardized procedure for assessing tool qualities were developed and applied. The traits were collected in five categories: capability, learnability/usability, interoperability, flexibility, and accuracy. Performance in each of these categories was rated on a six-point ordinal scale, to summarize their relative strengths and weaknesses. This paper summarizes a lengthy technical report (Gomolka et al., 1998), which details the evaluation procedure and the scoring of all component criteria. This information should be useful to analysts selecting data mining tools to employ, as well as to developers aiming to produce better data mining products",1998,0, 1235,Performability analysis of an avionics-interface,"This paper reports on a case study in the quantitative analysis of safety-critical systems. Although formal methods are becoming more and more accepted in the development of such systems, usually they are used in the verification of qualitative properties. However, in many cases system safety also depends on the fact that certain quantitative requirements are met. Therefore we are interested in statements about quantitative properties, which can be achieved by a rigorous formal method. Our approach is to create a generalized stochastic Petri net (GSPN) model of the system and use it for the analysis of the system. The object of this case study is a fault-tolerant computer (FTC) constructed by Daimler Benz Aerospace (DASA) for the International Space Station (ISS). One part of the FTC is the Avionics Interface (AVI) which connects the FTC with a bus-system. We want to determine the data throughput that can be reached by the AVI and obtain informations about bus-usage-profiles which can cause the rejection of messages. Although such rejections are allowed according to the specification, they can cause a significant deterioration in the overall bus performance. In this article we describe a GSPN model of the AVI software and its environment. This model is used to make predictions about the AVI performability. Since a complete analytical solution of the model is not possible due to its complexity and the infinite state space, a simulation is used to analyse the crucial AVI behavior for several bus-usage-profiles.",1998,0, 1236,Automatic commissioning of air-conditioning plant,"The paper presents the results of work being undertaken as part of the UK contribution to an International Energy Agency research project (Annex 34) concerned with computer-aided fault detection and diagnosis in real buildings. The aim of the project is to develop and test software tools, which are based on existing methods of fault detection and diagnosis, for analysing the performance of heating, ventilating and air-conditioning systems. The development of a tool, which can automate the commissioning of air-handling units in large-scale air-conditioning systems, is considered in this paper. The presence of faults is detected by a fuzzy model-based fault diagnosis scheme which uses generic reference models to describe the behaviour of the plant when it is operating correctly and when one of a predefined set of faults has occurred. 
The paper discusses practical issues such as the integration of the commissioning tool and the building energy management system, the automatic reconfiguration of the control strategy to implement open-loop commission tests, the automatic generation of the test sequences, and the presentation of the diagnostic results to the user. Experimental results are presented that demonstrate the use of the tool to remotely commission the cooling coil of an air-handling unit in a commercial office building",1998,0, 1237,Integrated dynamic scheduling of hard and QoS degradable real-time tasks in multiprocessor systems,"Many time critical applications require predictable performance and tasks in these applications have deadlines to be met. For tasks with hard deadlines, a deadline miss can be catastrophic, while for QoS degradable tasks (soft real time tasks) timely approximate results of poorer quality or occasional deadline misses are acceptable. Imprecise computation and (m,k) firm guarantee are two workload models that quantify the trade off between schedulability and result quality. We propose dynamic scheduling algorithms for integrated scheduling of real time tasks, represented by these workload models, in multiprocessor systems. The algorithms aim at improving the schedulability of tasks by exploiting the properties of these models in QoS degradation. We also show how the proposed algorithms can be adapted for integrated scheduling of multimedia streams and hard real time tasks, and demonstrate their effectiveness in quantifying QoS degradation. Through simulation, we evaluate the performance of these algorithms using the metrics-success ratio (measure of schedulability) and quality. Our simulation results show that one of the proposed algorithms, multilevel degradation algorithm, outperforms the others in terms of both the performance metrics",1998,0, 1238,Congestion control with two level rate control for continuous media traffic in network without QoS guarantee,"Recently, it is required to transfer continuous media over networks without QoS guarantee. In these networks, network congestion will cause transmission delay variance which degrades the quality of continuous media itself. This paper proposes a new protocol using congestion control with two level rate control in the data transfer level and the coding level. It introduces a TCP-like congestion control mechanism to the rate control of the data transfer level, which can detect the QoS change quickly, and adjust the coding rate of continuous media with a time interval long enough for its quality. The performance evaluation through software simulation with multiplexing continuous media traffic and TCP traffic shows that the proposed protocol works effectively in the case of network congestion",1998,0, 1239,Assessment criteria for safety critical computer,This paper deals with the approach for assessing the safety critical computer-based railway systems. It presents the method used for planning and conducting an assessment. This paper provides a precise and workable assessment criteria of safety critical digital architecture. It lists the generic measures/techniques used to construct a safe architecture. The objective of this paper is to present the activities related to the assessment of safety digital architectures. It gives a set of criteria used for judging the compliance with the quality and the safety requirements. The criteria are drawn from standards requirements and best practices. 
These criteria have been applied to assess various case studies used in the European project named ACRuDA (Assessment and Certification Rules for Digital Architecture),1998,0, 1240,A new approach for the failure mode analysis of optical character recognition algorithms,"Concerns the imprecise method of OCR algorithm development and improvement. While trial-and-error techniques must be employed, the evaluation of the failed attempts is the most crucial, tedious, and error prone task. The paper presents a precise and efficient solution to this problem. The overlooked cause of the problem stems from the passive nature of the analysis performed. The proposed approach is highly interactive by employing a tight integration between the analyst and the OCR code they are attempting to find fault with or improve upon. This close coupling can be achieved through the use of special tools built around the code which provide a level of control and precision previously unavailable to developers. By establishing a virtual awareness of the operations being performed and additionally providing the means to hypothesize scenarios with each of the OCR algorithms, an improved state of failure mode analysis (FMA) can be reached. This new approach to algorithm advancement circumvents the current inferior solution to a daunting problem and will allow the future of optical character recognition to rise to a new level at an excitingly fast pace",1998,0, 1241,Machine learning algorithms for fault diagnosis in analog circuits,"In this paper, we investigate and systematically evaluate two machine learning algorithms for analog fault detection and isolation: (1) restricted Coulomb energy (RCE) neural network, and (2) learning vector quantization (LVQ). The RCE and LVQ models excel at recognition and classification types of problems. In order to evaluate the efficacy of the two learning algorithms, we have developed a software tool, termed Virtual Test-Bench (VTB), which generates diagnostic information for analog circuits represented by SPICE descriptions. The RCE and LVQ models render themselves more naturally to online monitoring, where measurement data from various sensors is continuously available. The effectiveness of RCE and LVQ is demonstrated on illustrative example circuits",1998,0, 1242,Advanced control methods in rolling applications,"Continuously increasing quality demands in recent years have led to a considerable reduction of the permissible thickness tolerances for hot as well as cold rolled steel. However, conventional control concepts cannot completely compensate for the effects of various disturbances resulting from the rolling process. The utilization of high performance computer systems for the automation allows the application of advanced control techniques to overcome existing dynamic limitations. Particularly if older mechanical equipment is utilized, which is typical for revamping projects, these new concepts allow a significant improvement of the quality. In this paper, different examples of such technologies are presented: a new concept for advanced thickness control in hot strip mills based on a feedforward strategy combined with a Kalman filter, a statistical online identification tool; to enhance existing tension control systems via looper arms in finishing mills; a massflow control concept; and a predictive control concept to compensate reel tension variations. The development of these methods was based on extensive simulations as a development and testing tool. 
In the meantime, the simulation results have been verified by practical application of the methods. Some results of these practical implementations are added to demonstrate the potential improvements possible with these concepts.",1998,0, 1243,A strategy for improving safety related software engineering standards,"There are many standards which are relevant for building safety- or mission-critical software systems. An effective standard is one that should help developers, assessors and users of such systems. For developers, the standard should help them build the system cost-effectively, and it should be clear what is required in order to conform to the standard. For assessors, it should be possible to objectively determine compliance to the standard. Users, and society at large, should have some assurance that a system developed to the standard has quantified risks and benefits. Unfortunately, the existing standards do not adequately fulfil any of these varied requirements. We explain why standards are the way they are, and then provide a strategy for improving them. Our approach is to evaluate standards on a number of key criteria that enable us to interpret the standard, identify its scope and check the ease with which it can be applied and checked. We also need to demonstrate that the use of a standard is likely either to deliver reliable and safe systems at an acceptable cost or to help predict reliability and safety accurately. Throughout the paper, we examine, by way of example, a specific standard for safety-critical systems (namely IEC 1508) and show how it can be improved by applying our strategy",1998,0, 1244,Formalising the software evaluation process,"Software process evaluation is an essential activity for improving software development in an organisation. Actually there is a need for a formalised method of evaluation that encompasses all the factors that affect software production. If a software process evaluation is to faithfully reflect the current status of an organisation, it must also consider the assessment of other factors. A formalised method of evaluation is presented that jointly assesses the three essential factors in software production: processes, technology and human resources. The aim behind this is to provide a solution to the main shortcomings of current software process evaluation methods: partiality of the evaluated factors and non-formalisation of the evaluation processes. Experimentation in various organisations has proven the adequacy of the proposed method for obtaining results that accurately portray the current status of the organisation's software process",1998,0, 1245,Industrial application of criticality predictions in software development,"Cost-effective software project management has the serious need to focus resources on areas where they have biggest impact. Criticality predictions are typically applied for finding out such high-impact areas in terms of most effective defect detection. The paper investigates two related questions in the context of real projects, namely the selection of the best classification technique and the use of its results in directing management decisions. Results from a current large-scale switching project are included to show practical benefits. 
Fuzzy classification yielded best results and is in a second study enhanced with genetic algorithms to improve overall prediction effectiveness",1998,0, 1246,Applying testability to reliability estimation,"The purpose of the article is to implement the idea of using testability to estimate software reliability. The basic steps involve estimating testability, evaluating how well software was written, and assessing the effectiveness of testing. Results from these steps along with operational profiles are used to estimate software reliability. The paper describes an application of this method to evaluate the reliability of a real software system of about 6000 lines of executable code and discusses the results of such an estimation. The results are also compared with those obtained by using two reliability growth models",1998,0, 1247,The personal software process: a cautionary case study,"In 1995, Watts Humphrey introduced the Personal Software Process in his book, A Discipline for Software Engineering (Addison Wesley Longman, Reading, Mass.). Programmers who use the PSP gather measurements related to their own work products and the process by which they were developed, then use these measures to drive changes to their development behavior. The PSP focuses on defect reduction and estimation improvement as the two primary goals of personal process improvement. Through individual collection and analysis of personal data, the PSP shows how individuals can implement empirically guided software process improvement. The full PSP curriculum leads practitioners through a sequence of seven personal processes. The first and most simple PSP process, PSP0, requires that practitioners track time and defect data using a Time Recording Log and Defect Recording Log, then fill out a detailed Project Summary Report. Later processes become more complicated, introducing size and time estimation, scheduling, and quality management practices such as defect density prediction and cost-of-quality analyses. After almost three years of teaching and using the PSP, we have experienced its educational benefits. As researchers, however, we have also uncovered evidence of certain limitations. We believe that awareness of these limitations can help improve appropriate adoption and evaluation of the method by industrial and academic practitioners",1998,0, 1248,Pragmatic study of parametric decomposition models for estimating software reliability growth,"Numerous stochastic models for the software failure phenomenon based on Nonhomogeneous Poisson Process (NHPP) have been proposed in the last three decades (1968-98). Although these models are quite helpful for software developers and have been widely applied at industrial organizations or research centers, we still need to do more work on examining/estimating the parameters of existing software reliability growth models (SRGMs). We investigate and account for three possible trends of software fault detection phenomena during the testing phase: increasing, decreasing and steady state. We present empirical results from quantitative studies on evaluating the fault detection process and develop a valid time-variable fault detection rate model which has the inherent flexibility of capturing a wide range of possible fault detection trends. The applicability of the proposed model and the related methods of parametric decomposition are illustrated through several real data sets from different software projects. 
Our evaluation results show that the analytic parametric decomposition approach for SRGM has a fairly accurate prediction capability. In addition, the testing effort control problem based on the proposed model is also demonstrated",1998,0, 1249,The lognormal distribution of software failure rates: origin and evidence,"An understanding of the distribution of software failure rates and its origin will strengthen the relation of software reliability engineering both to other aspects of software engineering and to the wider field of reliability engineering. The paper proposes that the distribution of failure rates for faults in software systems tends to be lognormal. Many successful analytical models of software behavior share assumptions that suggest that the distribution of software event rates will asymptotically approach lognormal. The lognormal distribution has its origin in the complexity, that is the depth of conditionals, of software systems and the fact that event rates are determined by an essentially multiplicative process. The central limit theorem links these properties to the lognormal: just as the normal distribution arises when summing many random terms, the lognormal distribution arises when the value of a variable is determined by the multiplication of many random factors. Because the distribution of event rates tends to be lognormal and faults are just a random subset or sample of the events, the distribution of the failure rates of the faults also tends to be lognormal. Failure rate distributions observed by other researchers in twelve repetitive-run experiments and nine sets of field failure data are analyzed and demonstrated to support the lognormal hypothesis. The ability of the lognormal to fit these empirical failure rate distributions is superior to that of the gamma distribution (the basis of the Gamma/EOS family of reliability growth models) or a Power-law model",1998,0, 1250,Resource-constrained non-operational testing of software,"In “classical” testing approaches, “learning” is said to occur if testers dynamically improve the efficiency of their testing as they progress through a testing phase. However, the pressures of modern business and software development practices seem to favor an approach to testing which is very akin to a “sampling without replacement” of a relatively limited number of pre-determined structures and functions conducted under significant schedule and resource constraints. The primary driver is often the desire to “cover” ONLY previously “untested” functions, operations or code constructs, and to meet milestones. We develop and evaluate a model that describes the fault detection and removal process in such an environment. Results indicate that in environments where “coverage” based testing is promoted, but resources and decisions are constrained, very little dynamic “learning” takes place, and that it may be an artifact of program structure or of the test case sequencing policy",1998,0, 1251,Reliability simulation of component-based software systems,"Prevalent Markovian and semi Markovian methods to predict the reliability and performance of component based heterogeneous systems suffer from several limitations: they are subject to an intractably large state space for more complex scenarios, and they cannot take into account the influence of various parameters such as reliability growth of individual components, dependencies among components, etc., in a single model. 
Discrete event simulation offers an alternative to analytical models as it can capture a detailed system structure, and can be used to study the influence of different factors separately as well as in a combined fashion on dependability measures. We demonstrate the flexibility offered by discrete event simulation to analyze such complex systems through two case studies, one of a terminating application, and the other of a real time application with feedback control. We simulate the failure behavior of the terminating application with instantaneous as well as explicit repair. We also study the effect of having fault tolerant configurations for some of the components on the failure behavior of the application. In the second case of the real time application, we initially simulate the failure behavior of a single version taking into account its reliability growth. We also study the failure behavior of three fault tolerant systems: DRB, NVP and NSCP which are built from the individual versions of the real time application. Results demonstrate the flexibility offered by simulation to study the influence of various factors on the failure behavior of the applications for single as well as fault tolerant configurations",1998,0, 1252,Software reliability analysis incorporating fault detection and debugging activities,"The software reliability measurement problem can be approached by obtaining the estimates of the residual number of faults in the software. Traditional black box based approaches to software reliability modeling assume that the debugging process is instantaneous and perfect. The estimates of the remaining number of faults, and hence reliability, are based on these oversimplified assumptions and they tend to be optimistic. We propose a framework relying on rate based simulation technique for incorporating explicit debugging activities along with the possibility of imperfect debugging into the black box software reliability models. We present various debugging policies and analyze the effect of these policies on the residual number of faults in the software. In addition, we propose a methodology to compute the reliability of the software, taking into account explicit debugging activities. An economic cost model to determine the optimal software release criteria in the presence of debugging activities is described. Finally, we present the high level architecture of a tool, called SRSIM, for the purpose of automating the simulation techniques presented",1998,0, 1253,Testing the robustness of Windows NT software,"To date, most studies on the robustness of operating system software have focused on Unix based systems. The paper develops a methodology and architecture for performing intelligent black box analysis of software that runs on the Windows NT platform. The goals of the research are three fold: first, to develop intelligent robustness testing techniques for commercial Off-The-Shelf (COTS) software; second, to benchmark the robustness of NT software in handling anomalous events; and finally, to identify robustness gaps to permit fortification for fault tolerance. The random and intelligent data design library environment (RIDDLE) is a tool for analyzing operating system software, system utilities, desktop applications, component based software, and network services. RIDDLE was used to assess the robustness of native Windows NT system utilities as well as Win32 ports of the GNU utilities. 
Experimental results comparing the relative performance of the ported utilities versus the native utilities are presented",1998,0, 1254,Studying the effects of code inspection and structural testing on software quality,"The paper contributes a controlled experiment to characterize the effects of code inspection and structural testing on software quality. Twenty subjects performed sequentially code inspection and structural testing using different coverage values as test criteria on a C-code module. The results of this experiment show that inspection significantly outperforms the defect detection effectiveness of structural testing. Furthermore, the experimental results indicate little evidence to support the hypothesis that structural testing detects different defects, that is, defects of a particular class, that were missed by inspection and vice versa. These findings suggest that inspection and structural testing do not complement each other well. Since 39 percent (on average) of the defects were not detected at all, it might be more valuable to apply inspection, together with other testing techniques, such as boundary value analysis, to achieve a better defect coverage. We are aware that a single experiment has many limitations and often does not provide conclusive evidence. Hence, we consider this experiment a starting point and encourage other researchers to investigate the optimal mix of defect detection techniques",1998,0, 1255,Software diagnosability,"This paper is concerned with diagnosability, its definition and the axiomatization of its expected behaviour. The intuitive expected behaviour of diagnosability is defined relative to basic operations that are applicable on software designs. A diagnosability measurement is proposed which is consistent with the stated axioms. The diagnosability metric is based on an analysis of the design structure: fault location effort and precision are measured for a given testing context. Compromises between global test difficulty and diagnostic precision are illustrated on part of a data-flow software design. Throughout the paper, we develop a case study",1998,0, 1256,Applying design metrics to a large-scale software system,"Three metrics were used to extract design information from existing code to identify structural stress points in a software system being analyzed: Di, an internal design metric which incorporates factors related to a module's internal structure; De , an external design metric which focuses on a module's external relationships to other modules in the software system; and D(G), a composite design metric which is the sum of Di and De . Since stress point modules generally have a high probability for being fault-prone, project managers can use the information to determine where additional testing effort should be spent and assign these modules to more experienced programmers if modifications are needed. To make the analysis more accurate and efficient, a design metrics analyzer (χMetrics) was implemented. We conducted experiments using χMetrics on part of a distributed software system, written in C, with a client-server architecture, and identified a small percentage of its functions as good candidates for fault proneness. Files containing these functions were then validated by the real defect data collected from a recent major release to its next release for their fault proneness. 
Normalized metrics values were also computed by dividing the Di, De, and D(G) values by the corresponding function size determined by non-blank and non-comment lines of code to study the possible impact of function size on these metrics. Results indicate that function size has little impact on the predictive quality of our design metrics in identifying fault-prone functions",1998,0, 1257,A methodology for detection and estimation of software aging,"The phenomenon of software aging refers to the accumulation of errors during the execution of the software which eventually results in its crash/hang failure. A gradual performance degradation may also accompany software aging. Pro-active fault management techniques such as “software rejuvenation” (Y. Huang et al., 1995) may be used to counteract aging if it exists. We propose a methodology for detection and estimation of aging in the UNIX operating system. First, we present the design and implementation of an SNMP based, distributed monitoring tool used to collect operating system resource usage and system activity data at regular intervals, from networked UNIX workstations. Statistical trend detection techniques are applied to this data to detect/validate the existence of aging. For quantifying the effect of aging in operating system resources, we propose a metric: “estimated time to exhaustion”, which is calculated using well known slope estimation techniques. Although the distributed data collection tool is specific to UNIX, the statistical techniques can be used for detection and estimation of aging in other software as well",1998,0, 1258,Determining fault insertion rates for evolving software systems,"In developing a software system, we would like to be able to estimate the way in which the fault content changes during its development, as well as determining the locations having the highest concentration of faults. In the phases prior to test, however, there may be very little direct information regarding the number and location of faults. This lack of direct information requires the development of a fault surrogate from which the number of faults and their location can be estimated. We develop a fault surrogate based on changes in relative complexity, a synthetic measure which has been successfully used as a fault surrogate in previous work. We show that changes in the relative complexity can be used to estimate the rates at which faults are inserted into a system between successive revisions. These rates can be used to continuously monitor the total number of faults inserted into a system, the residual fault content, and identify those portions of a system requiring the application of additional fault detection and removal resources",1998,0, 1259,Exploring defect data from development and customer usage on software modules over multiple releases,"Traditional defect analyses of software modules have focused on either identifying error prone modules or predicting the number of faults in a module, based on a set of module attributes such as complexity, lines of code, etc. In contrast to these metrics based modeling studies, the paper explores the relationship of the number of faults per module to the prior history of the module. Specifically, we examine the relationship between: (a) the faults discovered during development of a product release and those escaped to the field; and (b) faults in the current release and faults in previous releases.
Based on the actual data from four releases of a commercial application product consisting of several thousand modules, we show that: modules with more defects in development have a higher probability of failure in the field; there is a way to assess the relative quality of software releases without detailed information on the exact release content or code size; and it is sufficient to consider just the previous release for predicting the number of defects during development or field. These results can be used to improve the prediction of quality at the module level of future releases based on the past history",1998,0, 1260,The repeatability of code defect classifications,"Counts of defects found during the various defect detection activities in software projects and their classification provide a basis for product quality evaluation and process improvement. However, since defect classifications are subjective, it is necessary to ensure that they are repeatable (i.e., that the classification is not dependent on the individual). We evaluate a slight adaptation of a commonly used defect classification scheme that has been applied in IBM's Orthogonal Defect Classification work, and in the SEI's Personal Software Process. The evaluation utilizes the Kappa statistic. We use defect data from code inspections conducted during a development project. Our results indicate that the classification scheme is in general repeatable. We further evaluate classes of defects to find out if confusion between some categories is more common, and suggest a potential improvement to the scheme",1998,0, 1261,Predicting the order of fault-prone modules in legacy software,"A goal of software quality modeling is to recommend modules for reliability enhancement early enough to prevent poor quality. Reliability improvement techniques include more rigorous design and code reviews and more extensive testing. This paper introduces the concept of module-order models for guiding software reliability enhancement and provides an empirical case study that shows how such models can be used. A module-order model predicts the rank-order of modules according to a quantitative quality factor. The case study examined a large legacy telecommunications system. We found that the amount of new and changed code due to the development of a release can be a better predictor of code churn due to subsequent bug fixes, compared to software product metrics alone. In such projects, process-related measures derived from configuration management data may be adequate for software quality modeling, without resorting to software product measurement tools and expertise",1998,0, 1262,On the test allocations for the best lower bound performance of partition testing,"A partition testing strategy divides the program's input domain into subdomains according to a partitioning scheme, and selects test cases from each subdomain according to a test allocation scheme. Previous studies have shown that partition testing strategies can be very effective or very ineffective in detecting faults, depending on both the partitioning scheme and the test allocation scheme. Given a partitioning scheme, the maximin criterion chooses a test allocation scheme that will maximally improve the lower bound performance of the partition testing strategy. In an earlier work, we have proved that the Basic Maximin Algorithm can generate such a test allocation scheme.
In this paper, we derive further properties of the Basic Maximin Algorithm and present the Complete Maximin Algorithm that generates all possible test allocation schemes that satisfy the maximin criterion",1998,0, 1263,Metric selection for effort assessment in multimedia systems development,"This paper describes ongoing research directed at formulating a set of appropriate metrics for assessing effort requirements for multimedia systems development. An exploratory investigation of the factors that are considered by industry to be influential in determining development effort is presented. This work incorporates the use of a GQM framework to assist the metric selection process from a literature basis, followed by an industry questionnaire. The results provide some useful insights into contemporary project management practices in relation to multimedia systems",1998,0, 1264,Reliability modeling of freely-available Internet-distributed software,"A wealth of freely-available software is available on the Internet; however, developers are wary of its reuse because it is assumed to be of poor quality. Reliability is one way that software quality can be measured, but it requires metrics data that are typically not maintained for freely-available software. A technique is presented which allows reliability data to be extracted from available data, and is validated by showing that the data can be used to fit a logarithmic reliability model. By modeling the reliability, estimates of overall quality, remaining faults, and release times can be predicted for the software",1998,0, 1265,Applications of measurement in product-focused process improvement: a comparative industrial case study,"In ESPRIT project PROFES, measurement according to the Goal/Question/Metric (GQM) approach is conducted in industrial software projects at Drager Medical Technology, Ericsson Finland, and Schlumberger Retail Petroleum Systems. A comparative case study investigates three different ways of applying GQM in product-focused process improvement: long-term GQM measurement programmes at the application sites to better understand and improve software products and processes; GQM-based construction and validation of product/process dependency models, which describe the process impact on software quality; and cost/benefit investigation of the PROFES improvement methodology using GQM for (meta-) analysis of improvement programmes. This paper outlines how GQM is applied for these three purposes",1998,0, 1266,Experiences with program static analysis,"Conventionally, software quality has been measured mainly by the number of test items, the test coverage, and the number of faults in the test phase. This approach of relying heavily on testing is not satisfactory from a quality assurance viewpoint. Since software is becoming larger and more complex, quality must be assured from the early phases, such as requirements analysis, design and coding. Code reviews are effective to build in software quality from the coding phase. However, for a large-scale software development, there are limitations in covering all the programs. The advantage of using static analysis tools is the capability to detect fault-prone programs easily and automatically. 
We describe the effective use of a static analysis tool, and show the effectiveness of the static analysis technique",1998,0, 1267,Experimenting with error abstraction in requirements documents,"In previous experiments we showed that the Perspective-Based Reading (PBR) family of defect detection techniques was effective at detecting faults in requirements documents in some contexts. Experiences from these studies indicate that requirements faults are very difficult to define, classify and quantify. In order to address these difficulties, we present an empirical study whose main purpose is to investigate whether defect detection in requirements documents can be improved by focusing on the errors (i.e., underlying human misconceptions) in a document rather than the individual faults that they cause. In the context of a controlled experiment, we assess both benefits and costs of the process of abstracting errors from faults in requirements documents",1998,0, 1268,Getting a handle on the fault injection process: validation of measurement tools,"In any manufacturing environment, the fault injection rate might be considered one of the most meaningful criterion to evaluate the goodness of the development process. In our field, the estimates of such a rate are often oversimplified or misunderstood generating unrealistic expectations on their prediction power. The computation of fault injection rates in software development requires accurate and consistent measurement, which translates into demanding parallel efforts for the development organization. This paper presents the techniques and mechanisms that can be implemented in a software development organization to provide a consistent method of anticipating fault content and structural evolution across multiple projects over time. The initial estimates of fault insertion rates can serve as a baseline against which future projects can be compared to determine whether progress is being made in reducing the fault insertion rate, and to identify those development techniques that seem to provide the greatest reduction",1998,0, 1269,A cohesion measure for classes in object-oriented systems,"Classes are the fundamental concepts in the object-oriented paradigm. They are the basic units of object-oriented programs, and serve as the units of encapsulation, which promotes the modifiability and the reusability of them. In order to take full advantage of the desirable features provided by classes, such as data abstraction and encapsulation, classes should be designed to have good quality. Because object-oriented systems are developed by heavily reusing the existing classes, the classes of poor quality can be a serious obstacle to the development of systems. We define a new cohesion measure for assessing the quality of classes. Our approach is based on the observations on the salient natures of classes which have not been considered in the previous approaches. A Most Cohesive Component (MCC) is introduced as the most cohesive form of a class. We believe that the cohesion of a class depends on the connectivity of itself and its constituent components. We propose the connectivity factor to indicate the degree of the connectivity among the members of a class, and the structure factor to take into account the cohesiveness of its constituent components. Consequently, the cohesion of a class is defined as the product of the connectivity factor and the structure factor. 
This cohesion measure indicates how closely a class approaches MCC; the more closely a class approaches MCC, the greater cohesion the class has",1998,0, 1270,Applying software metrics to formal specifications: a cognitive approach,"It is generally accepted that failure to reason correctly during the early stages of software development causes developers to make incorrect decisions which can lead to the introduction of faults or anomalies in systems. Most key development decisions are usually made at the early system specification stage of a software project and developers do not receive feedback on their accuracy until near its completion. Software metrics are generally aimed at the coding or testing stages of development, however, when the repercussions of erroneous work have already been incurred. This paper presents a tentative model for predicting those parts of formal specifications which are most likely to admit erroneous inferences, in order that potential sources of human error may be reduced. The empirical data populating the model was generated during a series of cognitive experiments aimed at identifying linguistic properties of the Z notation which are prone to admit non-logical reasoning errors and biases in trained users",1998,0, 1271,An integrated process and product model,"The relationship between product quality and process capability and maturity has been recognized as a major issue in software engineering based on the premise that improvements in process will lead to higher quality products. To this end, we have been investigating an important facet of process capability, stability, as defined and evaluated by trend, change, and shape metrics, across releases and within a release. Our integration of product and process measurement and evaluation serves the dual purpose of using metrics to assess and predict reliability and risk and concurrently using these metrics for process stability evaluation. We use the NASA Space Shuttle flight software to illustrate our approach",1998,0, 1272,The predictive validity criterion for evaluating binary classifiers,"The development of binary classifiers to identify highly error-prone or high maintenance cost components is increasing in the software engineering quality modeling literature and in practice. One approach for evaluating these classifiers is to determine their ability to predict the classes of unseen cases, i.e., predictive validity. A chi-square statistical test has been frequently used to evaluate predictive validity. We illustrate that this test has a number of disadvantages. The disadvantages include a difficulty in using the results of the test to determine whether a classifier is a good predictor, demonstrated through a number of examples, and a rather conservative Type I error rate, demonstrated through a Monte Carlo simulation. We present an alternative test that has been used in the social sciences for evaluating agreement with a “gold standard”. The use of this alternative test is illustrated in practice by developing a classification model to predict maintenance effort for an object oriented system, and evaluating its predictive validity on data from a second object-oriented system in the same environment",1998,0, 1273,On the validation of relative test complexity for object-oriented code,"In order to help software managers and engineers allocate test resources in object-oriented software development, the Relative Test Complexity (RTC) metric was proposed and applied to an industrial system developed in C++.
The initial validation by an engineering quality circle supported the validity of the RTC metric. In this paper, the RTC metric is further validated using actual fault data. It is shown that the RTC metric is a better surrogate of faults than Relative Complexity (RC), Harrison's Macro-Micro Complexity (MMC), McCabe's cyclomatic complexity (V(g)), Halstead's effort (E), and simple measures of size like LOC. Finally, the RTC and RC models are applied to change data with results that indicate that the RTC and RC metrics can be used to predict source code turmoil",1998,0, 1274,Using classification trees for software quality models: lessons learned,"High software reliability is an important attribute of high-assurance systems. Software quality models yield timely predictions of reliability indicators on a module-by-module basis, enabling one to focus on finding faults early in development. This paper introduces the CART (Classification And Regression Trees) algorithm to practitioners in high-assurance systems engineering. This paper presents practical lessons learned in building classification trees for software quality modeling, including an innovative way to control the balance between misclassification rates. A case study of a very large telecommunications system used CART to build software quality models. The models predicted whether or not modules would have faults discovered by customers, based on various sets of software product and process metrics as independent variables. We found that a model based on two software product metrics had an accuracy that was comparable to a model based on 40 product and process metrics",1998,0, 1275,The application of fuzzy enhanced case-based reasoning for identifying fault-prone modules,"As highly reliable software is becoming an essential ingredient in many systems, the process of assuring reliability can be a time-consuming, costly process. One way to improve the efficiency of the quality assurance process is to target reliability enhancement activities to those modules that are likely to have the most problems. Within the field of software engineering, much research has been performed to allow developers to identify fault-prone modules within a project. Software quality classification models can select the modules that are the most likely to contain faults so that reliability enhancement activities can be performed to lower the occurrences of software faults and errors. This paper introduces fuzzy logic combined with case-based reasoning (CBR) to determine fault-prone modules given a set of software metrics. Combining these two techniques promises more robust, flexible and accurate models. In this paper, we describe this approach, apply it in a real-world case study and discuss the results. The case study applied this approach to software quality modeling using data from a military command, control and communications (C3) system. The fuzzy CBR model had an overall classification accuracy of more than 85%. This paper also discusses possible improvements and enhancements to the initial model that can be explored in the future",1998,0, 1276,Estimating the number of residual defects [in software],"The number of residual defects is one of the most important factors that allows one to decide if a piece of software is ready to be released. In theory, one can find all the defects and count them. However, it is impossible to find all the defects within a reasonable amount of time. 
Estimating the defect density can become difficult for high-reliability software, since the remaining defects can be extremely hard to test for. One possible way is to apply the exponential software reliability growth model (SRGM), and thus estimate the total number of defects present at the beginning of testing. In this paper, we show the problems with this approach and present a new approach based on software test coverage. Test coverage directly measures the thoroughness of testing, avoiding the problem of variations of test effectiveness. We apply this model to actual test data in order to project the residual number of defects. This method results in estimates that are more stable than the existing methods. The method is also easier to understand, and the convergence to the estimate can be observed visually",1998,0, 1277,Error and failure analysis of a UNIX server,"This paper presents a measurement-based dependability study of a UNIX server. The event logs of a UNIX server are collected to form the dependability data basis. Message logs spanning approximately eleven months were collected for this study. The event log data are classified and categorized to calculate parameters such as MTBF and availability. Component analysis is also performed to identify modules that are prone to errors in the system. Next, the system error activity preceding each system failure is analyzed to identify error patterns that may be precursors of the observed failure events. Lastly, the error/failure results from the measurement are reviewed in the perspective of the fault/error assumptions made in several popular fault injection studies",1998,0, 1278,Enhanced methods for the collection of on-scene information by emergency medical system providers,"Emergency Medical Systems (EMS) assess their cardiac care performance and effectiveness by measuring how quickly they respond to arrest and infarction. Time intervals defined by standard response models such as the Utstein Style for Cardiac Arrest Reporting are recorded and tracked to identify delays in patient treatment which can impact patient outcomes. Unfortunately, most event times, as entered into the responder's run reports, are inaccurate, since they are estimated retrospectively after the incident is over. A data collection device that can accurately gather information included in the run report is proposed. The device must be able to accept information about the incident context, record the time of key events, and capture the narrative summary without interfering with patient care. When the incident is over, it should import information from computer dispatch systems and diagnostic or therapeutic medical devices used on-scene. Information must be exported for remote diagnoses and for inclusion in the computerized patient care record. Current voice recognition systems which are able to recognize and time-stamp short phrases associated with on-scene events, allow a flow sheet to be composed as the incident progresses. Critical data and times can thus be accurately collected without the need for responders to touch a screen or keypad. These systems can also capture the clinical narrative by transcribing verbal reports as they are delivered to emergency department physicians.
A Windows application, running on a waistband computer and suitable for use by EMS responders, has been developed based on these principles and technologies, and is now being evaluated in field use",1998,0, 1279,A comparison of efficient dot throwing and shape shifting extra material critical area estimation,"The paper reports a comparison of efficient methods of estimating extra material critical area of any angled IC layout using two different techniques, shape shifting and dot throwing. Both techniques are implemented using the same polygon libraries and are optimised to make best use of the library features. This allows an accurate comparison of the techniques with minimal dependence on the specific implementation. The results presented here suggest that for general yield prediction an efficient dot throwing implementation is best suited for layouts of any significant size (designs greater than 1 MB of layout data). However, the shape shifting technique is considerably more efficient in the analysis of smaller circuits but does not scale well to larger designs",1998,0, 1280,ASSISTing exit decisions in software inspection,"Software inspection is a valuable technique for detecting defects in the products of software development. One avenue of research within inspection concerns the development of computer support. It is hoped that such support will provide even greater benefits when applying inspection. A number of prototype systems have been developed by researchers, yet these suffer from some fundamental limitations. One of the most serious of these concerns is the lack of facilities to monitor the process, and to provide the moderator with quantitative information on the performance of the process. The paper begins by briefly outlining the measurement component that has been introduced into the system, and especially discusses an approach to estimating the effectiveness of the current process",1998,0, 1281,Automated knowledge acquisition and application for software development projects,"The application of empirical knowledge about the environment-dependent software development process is mostly based on heuristics. In this paper, we show how one can express these heuristics by using a tailored fuzzy expert system. Metrics are used as input, enabling a prediction for a related quality factor like correctness, defined as the inverse of criticality or error-proneness. By using genetic algorithms, we are able to extract the complete fuzzy expert system out of the available data of a finished project. We describe its application for the next project executed in the same development environment. As an example, we use complexity metrics which are used to predict the error-proneness of software modules. The feasibility and effectiveness of the approach is demonstrated with results from large switching system software projects. We present a summary of the lessons learned and give our ideas about further applications of the approach",1998,0, 1282,Static and dynamic metrics for effective object clustering,"In client/server and distributed applications, the quality of object clustering plays a key role in determining the overall performance of the system. Therefore, a set of objects with higher coupling should be grouped into a single cluster so that each cluster can have a higher cohesion. As a result, the overall message traffic among objects can be greatly minimized. 
In addition, it should also be considered in CORBA-based applications that clusters themselves can evolve due to the dynamic object migration feature of CORBA. Hence, dynamic metrics as well as static metrics should be developed and used in order to measure the dynamic message traffic and to tune up the system performance effectively. Various object-oriented design metrics proposed mainly deal with static coupling and cohesion, and they only consider the basic class relationships such as association, inheritance, and composition. Therefore, these metrics are not appropriate for measuring the traffic load of object messages which is closely related to the system performance. In this paper, we propose a set of metrics which considers the relevant weights on the various class relationships and estimates the static and dynamic message flow among the objects at the detailed level of member functions. By applying these metrics along with OMT or UML, we believe that clusters can be defined more efficiently and systematically, yielding high performance distributed applications",1998,0, 1283,A fault-tolerant dynamic scheduling algorithm for multiprocessor real-time systems and its analysis,"Many time-critical applications require dynamic scheduling with predictable performance. Tasks corresponding to these applications have deadlines to be met despite the presence of faults. In this paper, we propose an algorithm to dynamically schedule arriving real-time tasks with resource and fault-tolerant requirements on to multiprocessor systems. The tasks are assumed to be nonpreemptable and each task has two copies (versions) which are mutually excluded in space, as well as in time in the schedule, to handle permanent processor failures and to obtain better performance, respectively. Our algorithm can tolerate more than one fault at a time, and employs performance improving techniques such as 1) distance concept which decides the relative position of the two copies of a task in the task queue, 2) flexible backup overloading, which introduces a trade-off between degree of fault tolerance and performance, and 3) resource reclaiming, which reclaims resources both from deallocated backups and early completing tasks. We quantify, through simulation studies, the effectiveness of each of these techniques in improving the guarantee ratio, which is defined as the percentage of total tasks, arrived in the system, whose deadlines are met. Also, we compare through simulation studies the performance of our algorithm with a best known algorithm for the problem, and show analytically the importance of distance parameter in fault-tolerant dynamic scheduling in multiprocessor real-time systems",1998,0, 1284,"Identification of green, yellow and red legacy components","Software systems are often getting older than expected, and it is a challenge to try to make sure that they grow old gracefully. This implies that methods are needed to ensure that system components are possible to maintain. In this paper, the need to investigate, classify and study software components is emphasized. A classification method is proposed. It is based on classifying the software components into green, yellow and red components. The classification scheme is complemented with a discussion of suitable models to identify problematic components. The scheme and the models are illustrated in a minor case study to highlight the opportunities.
The long term objective of the work is to define methods, models and metrics which are suitable to use in order to identify software components which have to be taken care of through either tailored processes (e.g. additional focus on verification and validation) or reengineering. The case study indicates that the long term objective is realistic and worthwhile.",1998,0, 1285,Can a software quality model hit a moving target?,This paper examines factors that make accurate quality modeling of an evolving software system challenging. The context of our discussion is development of a sequence of software releases. Our goal is to predict software quality of a release early enough to make significant quality improvements prior to release,1998,0, 1286,Testing of analogue and mixed-signal circuits by using supply current measurements,"A complete (hardware and software) system has been designed and implemented, which can measure the supply current (IPS) of analogue and mixed-mode circuits and detect or diagnose faults, based on RMS and spectrum calculations of the IPS and the corresponding fault dictionary. The system is applied to testing and diagnosis of various analogue and mixed-signal circuits, showing very encouraging results",1998,0, 1287,An efficient algorithm for causal message logging,"Causal message logging has many good properties such as nonblocking message logging and no rollback propagation. However, it requires a large amount of information to be piggybacked on each message, which may incur severe performance degradation. This paper presents an efficient causal logging algorithm based on the new message log structure, LogOn, which represents the causal interprocess dependency relation with much smaller overhead compared to the existing algorithms. The proposed algorithm is efficient in the sense that it requires no additional information other than LogOn to be carried in each message, while the other algorithms require extra information other than the message log, to eliminate the duplicates in log entries. Moreover, in those algorithms, as more extra information is added into the message, more duplicates can be detected. However, the proposed algorithm achieves the same degree of efficiency using only the message log carried in each message, without any extra information",1998,0, 1288,Dependability analysis of a cache-based RAID system via fast distributed simulation,"We propose a new speculation-based, distributed simulation method for dependability analysis of complex systems in which a detailed functional simulation of a system component is essential to obtain an accurate overall result. Our target example is a networked cluster with compute nodes and a single I/O node. Accurate system dependability characterization is achieved via a combination of detailed simulation of the I/O subsystem behavior in the presence of faults and more abstract simulation of the compute nodes and the switching network. Dependability measures like error coverage, error detection latency and performance measures such as delivery time in the presence of faults are obtained. The approach is implemented on a network of workstations, and experimental results show significant improvements over a Time Warp simulator for the same model",1998,0, 1289,"The Chameleon infrastructure for adaptive, software implemented fault tolerance","This paper presents Chameleon, an adaptive software infrastructure for supporting different levels of availability requirements in a heterogeneous networked environment.
Chameleon provides dependability through the use of ARMORs-Adaptive, Reconfigurable, and Mobile Objects for Reliability. Three broad classes of ARMORs are defined: Managers, Daemons, and Common ARMORs. Key concepts that support adaptive fault tolerance include the construction of fault tolerance execution strategies from a comprehensive set of ARMORs, the creation of ARMORs from a library of reusable basic building blocks, the dynamic adaptation to changing fault tolerance requirements, and the ability to detect and recover from errors in applications and in ARMORs",1998,0, 1290,Two branch predictor schemes for reduction of misprediction rate in conditions of frequent context switches,"Branch misprediction is one of the important causes of performance degradation in superpipelined and superscalar processors. Most of the existing branch predictors, based on the exploiting of branch history, suffer from prediction accuracy decrease caused by frequent context switches. The goal of this research is to reduce misprediction rate (MPR) when the context switches are frequent, and not to increase the MPR when the context switches are relatively rare. We propose two independent, but closely related modifications of global adaptive prediction mechanisms: first, to flush only the branch history register (BHR) at context switch, instead of reinitialization of the whole predictor, and second, to use two separated BHRs, one for user and one for kernel branches, instead of one global history register. We have evaluated the ideas by measurements on real traces from IBS (Instruction Benchmark Set), and have shown that both modifications reduce MPR at negligible hardware cost",1998,0, 1291,Off-line diagnosis of parallel systems,"This paper presents an off-line diagnosis strategy for parallel message-passing systems. This strategy, called host-diagnosis, allows an external observer, i.e. the host system, to perform centralized diagnosis of the system state, given results of distributed tests performed among the system processors. Three algorithms that use the host-diagnosis strategy are proposed. The performance of the three algorithms are evaluated and compared to those of a classic distributed self-diagnosis algorithm. The obtained results show an interesting behaviour of the host-diagnosis algorithms in comparison with the self-diagnosis one",1998,0, 1292,Cost impacts of real-time non-intrusive (RTNI) monitoring technology to real-time embedded systems,"The use of RTNI monitoring has had an impact on life cycle costs of existing programs through a reduction in debug time. Other areas in which RTNI monitoring can provide potential benefits to future programs are through the use of increased dynamic testing and the sharing of test time among more engineers. There are a number of areas in which software life cycle costs are impacted by various cost drivers. To determine which areas were affected by the use of RTNI monitoring, a panel of expert users of RTNI monitoring was created using a form of the Nominal Group Technique methodology to achieve group consensus. The group concluded that the areas described above were most important to their programs. However, merely using RTNI monitoring alone may not have an impact on future programs; management commitment to integrate RTNI monitoring into test programs is also necessary for success",1998,0, 1293,An efficient random-like testing,"This paper introduces the concepts of random-like testing. 
In a random-like testing sequence, the total distance among all of the test patterns is chosen maximal so that the sets of faults detected by one test pattern are as different as possible from that of faults detected by the tests previously applied. Procedure of constructing a random-like testing sequence (RLTS) is described in detail. Theorems to justify the effectiveness and helpfulness of the procedure presented are developed. Experimental results on benchmark circuits as well as on other circuits are also given to evaluate the performances of our new approach",1998,0, 1294,Use of FFT and ANN techniques in monitoring of transformer fault gases,"This paper reports on continuing work undertaken by the authors in developing a portable monitor to detect multigases especially H2 and CO dissolved in oil. A significant improvement in reliability of detection has been achieved using FFT and ANN signal processing techniques. The monitor uses the membrane and forced diffusion techniques to extract the dissolved gases in oil, and senses the gases by two thin film semiconducting gas sensors. Stability in sensor detection, especially in practical varying ambient conditions, is achieved by a cyclic heating technique and an ac conductance measurement technique. Sensitivity and quantitative selectivity are achieved by signal processing using FFT and ANN techniques. The FFT resolves the frequency components of the sensor response up to 3rd harmonics and the data are used to train a back propagation ANN network to map the fingerprints. Detection up to 20 ppm has been achieved by just using two sensors. Detection for other gases like C2H2 and CO2 has been done by using different membranes using the same two sensors and the measured response will be reported. A portable microcontroller based monitor was developed by incorporating in software the derived weights of ANN and the measuring techniques. The results are reported in this paper",1998,0, 1295,Communication synthesis for distributed embedded systems,"Designers of distributed embedded systems face many challenges in determining the tradeoffs when defining a system architecture or retargeting an existing design. Communication synthesis, the automatic generation of the necessary software and hardware for system components to exchange data, is required to more effectively explore the design space and automate very error prone tasks. The paper examines the problem of mapping a high level specification to an arbitrary architecture that uses specific, common bus protocols for interprocessor communication. The communication model presented allows for easy retargeting to different bus topologies, protocols, and illustrates that global considerations are required to achieve a correct implementation. An algorithm is presented that partitions multihop communication timing constraints to effectively utilize the bus bandwidth along a message path. The communication synthesis tool is integrated with a system co-simulator to provide performance data for a given mapping.",1998,0, 1296,Simulation of 8 kbps ACELP over GSM platform,"This study engages the 8 kbps ACELP vocoder on a GSM radio interface software simulation platform. A robust channel codec module is developed to protect the coded bits against fading and GAUSS noise and adapt the ACELP vocoder into the GSM TCH/FS platform. In addition, an error-concealment unit (ECU) is introduced into the ACELP decoder for better synthetic quality over a seriously corrupted channel.
The performance of the ACELP vocoder over the GSM environment is explored, which includes the test of the reconstructed speech quality and the bit-error-rate statistics over typical channel models under different SNR. The final analyses show that this vocoder has better performance than GSM full rate speech coder over a deteriorated transmission environment",1998,0, 1297,Can model-based and case-based expert systems operate together?,"There is an ongoing debate as to whether model-based or case-based diagnostic expert systems are superior. Our experience has shown that the two are not mutually exclusive and, to the contrary, complement each other. Current expert system technology is capable of two reasoning mechanisms, in addition to other mechanisms, integrated into one system. Depending on the knowledge available, and time and cost considerations, expert systems allow the user to decide the relative proportion of case-based to model-based reasoning to employ in any given situation. Diagnostic support software should be evaluated by two critical factor groups: (a) cost and time to deployment, and (b) accuracy, completeness and efficiency of the diagnostic process. In this paper we will discuss the role of expert systems in combining model-based and case-based reasoning to effect the most efficient user defined solution to diagnostic performance",1998,0,1223 1298,A PC-based vibration analyzer for condition monitoring of process machinery,"A fast-response PC-based vibration analyzer has been developed for fault detection and preventive maintenance of process machinery. The analyzer acquires multiple vibration signals with high resolution, and computes frequency spectra, root mean square amplitude, and other peak parameters of interest. Fast execution speed has been achieved by performing data acquisition and frequency spectrum computation using C-language. Vibration signals up to 10 kHz can be analyzed by the spectrum analyzer. Special algorithms, such as window smoothing, digital filtering, data archiving and graphic display have also been incorporated. With these features the vibration analyzer can perform most of the functions available in complex, stand-alone machines. The software for the analyzer is menu driven and user-friendly. The personal computer used is a 66 MHz PC-486 compatible machine. The use of a general purpose PC and standard programming language makes the vibration analyzer simple, economical, and adaptable to a variety of problems. The applications of the system in malfunction detection in rotating machinery are also described",1998,0, 1299,Introduction to ProcessModel and ProcessModel 9000,"The paper presents an introduction to ProcessModel simulation software. ProcessModel is a discrete event simulation tool that combines the power of simulation with the simplicity of flowcharting to aid managers and business analysts who are looking for ways to quantify the effects of variability, uncertainty and resource interdependencies on the performance of complex business processes. ProcessModel 9000 is a variation of ProcessModel that allows quality managers to easily document, analyze and improve the quality processes which are typically reviewed during the ISO 9000 or QS-9000 certification process. ProcessModel 9000 includes ProcessModel software and more than 35 pre-defined flowcharts of the most common ISO/QS-9000 processes.
Both ProcessModel and ProcessModel 9000 aid in the creation of graphical business process models for the purpose of predicting key process performance measures such as cost, throughput, cycle time and resource utilization",1998,0, 1300,A virtual instrument for measurement of flicker,"All types of fluctuations of the rms voltage in power systems may be assessed by using a “flickermeter” which complies with the specifications given in IEC-EN 60868. In this paper, a digital flickermeter is presented based on the implementation in the frequency domain of the weighting filter block of the eye-brain simulation chain. The virtual instrument, implemented with an acquisition board inserted into a PC and with software developed with the LabView tools, samples the voltage and furnishes the instantaneous flicker level and other severity coefficients. Results of simulated and actual data are reported to validate the performance of the developed flickermeter",1998,0, 1301,A recording and analyzing system for cutaneous electrogastrography,"The recording of gastric activities by surface electrodes is called electrogastrography (EGG). It should be carefully filtered and amplified because of its ultra-low frequency, tiny and noisy signal. A new system, including a signal acquisition device and Windows-based software, was designed to record and analyze the EGG signal. Many volunteers and patients have been successfully tested by the system",1998,0, 1302,Photoconductive probing and computer simulation of microwave potentials inside a SiGe MMIC,"Electrical potentials inside a SiGe MMIC are measured at frequencies up to 20 GHz using a micro-machined photoconductive sampling probe and compared with values predicted using microwave CAD software. The results illustrate that this combination of simulation and in-circuit measurement technique is a powerful tool for performing diagnostics of the microwave performance of Si-based RF circuits. The methodology can be used for applications such as fault isolation and validation of device models, as well as for investigation of the sensitivity of performance to process variations.",1998,0, 1303,An efficient computation-constrained block-based motion estimation algorithm for low bit rate video coding,"We present an efficient computation constrained block-based motion vector estimation algorithm for low bit rate video coding that offers good tradeoffs between motion estimation distortion and number of computations. A reliable predictor determines the search origin. An efficient search pattern exploits structural constraints within the motion field. A flexible cost measure used to terminate the search allows simultaneous control of the motion estimation distortion and the computational cost. Experimental results demonstrate the viability of the proposed algorithm in low bit rate video coding applications, achieving essentially the same levels of rate-distortion performance and subjective quality as that of the full search algorithm when used by the UBC H.263+ video coding reference software. However, the proposed motion estimation algorithm provides substantially higher encoding speed as well as graceful computational degradation capabilities.",1998,0,1440 1304,Thermal transient testing of packages without a tester,"Thermal transient testing is an excellent method to measure the thermal properties of packages, to check die attach quality and to detect and localise physical defects in the heat removal path.
The measured transient responses are also the basis for compact, dynamic thermal model generation. The measurement set-up for the thermal transient experiments is rather complex, including expensive dedicated equipment. The purpose of this paper is to present an alternative solution for thermal transient testing which eliminates the need for expensive hardware. A method for compact thermal model generation is also provided",1998,0, 1305,Harmonics in high voltage networks,"The paper presents the results of studies in the area of harmonics in HV networks that were obtained at the Siberian Energy Institute. The studies have been performed based on the method of harmonic distortion powers that was developed at the Siberian Energy Institute. It contains the notions of distortion power generation and absorption. The method forms the basis of the software package “Harmonics” that is applied for calculation, analysis and study of the harmonic modes in networks of a complicated configuration. The probabilistic mathematical models of loads are suggested. The probabilistic approach to calculation of harmonic modes in the HV network is considered. High levels of harmonics are typical of the nodes with a lower value of the distortion power absorption. The harmonic voltage levels can be normalized in both operating and planned networks of a complicated configuration in a centralised way",1998,0, 1306,Harmonic monitoring system via synchronized measurements,"The New York Power Authority (NYPA), and the other Empire State Electric Energy Research Corporation (ESEERCO) member utilities, initiated deployment of a high voltage transmission harmonic measurement system (HMS) in 1992. The HMS consists of hardware and software, which determine the harmonic state of a transmission system, in real time, and stores the acquired data in a historical harmonic database. The HMS hardware consists of GPS synchronized digital event recorders linked to on-site computers, and a master station computer. The present installation includes instrumentation for a total of 150 measurements of which 138 are three phase quantities (voltages or currents) resulting in 46 phasors. The system performs synchronized waveform data acquisition every 15 minutes. The captured data are processed at the on-site computers to correct for error from the nonideal characteristics of the instrumentation, and to compute the harmonics. The computed harmonics (magnitude and phase) are transmitted to the master station computer where the system wide harmonic flow is constructed, using a harmonic state estimation technique. This paper describes the HMS hardware and software as well as HMS applications",1998,0, 1307,Measurement of harmonic impedance on an LV system utilising power capacitor switching and consequent predictions of capacitor induced harmonic distortion,"Due to the existence of harmonic voltage distortion in power systems, the connection of power factor correction capacitors can magnify harmonic distortion by resonance with the supply inductance. A common solution to this problem is to install detuning reactance to ensure that the power factor correction unit remains inductive for all the major harmonics present in the system voltage. This paper describes an algorithm to measure system harmonic impedance and harmonic damping, by monitoring the system response to capacitor switching transitions. A prediction of the harmonic voltage distortion due to different levels of capacitance can then be made.
The algorithm offers the possibility of an economic software based solution to the problem of harmonic resonance due to power factor correction capacitors",1998,0, 1308,Modeling of ultrasonic fields and their interaction with defects,"Ultrasonic modeling at the French Atomic Energy Commission (CEA) is based on continuous developments of two approximate models covering realistic commonly encountered NDT configurations. The first model, based on an extension of the Rayleigh integral, is dedicated to the prediction of ultrasonic fields radiated by arbitrary transducers. Once computed, the field may be input into a model of the echo formation. This two step modeling yields the prediction of echo-structures detected with transducer scanning. The associated software is implemented in the CIVA system for processing and imaging multiple technique NDT data",1998,0, 1309,"Reliability-oriented software engineering: design, testing and evaluation techniques","Software reliability engineering involves techniques for the design, testing and evaluation of software systems, focusing on reliability attributes. Design for reliability is achieved by fault-tolerance techniques that keep the system working in the presence of software faults. Testing for reliability is achieved by fault-removal techniques that detect and correct software faults before the system is deployed. Evaluation for reliability is achieved by fault-prediction techniques that model and measure the reliability of the system during its operation. This paper presents the best current practices in software reliability engineering for design, testing and evaluation purposes. There are descriptions of how fault-tolerant components are designed and applied to software systems, how software testing schemes are performed to show improvement of software reliability, and how reliability quantities are obtained for software systems. The tools associated with these techniques are also examined, and some application results are described",1998,0, 1310,Design of a test station for the CMS HCAL waveshifter/waveguide fiber system,A test station has been designed and is under construction to test the quality of assembled waveguide to waveshifter fiber to be used in the scintillating tile calorimeter for the Compact Muon Solenoid (CMS) Hadron Calorimeter (HCAL). The test station consists of a light tight enclosure 6.8 meters long with the ability to move a light source over almost 6 meters of fiber. Data acquisition hardware and software are under development to analyze the quality of the fiber as well as motor control hardware and software to operate the moveable light source. The design and performance expectations of the test station will be presented,1998,0, 1311,A new concept of power control in cellular systems reflecting challenges of today's systems,"When the systems evolved from analog to digital, the performance was improved by the use of power control on the one hand and different modulations and coding schemes on the other. Condensing the available information we are able to propose a new concept of power control. The concept is applicable to real systems, since it uses the available measurements for estimating parameters necessary for the power control. It also supports the use of an adequate quality measure together with a quality specification supplied by the operator. We use frequency hopping GSM as an example and the resulting control algorithm is ready for implementation in the software in the base stations where the output powers are computed.
No modifications are needed in the GSM standard, the mobile terminals, the radio interfaces or in the base station transmitters. Finally we provide simulation results confirming the benefits of using the new concept for power control",1998,0, 1312,Robust event correlation scheme for fault identification in communications network,"The complexity of communications network and the amount of information transferred in these networks have made the management of such networks increasingly difficult. Since faults are inevitable, quick detection, identification, and recovery are crucial to make the systems more robust and their operation more reliable. This paper proposes a novel event correlation scheme for fault identification in communications network. This scheme is based on the algebraic operations of sets. The causality graph model is used to describe the cause-and-effect relationships between network events. For each problem, and each symptom, a unique prime number is assigned. The use of the greatest common divisor (GCD) makes the correlation process simple and fast. A simulation model is developed to verify the effectiveness and efficiency of the proposed scheme. From simulation results, we notice that this scheme not only identifies multiple problems at one time but also is insensitive to noise alarms. The time complexity of the correlation process is close to a function of n, where n is the number of observed symptoms, with order O(n²); therefore, the on-line fault identification is easy to achieve",1998,0, 1313,Design and analysis of video-on-demand servers,"In this paper we deal with the design and analysis of a typical multimedia server, video-on-demand server (VoD). If we define the performance measure of the VoD server as the maximum number of concurrent access processes it can support, the system performance depends on a number of the system characteristics. They are: (i) storage devices contention; (ii) resource allocation, such as buffer size; (iii) QoS control. Thus, in order to achieve the best performance of a server the design trade-off among these factors should be investigated quantitatively. In this paper we formulate and solve a problem of allocating resources for such a multimedia server in order to maximize concurrent video accesses under QoS guarantees. The solution is theoretically based on economic models and queuing theory. By using our optimization procedure we may estimate the maximum number of concurrent video accesses with optimal buffer allocation and QoS guarantees",1998,0, 1314,The 1998 York and Williamsburg workshops on dependability: the proposed research agenda,"We first provide a set of statements that explain the intent and discussion areas of the workshops. Dependability comprises three aspects: Attributes, which describe or limit dependability: these include availability, reliability, safety, confidentiality, integrity, and maintainability. Means, which refer to the major factors associated with dependability: fault prevention, fault tolerance, fault removal, and fault forecasting. Threats, which reduce or affect the dependability, including errors, faults, and failures. Dependable systems, among others, include: embedded systems, safety-critical systems, critical-information systems, and electronic commerce. 
The pair of workshops was initiated to discuss the means available and research to be accomplished in order to establish sets of scientific principles and practical engineering techniques for developing, maintaining, and evaluating dependable computer-based systems. Participants started by assuming that this could be initiated by concentrating on the integration of methods and disciplines used in the fields of security, fault-tolerance, and high-assurance, especially by concentrating on those techniques that enable measurement of their attributes. Emphasis was therefore placed on techniques that enable integration, composability, reuse, and cost prediction of dependable systems",1998,0, 1315,Prediction of fault-proneness at early phase in object-oriented development,"To analyse the complexity of object-oriented software, several metrics have been proposed. Among them, Chidamber and Kemerer's (1994) metrics are well-known object-oriented metrics. Also, their effectiveness has been empirically evaluated from the viewpoint of estimating the fault-proneness of object-oriented software. In the evaluations, these metrics were applied, not to the design specification but to the source code, because some of them measure the inner complexity of a class, and such information cannot be obtained until the algorithm and the class structure are determined at the end of the design phase. However, the estimation of the fault-proneness should be done in the early phase so as to effectively allocate effort for fixing the faults. This paper proposes a new method to estimate the fault-proneness of an object class in the early phase, using several complexity metrics for object-oriented software. In the proposed method, we introduce four checkpoints into the analysis/design/implementation phase, and we estimate the fault-prone classes using applicable metrics at each checkpoint",1999,1, 1316,An empirical study on object-oriented metrics,"The objective of this study is the investigation of the correlation between object-oriented design metrics and the likelihood of the occurrence of object oriented faults. Such a relationship, if identified, can be utilized to select effective testing techniques that take the characteristics of the program under test into account. Our empirical study was conducted on three industrial real-time systems that contain a number of natural faults reported for the past three years. The faults found in these three systems are classified into three types: object-oriented faults, object management faults and traditional faults. The object-oriented design metrics suite proposed by Chidamber and Kemerer (1994) is validated using these faults. Moreover, we propose a set of new metrics that can serve as an indicator of how strongly object-oriented a program is, so that the decision to adopt object oriented testing techniques can be made, to achieve more reliable testing and yet minimize redundant testing efforts",1999,1, 1317,"Real-time performance monitoring and anomaly detection in the Internet: An adaptive, objective-driven, mix-and-match approach","Algorithms for real-time performance management and adaptive fault/anomaly detection in Internet protocol (IP) networks and services have been developed, and a corresponding real-time distributed software platform has been implemented. 
These algorithms automatically and adaptively detect “soft” faults or anomalies (performance degradations) in IP networks and services, enabling timely correction of network exceptions before failures occur and services are compromised and thereby achieving proactive network and service management. Further, these algorithms are implemented as a reliable, fully distributed real-time software platform – the network/service anomaly detector (NSAD) – with the following features. First, it provides a flexible platform on which preconstructed monitoring and detection components can be mixed, matched, and distributed to form a wide range of application-specific NSADs and performance monitors. Second, NSAD and its components can auto-recover in real time, making it a reliable system for persistent monitoring of networks and their services. Third, anomaly detection is performed on raw network observables – for example, performance data such as management information base-2 (MIB2) and remote monitor-1/2 (RMON1/2) variables – and algebraic functions of these observables (objective functions), thereby enabling objective-driven anomaly detection of wide range and high sensitivity. Fourth, controlled testing (with anomalies injected a priori into a test network) demonstrates that NSAD can detect anomalies reliably in IP networks. Thus, NSAD provides a powerful framework/platform for automatic, proactive fault/anomaly detection and performance management in IP networks and services.",1999,0, 1318,A toolbox for model-based fault detection and isolation,"A toolbox for model-based fault detection and isolation (FDI) has been developed in the MATLAB/SIMULINK environment. It includes methods relying on analytical or qualitative models of the supervised process. A demonstration of each approach can be performed on a simulation of either a three-tank system, a cocurrent or a countercurrent heat exchanger. The results are displayed in a common format, which allows performance comparison. A user manual including guidelines for tuning method specific parameters is available.",1999,0, 1319,Availability modeling and validation methodology for RS/6000 systems,"In this paper we show the availability modeling and validation methodology for RS/6000 systems. The availability modeling methodology is used both for prediction of availability characteristics of systems under development, and for the prediction of availability in the special bids process. The core of our methodology is the availability modeling tool-System Availability Estimator (SAVE)-currently being used to predict availability measures. Our availability validation methodology has both a conceptual validation and an empirical validation component. Conceptual model validation is done via model walkthroughs and by talking to the designers. The empirical model validation methodology is performed using tracking data. Concluding, our availability modeling and validation methodology allows us to develop and configure higher availability systems which are critical in the server computer market space",1999,0, 1320,Wavelet based rate scalable video compression,"In this paper, we present a new wavelet based rate scalable video compression algorithm. We will refer to this new technique as the scalable adaptive motion compensated wavelet (SAMCoW) algorithm. SAMCoW uses motion compensation to reduce temporal redundancy. The prediction error frames and the intracoded frames are encoded using an approach similar to the embedded zerotree wavelet (EZW) coder. 
An adaptive motion compensation (AMC) scheme is described to address error propagation problems. We show that, using our AMC scheme, the quality of the decoded video can be maintained at various data rates. We also describe an EZW approach that exploits the interdependency between color components in the luminance/chrominance color space. We show that, in addition to providing a wide range of rate scalability, our encoder achieves comparable performance to the more traditional hybrid video coders, such as MPEG1 and H.263. Furthermore, our coding scheme allows the data rate to be dynamically changed during decoding, which is very appealing for network-oriented applications",1999,0, 1321,A framework backbone for software fault tolerance in embedded parallel applications,"The DIR net (detection-isolation-recovery net) is the main module of a software framework for the development of embedded supercomputing applications. This framework provides a set of functional elements, collected in a library, to improve the dependability attributes of the applications (especially the availability). The DIR net enables these functional elements to cooperate and enhances their efficiency by controlling and co-ordinating them. As a supervisor and the main executor of the fault tolerance strategy, it is the backbone of the framework, of which the application developer is the architect. Moreover, it provides an interface to which all detection and recovery tools should conform. Although the DIR net is meant to be used together within this fault tolerance framework, the adopted concepts and design decisions have a more general value, and can be applied in a wide range of parallel systems",1999,0, 1322,"Identification, classification and correlation of monitored power quality events","Sensitive customers of electricity have a deep interest in the performance, efficiency, power quality, interchangeability, and safety of electrical operating systems. Standards provide the essential tools that support and complement these interests. They provide economic benefits to the high-tech customers in system design, equipment design, electrical safety, operating performance, energy efficiency, power quality and reliability, and standardisation of products and services. For full utilisation of the guidelines and standards, it is important that the industry use accurate instrumentation to monitor the quality of electricity. This instrumentation must provide `information' that can be analysed using software to arrive at causes and reasons for power quality events. This paper presents an overview of the current power quality standards. The paper describes the characteristics of PQ events and the specifications of the system required to accurately analyse these. Details of the required monitoring software and hardware are presented",1999,0, 1323,Towards the design of a snoopy coprocessor for dynamic software-fault detection,"Dynamic Monitoring with Integrity Constraints (DynaMICs) is a software-fault monitoring approach in which the constraints are maintained separately from the program. Since the constraints are not entwined in the code, the approach facilitates the maintenance of the application and constraint code. Through code analysis during compilation, the points at which constraint checking should occur are determined. DynaMICs minimizes performance degradation, addressing a problem that has limited the use of runtime software-fault monitoring. 
This paper presents the preliminary design of a DynaMICs snoopy-coprocessor system, i.e., one that employs a coprocessor that utilizes bus-monitoring hardware to facilitate the concurrent execution of the application and constraint-checking code. In this approach, the coprocessor executes the constraint-checking code while the main processor executes the application code",1999,0, 1324,"An open solution to fault-tolerant Ethernet: design, prototyping, and evaluation","Presented is an open solution based approach to fault tolerant Ethernet for process control networks. This unique approach provides fault tolerance capability that requires no change of vendor hardware (Ethernet physical link and Network Interface Card) and software (Ethernet driver and protocol), yet it is transparent to control applications. The open fault tolerant Ethernet (OFTE) developed based on this approach performs failure detection and recovery for handling single point of network failure and serves regular IP traffic. Our experimentation shows that OFTE performs efficiently, achieving less than 1 ms end to end LAN swapping time and less than 2 sec failover time, and that concurrent application and system loads have little impact on the performance of failure detection and recovery operations",1999,0, 1325,A toolset for assisted formal verification,"There has been a growing interest in applying formal methods for functional and performance verification of complex and safety critical designs. Model checking is one of the most common formal verification methodologies utilized in verifying sequential logic due to its automated decision procedures and its ability to provide “counter examples” for debugging. However, model checking hasn't found broad acceptance as a verification methodology due to its complexity. This arises because of the need to specify correctness properties in a temporal logic language and develop an environment around a partitioned model under test in a non deterministic HDL-type language. Generally, engineers are not trained in mathematical logic languages and becoming proficient in such a language requires a steep learning curve. Furthermore, defining a behavioral environment at the complex and undocumented microarchitectural interface level is a time consuming and error prone activity. As such, there is a strong motivation to bring the model checking technology to a level such that the designers may utilize this technology as a part of their design process without being burdened with the details that are generally only within the grasps of computer theoreticians. The paper outlines two tools which greatly assist in this goal: the first, Polly, automates the difficult and error prone task of developing the behavioral environment around the partitioned model under test; the second, Oracle, obviates the need for learning temporal logic to enter specification",1999,0, 1326,Bandwidth modelling for network-aware applications,"Network-aware applications attempt to adjust their resource demands in response to changes in resource availability, e.g., if a server maintains a connection to a client, the server may want to adjust the amount of data sent to the client based on the effective bandwidth realized for the connection. Information about current and future network performance is therefore crucial for an adaptive application. 
This paper discusses three aspects of the coupling of applications and networks: (1) a network-aware application needs timely information about the status of the network; (2) a simple bandwidth estimation technique performs reasonably well for TCP-Reno connections without timeouts; (3) enhancements proposed to TCP-Reno to reduce the number of timeouts (i.e., SACKs and its variants) increase the bandwidth but also improve the accuracy of bandwidth estimators developed by other researchers. The empirical observations reported in this paper are based on an in-vivo experiment in the Internet. Over a 6-month period, we logged the micro dynamics of random connections between a set of selected hosts. These results are encouraging for the developer of a network-aware application since they provide evidence that a simple widening of the interface between applications and network (protocol) may provide the information that allows an application to successfully adapt to changes in resource availability",1999,0, 1327,Measuring long-range dependence under changing traffic conditions,"Previous measurements of various types of network traffic have shown evidence consistent with long-range dependence and self-similarity. However, an alternative explanation for these measurements is non-stationarity. Standard estimators of LRD parameters such as the Hurst parameter H assume stationarity and are susceptible to bias when this assumption does not hold. Hence LRD may be indicated by these estimators when none is present, or alternatively LRD may be taken to be non-stationarity. The Abry-Veitch (see IEEE Trans. on Info. Theory, vol.44, no.1, p.2-15, 1998) joint estimator has much better properties when a time-series is non-stationary. In particular the effect of polynomial trends in data may be intrinsically eliminated from the estimates of LRD parameters. This paper investigates the behavior of the AV estimator when there are non-stationarities in the form of a level shift in the mean and/or the variance of a process. We examine cases where the change occurs both gradually or as a single jump discontinuity, and also examine the effect of the size of the shift. In particular we show that although a jump discontinuity may cause bias in the estimates of H, the bias is negligible except when the jump is sharp, and large compared with the standard deviation of the process. We explain these effects and suggest how any introduced errors might be minimized. We define a broad class of non-stationary LRD processes so that LRD remains well defined under time varying mean and variance. The results are tested by applying the estimator to a real data set which contains a clear non-stationary event falling within this class",1999,0, 1328,Development of GIS fault location system using pressure wave sensors,This paper describes a series of development tests carried out when developing a GIS (gas insulated substation) fault location system using pressure wave sensors for a GIS with a main bus in which each gas section per bay is connected by gas piping. This paper also refers to the basic construction of the software of the fault location system actually applied to a GIS with a rated voltage of less than 168 kV,1999,0, 1329,TRACKER: a sensor fusion simulator for generalised tracking,"This paper presents a multisensor, multitarget tracking simulator TRACKER that can be used to demonstrate and evaluate airborne early warning and control surveillance systems. 
TRACKER is a Level 1 fusion surveillance software package that generates a unified object state vector and its measure of uncertainty for each tracked object based on cluttered multitarget augmented (kinematic and discrete type) measurements from multiple dissimilar sensors. It integrates sensors and other sources of information without requiring synchronism and unifies tracking and identification. The resulting track and identity estimates are of a higher quality than those from algorithms in which tracking and identification are carried out separately. The software package is modular in structure and consists of a tracking and data fusion component written in MATLAB, and a graphical user interface written in Java. The package is equipped with a scenario generator and a set of post-mission analysis tools",1999,0, 1330,Using codewords to protect database data from a class of software errors,"Increasingly, for extensibility and performance, special-purpose application code is being integrated with database system code. Such application code has direct access to database system buffers and, as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect data from corruption required system calls, and their performance depended on the details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance and impact on concurrency. These techniques are implemented in the Dalí main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write, and then gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption, and may also prove useful when resolving problems caused by incorrect data entry and other logical errors",1999,0, 1331,Safety as a metric,"Most software metrics measure the syntactic qualities of a program. While measuring such properties may reveal problems in programs, these metrics fail to measure the essence of programs: (partial) correctness and robustness. We therefore propose to base metrics on semantic, instead of syntactic, criteria. To illustrate the idea of semantics-based metrics, we have built static debuggers, which are tools that detect potential run-time failures. More specifically, a static debugger analyses programs written in safe programming languages and pinpoints those program operations that might trigger a run-time error. This paper briefly recalls what safety means for a programming language. It then sketches how a static debugger works and the role it plays in measuring the robustness of a program. The last section discusses the use of static debuggers in the classroom, an NSF Educational Innovation Project",1999,0, 1332,Using antenna patterns to improve the quality of SeaSonde HF radar surface current maps,"The SeaSonde coastal HF current mapping radar realizes its very small and convenient antenna size by employing the MUSIC direction finding (DF) algorithm, rather than beam forming to determine the bearing angle to each point on the sea plot. 
Beam forming requires large phased array antennas spanning up to 100 m of linear coastal extent. The SeaSonde uses two colocated crossed loops and an omnidirectional monopole as the receive antenna system. Unique features of DF polar maps can be angle sectors with sparse coverage or gaps. This occurs when the antenna patterns are distorted by nearby terrain or buildings, a frequent effect observed with all HF radars. In addition, uncorrected distortions can produce angle biases (misplacements) of the radial current vectors, sometimes as much as 10°. The actual antenna patterns (with distortions) are now measured with a small battery operated transponder, either from land in front of the antenna or from a boat. These are then inserted into the software to calibrate, i.e., correct for the distortions. A number of algorithms have been evaluated for this correction process, both using simulations (with known input) as well as actual measured sea echo. The authors present the latest findings on gap and bias mitigation, which show that nearly all of these deleterious effects can be reduced to acceptable levels",1999,0, 1333,Warehouse creation-a potential roadblock to data warehousing,"Data warehousing is gaining in popularity as organizations realize the benefits of being able to perform sophisticated analyses of their data. Recent years have seen the introduction of a number of data-warehousing engines, from both established database vendors as well as new players. The engines themselves are relatively easy to use and come with a good set of end-user tools. However, there is one key stumbling block to the rapid development of data warehouses, namely that of warehouse population. Specifically, problems arise in populating a warehouse with existing data since it has various types of heterogeneity. Given the lack of good tools, this task has generally been performed by various system integrators, e.g., software consulting organizations which have developed in-house tools and processes for the task. The general conclusion is that the task has proven to be labor-intensive, error-prone, and generally frustrating, leading a number of warehousing projects to be abandoned mid-way through development. However, the picture is not as grim as it appears. The problems that are being encountered in warehouse creation are very similar to those encountered in data integration, and they have been studied for about two decades. However, not all problems relevant to warehouse creation have been solved, and a number of research issues remain. The principal goal of this paper is to identify the common issues in data integration and data-warehouse creation",1999,0, 1334,Rule-induction and case-based reasoning: hybrid architectures appear advantageous,"Researchers have embraced a variety of machine learning (ML) techniques in their efforts to improve the quality of learning programs. The recent evolution of hybrid architectures for machine learning systems has resulted in several approaches that combine rule induction methods with case-based reasoning techniques to engender performance improvements over more traditional single-representation architectures. We briefly survey several major rule-induction and case-based reasoning ML systems. We then examine some interesting hybrid combinations of these systems and explain their strengths and weaknesses as learning systems. 
We present a balanced approach to constructing a hybrid architecture, along with arguments in favor of this balance and mechanisms for achieving a proper balance. Finally, we present some initial empirical results from testing our ideas and draw some conclusions based on those results",1999,0, 1335,Software metrics knowledge and databases for project management,"The construction and maintenance of large, high-quality software projects is a complex, error-prone and difficult process. Tools employing software database metrics can play an important role in the efficient execution and management of such large projects. In this paper, we present a generic framework to address this problem. This framework incorporates database and knowledge-base tools, a formal set of software test and evaluation metrics, and a suite of advanced analytic techniques for extracting information and knowledge from available data. The proposed combination of critical metrics and analytic tools can enable highly efficient and cost-effective management of large and complex software projects. The framework has the potential for greatly reducing venture risks and enhancing production quality in the domain of large-scale software project management",1999,0, 1336,Safety and reliability driven task allocation in distributed systems,"Distributed computer systems are increasingly being employed for critical applications, such as aircraft control, industrial process control, and banking systems. Maximizing performance has been the conventional objective in the allocation of tasks for such systems. Inherently, distributed systems are more complex than centralized systems. The added complexity could increase the potential for system failures. Some work has been done in the past in allocating tasks to distributed systems, considering reliability as the objective function to be maximized. Reliability is defined to be the probability that none of the system components fails while processing. This, however, does not give any guarantees as to the behavior of the system when a failure occurs. A failure, not detected immediately, could lead to a catastrophe. Such systems are unsafe. In this paper, we describe a method to determine an allocation that introduces safety into a heterogeneous distributed system and at the same time attempts to maximize its reliability. First, we devise a new heuristic, based on the concept of clustering, to allocate tasks for maximizing reliability. We show that for task graphs with precedence constraints, our heuristic performs better than previously proposed heuristics. Next, by applying the concept of task-based fault tolerance, which we have previously proposed, we add extra assertion tasks to the system to make it safe. We present a new heuristic that does this in such a way that the decrease in reliability for the added safety is minimized. For the purpose of allocating the extra tasks, this heuristic performs as well as previously known methods and runs an order of magnitude faster. We present a number of simulation results to prove the efficacy of our scheme",1999,0, 1337,Multi-domain surety modeling and analysis for high assurance systems,"Engineering systems are becoming increasingly complex as state of the art technologies are incorporated into designs. Surety modeling and analysis is an emerging science that permits an engineer to qualitatively and quantitatively predict and assess the completeness and predictability of a design. 
Surety is a term often used in the Department of Defense (DoD) and Department of Energy (DOE) communities, which refers to the integration of safety, security, reliability and performance aspects of design. Current risk assessment technologies for analyzing complex systems fail to adequately describe the problem, thus making assessment fragmented and non-integrated. To address this problem, we have developed a methodology and extensible software toolset to address model integration and complexity for high consequence systems. The MultiGraph Architecture (MGA) facilitates multi-domain, model-integrated modeling and analyses of complex, high-assurance systems. The MGA modeling environment allows the engineer to customize the modeling environment to match a design paradigm representative of the actual design. Previous modeling tools have a predefined model space that forces the modeler to work in less than optimal environments. Current approaches force the problem to be bounded and constrained by requirements of the modeling tool and not the actual design problem. In some small cases, this is only marginally adequate. The MGA facilitates the implementation of a surety methodology, which is used to represent high assurance systems with respect to safety and reliability. Formal mathematical models are used to correctly describe design safety and reliability functionality and behavior. The functional and behavioral representations of the design are then analyzed using commercial-off-the-shelf (COTS) tools",1999,0, 1338,Monitoring changes in system variables due to islanding condition and those due to disturbances at the utilities' network,"The continuous growth in embedded generation (EG) has emphasised the importance of being able to reliably detect loss of mains (LOM) especially when the EG is `islanded' with part of the utility network. Current methods of detection have not proved completely dependable, particularly when the `islanded' load's capacity matches that of the EG. This paper is the result of an investigation, using the Alternative Transients Program (ATP), into new methods of detecting LOM. To determine which system variables can be reliably used to detect LOM, it is necessary to accurately monitor all the relevant signals at the LOM relay location. This requires that the simulation realistically detects all the changes in system variables caused by disturbances on the network. In some cases, the results can vary markedly, depending on such factors as measurement technique and sampling rate. These problems can cause numerical instability leading to errors. The paper discusses these issues and the results obtained from ATP",1999,0, 1339,A change impact model for changeability assessment in object-oriented software systems,"Growing maintenance costs have become a major concern for developers and users of software systems. Changeability is an important aspect of maintainability, especially in environments where software changes are frequently required. In this work, the assumption that high-level design has an influence on maintainability is carried over to changeability and investigated for quality characteristics. The approach taken to assess the changeability of an object-oriented (OO) system is to compute the impact of changes made to classes of the system. A change impact model is defined at the conceptual level and mapped on the C++ language. 
In order to assess the practicality of the model on large industrial software systems, an experiment involving the impact of one change is carried out on a telecommunications system. The results suggest that the software can easily absorb this kind of change and that well chosen conventional OO design metrics can be used as indicators of changeability",1999,0, 1340,Application of a usage profile in software quality models,"Faults discovered by customers are an important aspect of software quality. The working hypothesis of this paper is that variables derived from an execution profile can be useful in software quality models. An execution profile of a software system consists of the probability of execution of each module during operations. Execution represents opportunities for customers to discover faults. However, an execution profile over an entire customer-base can be difficult to measure directly. Deployment records of past releases can be a valuable source of data for calculating an approximation to the probability of execution. We analyze a metric derived from deployment records which is a practical surrogate for an execution profile in the context of a software quality model. We define usage as the proportion of systems in the field which have a module deployed. We present a case study of a very large legacy telecommunications system. We developed models using a standard statistical technique to predict whether software modules will have any faults discovered by customers on systems in the field. Static software product metrics and usage were independent variables. The significance levels of variables in logistic regression models were analyzed, and models with and without usage as an independent variable were compared. The case study was empirical evidence that usage can be a significant contributor to a software quality model",1999,0, 1341,A software defect report and tracking system in an intranet,"Describes a case study where SofTrack (a software defect reporting and tracking system) was implemented using Internet technology in a geographically distributed organization. Four medium- to large-sized information systems with different levels of maturity are being analysed within the scope of this project. They belong to the Portuguese Navy's Information Systems Infrastructure and were developed using a typical legacy systems technology: COBOL with embedded SQL for queries in a relational database environment. This empirical software engineering pilot project has allowed the development of several techniques to help software managers to better understand, control and ultimately improve the software process. Among them are: the introduction of automatic system documentation, module complexity assessment, and effort estimation for software maintenance activities in the organization",1999,0, 1342,Identifying modules which do not propagate errors,"Our goal is to identify software modules that have some locations which do not propagate errors induced by a suite of test cases. This paper focuses on whether or not data state errors can propagate from a location in the code to the outputs or observable data state during random testing with inputs drawn from an operational distribution. If a code-location's probability of propagation is estimated to be zero, then a fault in that location could escape detection during testing. Because testing is never exhaustive, there is a risk that failures due to such latent faults could occur during operations. 
Fault injection is a technique for directly measuring the probability of propagation. However, measurement for every location in the code of a full-scale program is often prohibitively computation-intensive. Our objective is a practical, useful alternative to direct measurement. We present empirical evidence that static software product metrics can be useful for identifying software modules where the effects of a fault in that module are not observable. A case study of an intricate computer game program revealed a useful empirical relationship between static software product metrics and propagation of errors. The case study program was an order of magnitude larger than previously reported studies",1999,0, 1343,Software radio-an alternative for the future in wireless personal and multimedia communications,"Software radio development aims at wideband RF access and software partitioning for plug-and-play type of use. The development is facilitated by progress in silicon capabilities, signal processing power of new and future processors and reconfiguration methods (software download, smart cards, etc.). However, there are still a lot of problems in implementing software radio. These problems include high quality wideband RF access and high performance analog to digital and digital to analog conversion due to the extreme demands placed upon them. Due to lack of suitable methods for performance evaluation it is also difficult to estimate the signal processing requirement for a software radio when implementing several systems in the same platform. There are also no methods for estimating the processing capacity of a software radio platform consisting of multiple DSP processors, CPUs, memory modules and low speed/high speed buses. On top of this there are no universal design tools that could be used all the way from system level specification to actual implementation but instead different CAD-tools with different kinds of options must be used in the design process. Due to finite processing capacity, novel and computationally efficient DSP-architectures must be defined for critical functions to optimize the designs. This includes identification of common functionalities. These topics will be addressed in the paper",1999,0, 1344,Reducing system overheads in home-based software DSMs,"Software DSM systems suffer from the high communication and coherence-induced overheads that limit performance. This paper introduces our efforts in reducing system overheads of a home-based software DSM called JIAJIA. Three measures, including eliminating false sharing through avoiding unnecessarily invalidating cached pages, reducing virtual memory page faults with a new write detection scheme, and propagating barrier messages in a hierarchical way, are taken to reduce the system overhead of JIAJIA. Evaluation with some well-known DSM benchmarks reveals that, though varying with memory reference patterns of different applications, these measures can reduce system overhead of JIAJIA effectively",1999,0, 1345,CRUSADE: hardware/software co-synthesis of dynamically reconfigurable heterogeneous real-time distributed embedded systems,"Dynamically reconfigurable embedded systems offer potential for higher performance as well as adaptability to changing system requirements at low cost. Such systems employ run-time reconfigurable hardware components such as field programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs). 
In this paper, we address the problem of hardware/software co-synthesis of dynamically reconfigurable embedded systems. Our co-synthesis system, CRUSADE, takes as input embedded system specifications in terms of periodic acyclic task graphs with rate constraints and generates dynamically reconfigurable heterogeneous distributed hardware and software architecture meeting real-time constraints while minimizing the system hardware cost. We identify the group of tasks for dynamic reconfiguration of programmable devices and synthesize an efficient programming interface for reconfiguring reprogrammable devices. Real-time systems require that the execution time for tasks mapped to reprogrammable devices is managed effectively such that real-time deadlines are not exceeded. To address this, we propose a technique to effectively manage delay in reconfigurable devices. Our approach guarantees that the real-time task deadlines are always met. To the best of our knowledge, this is the first co-synthesis algorithm which targets dynamically reconfigurable embedded systems. We also show how our co-synthesis algorithm can be easily extended to consider fault-detection and fault-tolerance. Application of CRUSADE and its fault tolerance extension, CRUSADE-FT, to several real-life large examples (up to 7400 tasks) from mobile communication network base station, video distribution router, a multi-media system, and synchronous optical network (SONET) and asynchronous transfer mode (ATM) based telecom systems shows that up to 56% system cost savings can be realized",1999,0, 1346,COFTA: hardware-software co-synthesis of heterogeneous distributed embedded systems for low overhead fault tolerance,"Embedded systems employed in critical applications demand high reliability and availability in addition to high performance. Hardware-software co-synthesis of an embedded system is the process of partitioning, mapping, and scheduling its specification into hardware and software modules to meet performance, cost, reliability, and availability goals. In this paper, we address the problem of hardware-software co-synthesis of fault-tolerant real-time heterogeneous distributed embedded systems. Fault detection capability is imparted to the embedded system by adding assertion and duplicate-and-compare tasks to the task graph specification prior to co-synthesis. The dependability (reliability and availability) of the architecture is evaluated during co-synthesis. Our algorithm, called COFTA (Co-synthesis Of Fault-Tolerant Architectures), allows the user to specify multiple types of assertions for each task. It uses the assertion or combination of assertions which achieves the required fault coverage without incurring too much overhead. We propose new methods to: 1) Perform fault tolerance based task clustering, which determines the best placement of assertion and duplicate-and-compare tasks, 2) Derive the best error recovery topology using a small number of extra processing elements, 3) Exploit multidimensional assertions, and 4) Share assertions to reduce the fault tolerance overhead. Our algorithm can tackle multirate systems commonly found in multimedia applications. Application of the proposed algorithm to a large number of real-life telecom transport system examples (the largest example consisting of 2,172 tasks) shows its efficacy. 
For fault secure architectures, which just have fault detection capabilities, COFTA is able to achieve up to 48.8 percent and 25.6 percent savings in embedded system cost over architectures employing duplication and task-based fault tolerance techniques, respectively. The average cost overhead of COFTA fault-secure architectures over simplex architectures is only 7.3 percent. In the case of fault-tolerant architectures, which can not only detect but also tolerate faults, COFTA is able to achieve up to 63.1 percent and 23.8 percent savings in embedded system cost over architectures employing triple-modular redundancy, and task-based fault tolerance techniques, respectively. The average cost overhead of COFTA fault-tolerant architectures over simplex architectures is only 55.4 percent",1999,0, 1347,Control charts for random and fixed components of variation in the case of fixed wafer locations and measurement positions,"In such processes as wafer-grinding and LPCVD, the variation within a group of measurements is traceable to various causes, and therefore, needs to be decomposed into relevant components of variation for an effective equipment monitoring and diagnosis. In this article, an LPCVD process is considered as an example, and control charting methods are developed for monitoring various fixed and random components of variation separately. For this purpose, the structure of measurement data (e.g., thickness) is described by a split-unit model in which two different sizes of experimental units (i.e., the wafer location as a whole unit and the measurement position as a split unit) are explicitly recognized. Then, control charts are developed for detecting unusual differences among fixed wafer locations within a batch and fixed measurement positions within a wafer. Control charts for the overall process average and random error variations are also developed, and the proposed method is illustrated with an example",1999,0, 1348,On modeling concurrent heavy-tailed network traffic sources and its impact upon QoS,"Studies of local area network and wide area network traffic have provided much evidence of what has been called “self-similar,” “long-range dependent,” “fractal,” “chaotic,” “heavy-tail,” or “power-tail” (PT) network traffic. These studies have indicated that conventional traffic models (e.g. Poisson) generate overly optimistic performance measurements. In this paper we present an interarrival process which extends previous work in this area regarding a heavy-tailed ON/OFF source model described in Fiorini and Lipsky (1998). Extending their work, we construct a heavy-tailed ON/OFF traffic model which attempts to characterize traffic generated from concurrent sources. The primary application of this process is to model, say, traffic generated by (World Wide Web) WWW servers and/or, in general, Ethernet traffic. Characterizing and modeling the behavior are important for assessing quality of service (QoS) given much empirical evidence of heavy-tailed phenomena in network teletraffic. As a result, we derive an approximation for computing the “critical points” (e.g. system utilization beyond which performance rapidly degrades), and a means to compute the cell loss probabilities for, say, WWW gateways and/or routers",1999,0, 1349,A multi-rate resource control (MRRC) scheme for wireless/mobile networks,"In this paper, we propose and study a multi-rate resource control (MRRC) scheme for non-real time data applications in mobile cellular networks. 
The MRRC scheme adapts to dynamically changing system load by adjusting resource assignments for mobiles. The MRRC scheme consists of four major techniques: 1) variable-rate admission control policy; 2) soft handoff procedure; 3) resource assignment prioritization based on mobile's location, non-handoff versus handoff zone; 4) handoff prioritization by queuing handoff requests. We evaluate the blocking probability and the forced termination probability. In addition, we study the effect of handoff to cell area ratio (HTCR) on MRRC performance. A streamlined analytical model is introduced to evaluate the MRRC scheme for the 2-cells case. Numerical results are presented for the 2-cells case; simulation results are presented for the 10-cells case. Our results show that the MRRC system achieves significant gain over the conventional single rate system in terms of the blocking probability and the forced termination probability",1999,0, 1350,Combining periodic and probabilistic checkpointing in optimistic simulation,"This paper presents a checkpointing scheme for optimistic simulation which is a mixed approach between periodic and probabilistic checkpointing. The latter, based on statistical data collected during the simulation, aims at recording as checkpoints states of a logical process that have high probability to be restored due to rollback (this is done in order to make those states immediately available). The periodic part prevents performance degradation due to state reconstruction (coasting forward) cost whenever the collected statistics do not allow the identification of states highly likely to be restored. Hence, this scheme can be seen as a highly general solution to tackle the checkpoint problem in optimistic simulation. A performance comparison with previous solutions is carried out through a simulation study of a store-and-forward communication network in a two-dimensional torus topology",1999,0, 1351,Optimal state prediction for feedback-based QoS adaptations,"In heterogeneous network environments with performance variations present, complex distributed applications, such as distributed visual tracking applications, are desired to adapt themselves and to adjust their resource demands dynamically, in response to fluctuations in either end system or network resources. By such adaptations, they are able to preserve the user-perceptible critical QoS parameters, and trade off non-critical ones. However, correct decisions on adaptation timing and scale, such as determining data rate transmitted from the server to clients in an application, depend on accurate observations of system states, such as quantities of data in transit or arrived at the destination. Significant end-to-end delay may obstruct the desired accurate observation. We present an optimal state prediction approach to estimate current states based on available state observations. Once accurate predictions are made, the applications can be adjusted dynamically based on a control-theoretical model. Finally, we show the effectiveness of our approach with experimental results in a client-server based visual tracking application, where application control and state estimations are accomplished by middleware components",1999,0, 1352,Evaluation of differentiated services using an implementation under Linux,"Current efforts to provide distinct levels of quality-of-service in the Internet are concentrated on the differentiated services (DS) approach. 
In order to investigate the gain for users of those differentiated services, early experiences with implementations with respect to real applications are needed. Simulation models are often not sufficient if a judgement of the behavior under realistic traffic scenarios is desired. Because implementing new functionality into dedicated router hardware is difficult and time-consuming, we focused on a software implementation for standard PC hardware. In this paper we present an implementation of differentiated services functions for a PC-based router running under the Linux operating system. Two per-hop forwarding behaviors for assured service and premium service were realized. Components for traffic conditioning such as traffic meter, token bucket, leaky bucket and traffic shaper were implemented as well as an efficient traffic classificator and queueing disciplines. We describe the design and implementation issues of these components, which were validated in detail by measurements. Evaluation of these measurements shows that the proposed forwarding behaviors work well for boundary and interior routers. But, it also becomes apparent that standard applications using short-lived TCP connections cannot always exploit the requested service completely whereas rate-controlled sending applications are able to take full advantage of it. Furthermore, it is planned to release the implementation to the public for research purposes",1998,0, 1353,Efficient multi-field packet classification for QoS purposes,"Mechanisms for service differentiation in datagram networks, such as the Internet, rely on packet classification in routers to provide appropriate service. Classification involves matching multiple packet header fields against a possibly large set of filters identifying the different service classes. In this paper, we describe a packet classifier based on tries and binomial trees and we investigate its scaling properties in three QoS scenarios that are likely to occur in the Internet. One scenario is based on integrated services and RSVP and the other two are based on differentiated services. By performing a series of tests, we characterize the processing and memory requirements for a software implementation of our classifier. Evaluation is done using real data sets taken from two existing high-speed networks. Results from the IntServ/RSVP tests on a Pentium 200 MHz show that it takes about 10.5 μs per packet and requires 2000 KBytes of memory to classify among 11000 entries. Classification for a virtual leased line service based on DiffServ with the same number of entries takes about 9 μs per packet and uses less than 250 KBytes of memory. With an average packet size of 2000 bits, our classifier can manage data rates of about 200 Mbit/s on a 200 MHz Pentium. We conclude that multi-field classification is feasible in software and that high-performance classifiers can run on low-cost hardware",1999,0, 1354,Determination of functional domains for use with functional size measurement-opportunities to classify software from a business perspective,"Functional size measurement (FSM) is the focus of ISO/IEC Project 14143. This project has five parts, ranging from the recently published Part 1 (“Concepts of FSM”) to Part 5 (“Determination of Functional Domains for Use with FSM”), which is in development as a Technical Report Type 2 (TR2). This paper outlines the basic principles of FSM and why the functional domain concept has been seen as important to the overall project success. 
It covers the rationale behind this sub-project, its relevance to the software engineering world and how the functional domain topic extends far beyond the realm of simple FSM. The paper profiles the state of the software industry today and how the classifications of software in modern literature are insufficient for the needs of FSM. The current state of this TR2-in-progress is also presented, together with the various opinions of the international community involved in its development",1999,0, 1355,A systematic DFT procedure for library cells,"We present an automated procedure for improving the testability of a product by improving the testability of cells in the cell library. This method was applied to a scan flip-flop from Cyrix's standard cell library. Based on this analysis, some design and layout changes were suggested, which brought down the probability of difficult-to-detect faults by 70%, without compromising the performance or increasing the area of the circuit",1999,0, 1356,1999 IEEE International Conference on Communications (Cat. No. 99CH36311),"The following topics were dealt with: antennas; multiuser receivers and interference mitigation; turbo codes; ABR traffic control in ATM networks; OFDM and multicarrier radio systems; QoS and performance issues in advanced communications; last mile technologies; communication software; tactical communications; mobile Internet; architecture; traffic modelling and applications; coding techniques; QoS routing techniques for B-ISDN; signal processing for wireless communications; information transport using optical architecture; infrared access; wireless networks management; enterprise network management; Internet; scheduling and resource management; handover; ATM network applications; modulation and coding; multimedia traffic shaping and prediction in high speed networks; multiuser detection and interference cancellation; interactive video service and applications; network management; satellite communication systems design; multi-access systems; radio resource management; multi-access protocols; radio system synchronization and equalization; networked delivery of multimedia applications and services; distributed management; ATM switch architecture; reliable multimedia wireless networks; antenna diversity, beamforming and smart antennas; radio propagation and channel impairments; applications of coding for storage; wireless multimedia; optical system design; mobility management; wireless ATM; CDMA and spread spectrum communication; signal processing and detection for storage; WDM routing and performance management",1999,0, 1357,A software radio architecture for linear multiuser detection,"The integration of multimedia services over wireless channels calls for provision of variable quality of service (QoS) requirements. While radio resource management algorithms (such as power control and call admission control) can provide certain levels of variability in QoS, an alternate approach is to use reconfigurable radio architectures to provide diverse QoS guarantees. We outline a novel reconfigurable architecture for linear multiuser detection, thereby providing a wide range of bit error rate (BER) requirements amongst the constituent receivers of the reconfigurable architecture. Specifically, we focus on achieving this dynamic reconfiguration via a software radio implementation of linear multiuser receivers. 
Using a unified framework for achieving this reconfiguration, we partition functionality into two core technologies [field programmable gate arrays (FPGA) and digital signal processor (DSP) devices] based on processing speed requirements. We present experimental results on the performance and reconfigurability of the software radio architecture as well as the impact of fixed point arithmetic (due to hardware constraints)",1999,0, 1358,Fault diagnosis of electrical machines-a review,"Recently, research has picked up a fervent pace in the area of fault diagnosis of electrical machines. Like adjustable speed drives, fault prognosis has become almost indispensable. The manufacturers of these drives are now keen to include diagnostic features in the software to decrease machine down time and improve salability. Prodigious improvement in signal processing hardware and software has made this possible. Primarily, these techniques depend upon locating specific harmonic components in the line current, also known as motor current signal analysis (MCSA). These harmonic components are usually different for different types of faults. However with multiple faults or different varieties of drive schemes, MCSA can become an onerous task as different types of faults and time harmonics may end up generating similar signatures. Thus other signals such as speed, torque, noise, vibration etc., are also explored for their frequency contents. Sometimes, altogether different techniques such as thermal measurements, chemical analysis, etc., are also employed to find out the nature and the degree of the fault. It is indeed evident that this area is vast in scope. Hence, keeping in mind the need for future research, a review paper describing the different types of fault and the signatures they generate and their diagnostics schemes, will not be entirely out of place. In particular, such a review helps to avoid repetition of past work and gives a bird's eye-view to a new researcher in this area",1999,0, 1359,"It ain't what you charge, it's the way that you do it: a user perspective of network QoS and pricing","Shared networks, such as the Internet, are fast becoming able to support heterogeneous applications and a diverse user community. In this climate, it becomes increasingly likely that some form of pricing mechanism will be necessary in order to manage the quality of service (QoS) requirements of different applications. So far, research in this area has focused on technical mechanisms for implementing QoS and charging. This paper reports a series of studies in which users' perceptions of QoS, and their attitudes to a range of pricing mechanisms, were investigated. We found that users' knowledge and experience of networks, and the real-world task they perform with applications, determine their evaluation of QoS and attitude to payment. Users' payment behavior is governed by their level of confidence in the performance of salient QoS parameters. User confidence, in turn, depends on a number of other factors. In conclusion, we argue that charging models that undermine user confidence are not only undesirable from the users' point of view, but may also lead to user behavior that may have a negative impact on QoS",1999,0, 1360,Adaptive network/service fault detection in transaction-oriented wide area networks,"Algorithms and online software for automated and adaptive detection of network/service anomalies have been developed and field-tested for transaction-oriented wide area networks (WAN). 
These transaction networks are integral parts of electronic commerce infrastructures. Our adaptive network/service anomaly detection algorithms are demonstrated in a commercially important production WAN, currently monitored by our recently implemented real-time software system, TRISTAN (transaction instantaneous anomaly notification). TRISTAN adaptively and proactively detects network/service performance degradations and failures in multiple service-class transaction-oriented networks, where performances of service classes are mutually dependent and correlated, and where external or environmental factors can strongly impact network and service performances. In this paper, we present the architecture, summarize the implemented algorithms, and describe the operation of TRISTAN as deployed in the AT&T transaction access services (TAS) network. TAS is a commercially important, high volume, multiple service classes, hybrid telecommunication and data WAN that services transaction traffic in the USA and neighboring countries. It is demonstrated that TRISTAN detects network/service anomalies in TAS effectively. TRISTAN can automatically and dynamically detect network/service faults, which can easily elude detection by the traditional alarm-based network monitoring systems",1999,0, 1361,IP-address lookup using LC-tries,"There has been a notable interest in the organization of routing information to enable fast lookup of IP addresses. The interest is primarily motivated by the goal of building multigigabit routers for the Internet, without having to rely on multilayer switching techniques. We address this problem by using an LC-trie, a trie structure with combined path and level compression. This data structure enables us to build efficient, compact, and easily searchable implementations of an IP-routing table. The structure can store both unicast and multicast addresses with the same average search times. The search depth increases as Θ(log log n) with the number of entries in the table for a large class of distributions, and it is independent of the length of the addresses. A node in the trie can be coded with four bytes. Only the size of the base vector, which contains the search strings, grows linearly with the length of the addresses when extended from 4 to 16 bytes, as mandated by the shift from IP version 4 to IP version 6. We present the basic structure as well as an adaptive version that roughly doubles the number of lookups/s. More general classifications of packets that are needed for link sharing, quality-of-service provisioning, and multicast and multipath routing are also discussed. Our experimental results compare favorably with those reported previously in the research literature",1999,0, 1362,Group support systems and deceptive communication,"Electronic communication is becoming more pervasive worldwide with the spread of the Internet, especially through the World Wide Web and electronic mail. Yet, as with all human communication, electronic communication is vulnerable to deceit on the part of senders, and to the less than stellar performance of most people at detecting deceit aimed at them. Despite considerable research over the years into both computer-mediated communication and into deception, there has been little if any research at the intersection of these two research streams. In this paper, we review these two streams and suggest a research model for investigating their intersection, focusing on group support systems as an electronic medium. 
We focus specifically on research questions concerning how successful people are at deceiving others through computer-mediated communication, and how successful people are at detecting such deception. We also suggest three propositions guiding research in this area.",1999,0, 1363,Software reliability as a function of user execution patterns,"Assessing the reliability of a software system has always been an elusive target. A program may work very well for a number of years and this same program may suddenly become quite unreliable if its mission is changed by the user. This has led to the conclusion that the failure of a software system is dependent only on what the software is currently doing. If a program is always executing a set of fault free modules, it will certainly execute indefinitely without any likelihood of failure. A program may execute a sequence of fault prone modules and still not fail. In this particular case, the faults may lie in a region of the code that is not likely to be expressed during the execution of that module. A failure event can only occur when the software system executes a module that contains faults. If an execution pattern that drives the program into a module that contains faults is never selected, then the program will never fail. Alternatively, a program may execute successfully a module that contains faults just as long as the faults are in code subsets that are not executed. The reliability of the system then, can only be determined with respect to what the software is currently doing. Future reliability predictions will be bound in their precision by the degree of understanding of future execution patterns. We investigate a model that represents the program sequential execution of modules as a stochastic process. By analyzing the transitions between modules and their failure counts, we may learn exactly where the system is fragile and under which execution patterns a certain level of reliability can be guaranteed.",1999,0, 1364,Performance benefits of optimism in fossil collection,"Each process in a time-warp parallel simulation requires a time-varying set of state and event histories to be retained for recovering from erroneous computations. Erroneous computation is discovered when straggler messages arrive with time-stamps in the process's past. The traditional method of determining the set of histories to retain has been through the estimation of a global virtual time (GVT) for the distributed simulation. A distributed GVT calculation requires an estimation of the global progress of the simulation during a real-time interval. Optimistic fossil collection (OFC) predicts a bound for the needed histories using local information, or previously collected information, that enables the process to continue. In most cases, OFC requires less communication overhead and less memory usage, and estimates the set of committed events faster. These benefits come at the cost of a possible penalty of having to recover from a state history that was incorrectly fossil-collected (an OFC fault). Sufficiently lightweight checkpointing and recovery techniques compensate for this possibility while yielding good performance. In this paper, the requirements of an OFC-based simulator (algorithm has been implemented in the WARPED time-warp parallel discrete-event simulator) are detailed along with a presentation of results from an OFC simulation. 
Performance statistics are given comparing the execution time and required memory usage of each logical process for different fossil collection methods.",1999,0, 1365,Validation of ERS differential SAR interferometry for land subsidence mapping: the Bologna case study,"The city of Bologna, Italy, is ideal to assess the potential of ERS differential SAR interferometry for land subsidence mapping in urban areas for a couple of reasons: the subsiding area is large and presents important velocities of the vertical movements; there is a typical spatial gradient of the vertical movements; many ERS SAR frames are available; a large scientific community is involved in the study of subsidence; a large amount of levelling data is available. The authors analyze a time series of ERS-1/2 data from August 1992 to May 1996 and compare the subsidence maps derived from ERS SAR interferometry and levelling surveys. The authors conclude that for the mapping of land subsidence in urban environments ERS differential SAR interferometry is complementary to levelling surveys and GPS with regard to cost effectiveness, resolution and accuracy",1999,0, 1366,Design and evaluation of system-level checks for on-line control flow error detection,"This paper evaluates the concurrent error detection capabilities of system-level checks, using fault and error injection. The checks comprise application and system level mechanisms to detect control flow errors. We propose Enhanced Control-Flow Checking Using Assertions (ECCA). In ECCA, branch-free intervals (BFI) in a given high or intermediate level program are identified and the entry and exit points of the intervals are determined. BFIs are then grouped into blocks, the size of which is determined through a performance/overhead analysis. The blocks are then fortified with preinserted assertions. For the high level ECCA, we describe an implementation of ECCA through a preprocessor that will automatically insert the necessary assertions into the program. Then, we describe the intermediate implementation possible through modifications made on gcc to make it ECCA capable. The fault detection capabilities of the checks are evaluated both analytically and experimentally. Fault injection experiments are conducted using FERRARI to determine the fault coverage of the proposed techniques",1999,0, 1367,A prioritized handoff dynamic channel allocation strategy for PCS,"An analytical method is developed to calculate the blocking probability (pb), the probability of handoff failure (ph), the forced termination probability (pft), and the probability that a call is not completed (pnc) for the no priority (NPS) and reserved channel (RCS) schemes for handoff, using fixed channel allocation (FCA) in a microcellular system. Based only on the knowledge of the new call arrival rate, a method of assessing the handoff arrival rate for any kind of traffic is derived. The analytical method is valid for uniform and nonuniform traffic distributions and is verified by computer simulations. An extension (generalization) to the nonuniform compact pattern allocation algorithm is presented as an application of this analysis. Based on this extended concept, a modified version of a dynamic channel allocation strategy (DCA) called compact pattern with maximized channel borrowing (CPMCB) is presented. 
With modifications, it is shown that CPMCB is a self-adaptive prioritized handoff DCA strategy with enhanced performance that can be exploited in a personal communications service (PCS) environment leading either to a reduction in infrastructure or to an increase in capacity and grade of service. The effect of user mobility on the grade of service is also considered using CPMCB",1999,0, 1368,Performance of quasi-exact cone-beam filtered backprojection algorithm for axially truncated helical data,"Concerns the problem of tomographic reconstruction from axially truncated cone-beam projections acquired with a helical vertex path. H. Kudo et al. (1998) developed a quasi-exact filtered backprojection algorithm for this problem. This paper evaluates the performance of the quasi-exact algorithm by comparing it with variants of the approximate Feldkamp (1984) algorithm. The results show that the quasi-exact algorithm possesses the following advantages over the Feldkamp algorithm. The first advantage is the improved image quality when the pitch of the helix is large. The second advantage is the use of a detector area close to the minimum detector area that is compatible with exact reconstruction. This area is shown to be much smaller than that required by the Feldkamp algorithm for typical geometries. One disadvantage of the quasi-exact algorithm is the computation time, which is much longer than for the Feldkamp algorithm",1999,0, 1369,On classes of problems in asynchronous distributed systems with process crashes,"This paper is on classes of problems encountered in asynchronous distributed systems in which processes can crash but links are reliable. The hardness of a problem is defined with respect to the difficulty to solve it despite failures: a problem is easy if it can be solved in presence of failures, otherwise it is hard. Three classes of problems are defined: F, NF and NFC. F is the class of easy problems, namely, those that can be solved in presence of failures (e.g., reliable broadcast). The class NF includes harder problems, namely, the ones that can be solved in a non-faulty system (e.g., consensus). The class NFC (NF-complete) is a subset of NF that includes the problems that are the most difficult to solve in presence of failures. It is shown that the terminating reliable broadcast problem, the non-blocking atomic commitment problem and the construction of a perfect failure detector (problem P) are equivalent problems and belong to NFC. Moreover the consensus problem is not in NFC. The paper presents a general reduction protocol that reduces any problem of NF to P. This shows that P is a problem that lies at the core of distributed fault-tolerance",1999,0, 1370,Tighter timing predictions by automatic detection and exploitation of value-dependent constraints,"Predicting the worst case execution time (WCET) of a real time program is a challenging task. Though much progress has been made in obtaining tighter timing predictions by using techniques that model the architectural features of a machine, significant overestimations of WCET can still occur. Even with perfect architectural modeling, dependencies on data values can constrain the outcome of conditional branches and the corresponding set of paths that can be taken in a program. While value-dependent constraint information has been used in the past by some timing analyzers, it has typically been specified manually, which is both tedious and error prone. 
The paper describes efficient techniques for automatically detecting value-dependent constraints by a compiler and automatically exploiting these constraints within a timing analyzer. The result is tighter timing analysis predictions without requiring additional interaction with a user",1999,0, 1371,Extending software quality assessment techniques to Java systems,"The paper presents extensions to Bell Canada source code quality assessment suite (DATRIX tm) for handling Java language systems. Such extensions are based on source code object metrics, including Java interface metrics, which are presented and explained in detail. The assessment suite helps to evaluate the quality of medium-large software systems identifying parts of the system which have unusual characteristics. The paper also studies and reports the occurrence of clones in medium-large Java software systems. Clone presence affects quality since it increases a system size and often leads to higher maintenance costs. The clone identification process uses Java specific metrics to determine similarities between methods throughout a system. The results obtained from experiments with software evaluation and clone detection techniques, on over 500 KLOC of Java source code, are presented",1999,0, 1372,Application-level measurements of performance on the vBNS,"We examine the performance of high-bandwidth multimedia applications over high-speed wide-area networks, such as the vBNS (Internet2). We simulate a teleimmersion application that sends 30 Mbps (2,700 datagrams per second) to various sites around the USA, and measure network- and application-level loss and throughput. We found that over the vBNS, performance is affected more by the packet-rate generated by an application than by its bit-rate. We also assess the amount of error recovery and buffering needed in times of congestion to present an effective multimedia stream to the user. Lastly, we compare our application-level measurements with data taken from a constellation of GPS-synchronized network probes (IPPM Surveyors). The comparison suggests that network-level probes will not provide information that a multimedia application can use to adapt its behavior to improve performance",1999,0, 1373,Measuring OO systems: a critical analysis of the MOOD metrics,"In parallel with the rise to prominence of the OO paradigm has come the acceptance that conventional software metrics are not adequate to measure object oriented systems. This has inspired a number of software practitioners and academics to develop new metrics that are suited to the OO paradigm. Arguably, the most thorough treatment of the subject is that of the MOOD team, under the leadership of Abreau. The MOOD metrics have been subjected to much empirical evaluation, with claims made regarding the usefulness of the metrics to assess external attributes such as quality and maintainability. We evaluate the MOOD metrics on a theoretical level and show that any empirical validation is premature, due to the majority of the MOOD metrics being fundamentally flawed. The metrics either fail to meet the MOOD team's own criteria or are founded on an imprecise, and in certain cases inaccurate, view of the OO paradigm. We propose our own solutions to some of these anomalies and clarify some important aspects of OO design, in particular those aspects that may cause difficulties when attempting to define accurate and meaningful metrics. 
The suggestions we make are not limited to the MOOD metrics but are intended to have a wider applicability in the field of OO metrics",1999,0, 1374,Trusted components - 2nd Workshop on Trusted Components,"Summary form only given, as follows. Using “off-the-shelf” software components in mission-critical applications is not yet as commonplace as in desktop applications. Opposite to electronic components, there is a lack of methods and quality estimates in the software domain which makes it difficult to build trust into software components. While electronic devices have sets of measures characterizing their quality (reliability, performances, use-domain, speed scale), no real consensus exists to measure such quality characteristics for software components. A second problem is the software component expected capability to evolve (addition of new functionality, implementation change) and to be adapted to various environments. Trusted components are reusable software elements equipped with a strong presumption of high quality, based on a combination of convincing arguments of various kinds: technical (e.g. Design by Contract, thorough and meaningful testing, appropriate programming language, formal proofs), managerial (e.g. systematic process based on CMM or similar), social (e.g. reputation of the components’ authors). The idea of trusted components was introduced in an IEEE Computer article of May 1998 and earlier presentations in the same journal (Object Technology column). A first Trusted Components workshop was held at Monash University (Melbourne, Australia) at the end of TOOLS PACIFIC 1998 on November 28, 1998. The purpose of this workshop is to set up a forum where interested people can share their point of view on the idea of trusted components and identify research directions to help the industry move towards this goal.",1999,0, 1375,Admission control and QoS negotiations for soft-real time applications,"Systems are increasingly supporting a new class of multimedia applications that allow degradation of the application quality so that they use reduced amounts of resources. Management of such applications requires new mechanisms to be developed for admission control, negotiation, allocation and scheduling. We present a new admission control algorithm that exploits the degradability property of applications to improve the performance of the system. The algorithm is based on setting aside a portion of the resources as reserves and managing it intelligently so that the total utility of the system is maximized. The mixed greedy and predictive strategy leads to an efficient protocol that also improves the performance of the system. We use the constructs of application benefit functions and resource demand functions in the integrated admission control and negotiation protocol. Extensive simulation experiments are presented in the paper to evaluate the performance of the novel mechanisms and compare it to some of the other methods used in the past",1999,0, 1376,"Quasi-synchronous checkpointing: Models, characterization, and classification","Checkpointing algorithms are classified as synchronous and asynchronous in the literature. In synchronous checkpointing, processes synchronize their checkpointing activities so that a globally consistent set of checkpoints is always maintained in the system. Synchronizing checkpointing activity involves message overhead and process execution may have to be suspended during the checkpointing coordination, resulting in performance degradation. 
In asynchronous checkpointing, processes take checkpoints without any coordination with others. Asynchronous checkpointing provides maximum autonomy for processes to take checkpoints; however, some of the checkpoints taken may not lie on any consistent global checkpoint, thus making the checkpointing efforts useless. Asynchronous checkpointing algorithms in the literature can reduce the number of useless checkpoints by making processes take communication induced checkpoints besides asynchronous checkpoints. We call such algorithms quasi-synchronous. In this paper, we present a theoretical framework for characterizing and classifying such algorithms. The theory not only helps to classify and characterize the quasi-synchronous checkpointing algorithms, but also helps to analyze the properties and limitations of the algorithms belonging to each class. It also provides guidelines for designing and evaluating such algorithms",1999,0, 1377,Fault injection spot-checks computer system dependability,"Computer-based systems are expected to be more and more dependable. For that, they have to operate correctly even in the presence of faults, and this fault tolerance of theirs must be thoroughly tested by the injection of faults both real and artificial. Users should start to request reports from manufacturers on the outcomes of such experiments, and on the mechanisms built into systems to handle faults. To inject artificial physical faults, fault injection offers a reasonably mature option today, with Swift tools being preferred for most applications because of their flexibility and low cost. To inject software bugs, although some promising ideas are being researched, no established technique yet exists. In any case, establishing computer system dependability benchmarks would make tests much easier and enable comparison of results across different machines",1999,0, 1378,The average availability of parallel checkpointing systems and its importance in selecting runtime parameters,"Performance prediction of checkpointing systems in the presence of failures is a well-studied research area. While the literature abounds with performance models of checkpointing systems, none address the issue of selecting runtime parameters other than the optimal checkpointing interval. In particular the issue of processor allocation is typically ignored. In this paper we briefly present a performance model for long-running parallel computations that execute with checkpointing enabled. We then discuss how it is relevant to today's parallel computing environments and software, and present case studies of using the model to select runtime parameters.",1999,0, 1379,Evaluating the effectiveness of fault tolerance in replicated database management systems,"Database management systems (DBMS) achieve high availability and fault tolerance usually by replication. However fault tolerance does not come for free. Therefore, DBMSs serving critical applications with real time requirements must find a trade-off between fault tolerance cost and performance. The purpose of this study is two-fold. It evaluates the effectiveness of DBMS fault tolerance in the presence of corruption in database buffer cache, which poses a serious threat to the integrity requirement of the DBMSs. The first experiment of this study evaluates the effectiveness of fault tolerance, and the fault impact on database integrity, performance, and availability on a replicated DBMS, ClustRa, in the presence of software faults that corrupt the volatile data buffer cache. 
The second experiment identifies the weak data structure components in the data buffer cache that give fatal consequences when corrupted, and suggests the need for some forms of guarding them individually or collectively.",1999,0, 1380,Performance and reliability evaluation of passive replication schemes in application level fault tolerance,"Process replication is provided as the central mechanism for application level software fault tolerance in SwiFT and DOORS. These technologies, implemented as reusable software modules, support cold and warm schemes of passive replication. The choice of a scheme for a particular application is based on its availability and performance requirements. In this paper we analyze the performability of a server software which may potentially use these technologies. We derive closed form formulae for availability, throughput, and probability of loss of a job. Six scenarios of loss are modeled and for each, these expressions are derived. The formulae can be used either off-line or online to determine the optimal replication scheme.",1999,0, 1381,Wrapping windows NT software for robustness,"As Windows NT workstations become more entrenched in enterprise-critical and even mission-critical applications, the dependability of the Windows 32-bit (Win32) platform is becoming critical. To date, studies on the robustness of system software have focused on Unix-based systems. This paper describes an approach to assessing the robustness for Win32 software and providing robustness wrappers for third party commercial off-the-shelf (COTS) software. The robustness of Win32 applications to failing operating system (OS) functions is assessed by using fault injection techniques at the interface between the application and the operating system. Finally, software wrappers are developed to handle OS failures gracefully in order to mitigate catastrophic application failures.",1999,0, 1382,Forecasting with fuzzy neural networks: a case study in stock market crash situations,"Neural networks have been used for forecasting purposes for some years now. The problem of the black-box approach often arises, i.e., after having trained neural networks to a particular problem, it is almost impossible to analyse them for how they work. Fuzzy neuronal networks allow one to add rules to neural networks. This avoids the black-box-problem. Additionally they are supposed to have a higher prediction precision in different situations. A case study describes a comparison of fuzzy neural networks and the classical approach during the stock market crashes of 1987 and 1998. It can be found that rules generate a more stable prediction quality, while the performance is not as good as when using classical neural networks",1999,0, 1383,Dynamic fault detection approaches,"Several dynamic acceptance test approaches, applicable to any linear, causal, time-invariant control system, are developed. The acceptance tests perform a reasonableness check on controller outputs, based upon critical control theory. The dynamic approach checks the control system online and in run-time and reacts to controller demand changes as the control system operates. Theory is presented to enable one to place dynamic magnitude and rate bounds on controller output signals in the time domain. 
Practical implementation of the acceptance tests into realistic control systems is discussed leading to verification of a highly successful fault detection technique",1999,0, 1384,Fault detection in boilers using canonical variate analysis,"A software based system was developed that monitors industrial boilers on-line for the purpose to detect and identify fault conditions. Dynamic models are used to predict an expected set of process conditions, which form the baseline for evaluation of process conditions. By continuously monitoring the deviations between the baseline and actual observations, insight in the presence of certain faults can be obtained. An online expert system is used to draw conclusions from the observed prediction errors. The dynamic models are based on state space models which are generated by system identification using canonical variate analysis (CVA). The objectives of the system, the approach to the fault detection process and experience with a commercially available system identification tool are discussed",1999,0, 1385,Availability modeling of modular software,"Dependability evaluation is a basic component in assessing the quality of repairable systems. A general model (Op) is presented and is specifically designed for software systems; it allows the evaluation of various dependability metrics, in particular, of availability measures. Op is of the structural type, based on Markov process theory. In particular, Op is an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. This paper gives the: mathematical results necessary to the transient analysis of this general model; and algorithms that can efficiently evaluate it. More specifically, from the parameters describing the: evolution of the execution process when there is no failure; failure processes together with the way they affect the execution; and recovery process, the results are obtained for the: distribution function of the number of failures in a fixed mission; and dependability metrics which are much more informative than the usual ones in a white-box approach. The estimation procedures of the Op parameters are briefly discussed. Some simple examples illustrate the interest in such a structural view and explain how to consider reliability growth of part of the software with the transformation approach developed by Laprie et al. The complete transient analysis of Op allows discussion of the Poisson approximation by Littlewood for his model",1999,0, 1386,Classification and correlation of monitored power quality events,"Summary form only given. The presentation provides an overview of the existing standards for characterising the quality of electric service to high-tech facilities. The measurement requirements and characteristics of power quality measurements to fully analyse and classify the various PQ events are discussed. The specifications of state-of-the-art PQ monitoring instrumentation are discussed. The requirements of the hardware, the firmware and the software are discussed. These specifications cover the actual data capture, the communication of the instruments to central servers, and the analysis and GUI software requirements. The presentation also includes analysis of events captured at a high-tech facility in the Bay Area of San Francisco, USA. Of special interest is the major event due to a short circuit at one of the substations in the utility's system. 
The effects of the resulting voltage sag are presented, showing the effect on various systems. This event, and other minor PQ events recorded at the high tech facility are correlated with some of the commonly used PQ standards, and the results are presented",1999,0, 1387,Integrated design approach for real-time embedded systems,"System designers face many problems during the design and implementation of real time embedded systems. Not only must such systems produce correct responses (logical correctness), they must also produce these at the correct time (temporal correctness). The paper presents results of research undertaken to develop an integrated design, development and run time environment for embedded systems that addresses the issues of: design correctness; software partitioning; performance prediction (timeliness, utilisation etc.); task management; and reliable, fast and secure run time behaviour. A description of a prototype tool set, based on a networked PC/UNIX system, is given",1999,0, 1388,Using cultural algorithms to improve performance in semantic networks,"Evolutionary computation has been successfully applied in a variety of problem domains and applications. We describe the use of a specific form of evolutionary computation known as cultural algorithms to improve the efficiency of the subsumption algorithm in semantic networks. Subsumption is the process that determines if one node in the network is a child of another node. As such, it is utilized as part of the node classification algorithm within semantic network based applications. One method of improving subsumption efficiency is to reduce the number of attributes that need to be compared for every node without impacting the results. We suggest that a cultural algorithm approach can be used to identify these defining attributes that are most significant for node retrieval. These results can then be utilized within an existing vehicle assembly process planning application that utilizes a semantic network based knowledge base to improve the performance and reduce complexity of the network",1999,0, 1389,Learning to assess the quality of genetic programs using cultural algorithms,"We explore solution generalizability, bloat, and effective length as examples of software quality issues and measurements that are useful in the analysis of genetic programming (GP) solution programs. The total program size of GP solution programs can be partitioned into effective program size and several types of excess code for which, following Angeline, we use the term “bloat” (P.J. Angeline, 1998). We define several types of bloat: local, global, and representational. We use a cultural algorithm tool called the Metrics Apprentice to explore the relationships between the generalizability of the programmed solution, program size, and the effect of three GP processes purported to reduce bloat",1999,0, 1390,An open interface for probabilistic models of text,"Summary form only given. An application program interface (API) for modelling sequential text is described. The API is intended to shield the user from details of the modelling and probability estimation process. This should enable different implementations of models to be replaced transparently in application programs. The motivation for this API is work on the use of textual models for applications in addition to strict data compression. The API is probabilistic, that is, it supplies the probability of the next symbol in the sequence. 
It is general enough to deal accurately with models that include escapes for probabilities. The concepts abstracted by the API are explained together with details of the API calls. Such predictive models can be used for a number of applications other than compression. Users of the models do not want to be concerned about the details either of the implementation of the models or how they were trained and the sources of the training text. The problem considered is how to permit code for different models and actual trained models themselves to be interchanged easily between users. The fundamental idea is that it should be possible to write application programs independent of the details of particular modelling code, that it should be possible to implement different modelling code independent of the various applications, and that it should be possible to easily exchange different pre-trained models between users. It is hoped that this independence will foster the exchange and use of high-performance modelling code, the construction of sophisticated adaptive systems based on the best available models, and the proliferation and provision of high-quality models of standard text types such as English or other natural languages, and easy comparison of different modelling techniques",1999,0, 1391,PLS modelling and fault detection on the Tennessee Eastman benchmark,"This paper describes the application of multivariate regression techniques to the Tennessee Eastman benchmark process. Two methods are applied: linear partial least squares, and a nonlinear variant of this procedure using a radial basis function inner relation. These methods are used to create online inferential models of delayed process measurement. The redundancy so obtained is then used to generate a fault detection and isolation scheme for these sensors. The effectiveness of this scheme is demonstrated on a number of test faults",1999,0, 1392,Experiences from testing of functionality and software of meters for electrical energy,"The de-regulation of the electricity market in many European countries has increased the need for electricity meters with enhanced functionality. Both industrial and household customers are using the new types of meters. The dependability of the new technology has in some cases been discussed. The experiences from the evaluations carried out at SP Swedish National Testing and Research Institute for approximately 15 different electronic meters are that there are ways to test the functionality and the embedded software. The effort depends on the complexity of the system, the size of the code and the documentation available. All faults cannot be claimed to be detected, but that is not the objective. The aim must be to reach a certain level of confidence in the product, not to declare it fault-free. It is possible to design an electronic meter for electrical energy which will be accurate, withstand environmental disturbances, provide adequate safety against shock and fire, functioning according to specification, and dependability",1999,0, 1393,An effective approach for utilities and suppliers to assess the quality of new products before purchase,"Metering technology has advanced from the basic Ferraris technology to microprocessor-controlled meters with solid state sensors and electronic register displays. The functional requirement for the microprocessor-controlled meters and modules has grown dramatically. 
There may be thousands of lines of software code in an electronic meter to support this additional functionality. Yet the existing type approval methods really only test the basic metering functionality. Should other arrangements be made for type approval of the software component, to reduce the risk for the purchaser of the end equipment? This paper discusses the problems faced by purchasers and manufacturers of utility equipment relying on existing type approval methods, and the philosophy which appears to be emerging from a number of national and European groups to solve some of these problems. The authors propose a cost-effective method that should address these problems for the manufacturers and provide a high degree of quality assurance to the purchasers. The paper focuses on software quality processes but a similar approach could be applied to the hardware aspects of product development. The discussion refers to revenue and payment metering as examples but the ideas proposed could be applied equally well to other products",1999,0, 1394,A new metrics set for evaluating testing efforts for object-oriented programs,"Software metrics proposed and used for procedural paradigm have been found inadequate for object oriented software products, mainly because of the distinguishing features of the object oriented paradigm such as inheritance and polymorphism. Several object oriented software metrics have been described in the literature. These metrics are goal driven in the sense that they are targeted towards specific software qualities. We propose a new set of metrics for object oriented programs; this set is targeted towards estimating the testing efforts for these programs. The definitions of these metrics are based on the concepts of object orientation and hence are independent of the object oriented programming languages. The new metrics set has been critically compared with three other metrics sets published in the literature",1999,0, 1395,Adaptive fuzzy controller for predicting order of concurrent actions in real-time systems,"In controlling a real-time system with a large number of controlling points and when the time spent for setting up the reacting actions is longer than the deadline, we have to predict the reacting actions before receiving complete information about them. These actions are chosen from the concurrent past actions based on the rank of their utilities. Thus in predicting the rank we have to estimate the indefinite qualities defined by the uncompleted information. The paper presents a hardware approach to predict the order of concurrent actions based on a fuzzy and adaptive mechanism which measures the possible utilities given in intervals with additional statistical information. In order to process efficiently the incomplete data we design a fuzzy hardware controller (FHC) which specializes on ranking the intervals according to the observed data in the past. The analysis shows that the fuzzy measurement implemented by a hardware specialized controller with parallel architecture is able to reduce significantly the processing time, compared with implementation by software running on a general computer. Moreover, the FHC can adapt to the types of statistical data which is also safer and can be updated easily within the common memory of the FHC.",1999,0, 1396,Software fault tolerance in a clustered architecture: techniques and reliability modeling,"System architectures based on a cluster of computers have gained substantial attention recently. 
In a clustered system, complex software-intensive applications can be built with commercial hardware, operating systems, and application software to achieve high system availability and data integrity, while performance and cost penalties are greatly reduced by the use of separate error detection hardware and dedicated software fault tolerance routines. Within such a system a watchdog provides mechanisms for error detection and switch-over to a spare or backup processor in the presence of processor failures. The application software is responsible for the extent of the error detection, subsequent recovery actions and data backup. The application can be made as reliable as the user requires, being constrained only by the upper bounds on reliability imposed by the clustered architecture under various implementation schemes. We present reliability modeling and analysis of the clustered system by defining the hardware, operating system, and application software reliability techniques that need to be implemented to achieve different levels of reliability and comparable degrees of data consistency. We describe these reliability levels in terms of fault detection, fault recovery, volatile data consistency, and persistent data consistency, and develop a Markov reliability model to capture these fault detection and recovery activities. We also demonstrate how this cost-effective fault tolerant technique can provide quantitative reliability improvement within applications using clustered architectures",1999,0, 1397,Comparison of probability of non-visibility in case of faults of some satellites in LEO constellations,"In this paper we analyze the coverage performance of satellite constellation in low earth orbit. First we analyze the various possible configurations of satellite failures in the same orbital plane or in different orbital planes when one satellite or a number of satellites are at fault. To evaluate the performance of each constellation we use one parameter, which is the maximum non-visibility time at one receiver position on Earth by using simulation software. We have also simulated the coverage performance variations depending on the receiver position on earth by varying the latitude of the receiver position from 0 to 90 degrees. According to the results of the simulations we compare the maximum non-visibility time depending on the configuration of satellite and identify the worst case combination of satellite failures in satellite constellation. We also evaluate the performances of the satellite constellations in 780 km orbit in case of a fault of some satellite in the same orbital plane or in different orbital plane. Finally we evaluate an orbital manoeuvre strategy (phase changing in the same orbital plane) to minimize the non-visibility time on Earth in case of failure of a number of satellites",1999,0, 1398,A self-validating digital Coriolis mass-flow meter,"A first generation self-validating sensor (SEVA) prototype based on the Coriolis meter consisted of the flowtube and conventional commercial transmitter with a PC. The PC and the modified transmitter together exemplified how a SEVA device would behave. The prototype demonstrated an ability to detect and correct for several fault modes, as well as the generation of SEVA metrics. However, one benefit of carrying out a sensor validation analysis is that it leads to fundamental redesign. 
The identification of the major limitations and fault modes of an instrument, and their impact on measurement quality, provides strong motivation for improvements in the basic design. The idea of an all-digital transmitter design emerged after this first round of validation activity. The primary intention was to replace all the analogue circuitry (other than essential front-end op-amps) with a small number of digital components, and to carry out almost all functionality within software. This paper describes aspects of the improved performance obtained from the digital transmitter: measurement precision, flowtube control, two-phase flow and batching from empty",1999,0, 1399,Prediction error as a quality metric for motion and stereo,"This paper presents a new methodology for evaluating the quality of motion estimation and stereo correspondence algorithms. Motivated by applications such as novel view generation and motion-compensated compression, we suggest that the ability to predict new views or frames is a natural metric for evaluating such algorithms. Our new metric has several advantages over comparing algorithm outputs to true motions or depths. First of all, it does not require the knowledge of ground truth data, which may be difficult or laborious to obtain. Second, it more closely matches the ultimate requirements of the application, which are typically tolerant of errors in uniform color regions, but very sensitive to isolated pixel errors or disocclusion errors. In the paper we develop a number of error metrics based on this paradigm, including forward and inverse prediction errors, residual motion error and local motion-compensated prediction error. We show results on a number of widely used motion and stereo sequences, many of which do not have associated ground truth data",1999,0, 1400,Continuity and contingency planning for medical devices,"The Medical Device Agency's (MDA) key aim is to take all reasonable steps to protect the public health and safeguard the interests of patients and users by ensuring that medical devices and equipment meet appropriate standards of safety, quality and performance and that they comply with relevant Directives of the European Union. It is expected that some computers and other electronic equipment will malfunction because of the Year 2000 date change (and other related dates). MDA's involvement is where such equipment is a medical device. Medical device manufacturers are better placed than manufacturers in some other sectors to identify and rectify whatever date-related problems may exist with their products. However they generally do not know exactly which versions of which products are in service in which places. 
These factors led MDA to identify an appropriate division of responsibility: manufacturers needed to assess each of their products (including any variants of the same model incorporating different software versions or different hardware), to identify any which were affected by year 2000 problems, and to make this information available to users, MDA and other interested parties, with an indication of the appropriate remedial action; users needed to identify what devices they have, by make and model, and if necessary also by serial number and/or software version, and then to contact the manufacturers for information; MDA needed to provide appropriate advice, to encourage manufacturers to provide compliance information, and to investigate any reported non-compliances that might adversely affect the safety of patients or users",1999,0, 1401,A generic platform for scalable access to multimedia-on-demand systems,"Access to multimedia servers is commonly done according to a client/server model where the end user at the client host retrieves multimedia objects from a multimedia server. In a distributed environment, a number of end users may need to access a number of multimedia servers through one or several communication networks. Such a scenario reveals the requirement for a distributed access platform. In addition, the demand for multimedia information is increasing beyond the capabilities of high performance storage devices. Therefore, load distribution and scalability issues must be addressed while designing and implementing the distributed access platform. This paper introduces a scalable access platform (SAP) for managing user access to multimedia-on-demand systems while optimizing resource utilization. The platform is generic and capable of integrating heterogeneous multimedia servers. SAP operation combines static replication and dynamic load distribution policies. It provides run time redirecting of client requests to multimedia servers according to the workload information dynamically collected in the system. To support multimedia-on-demand systems with differing quality-of-service (QoS) requirements, the platform also takes into account, as part of the access process, user QoS requirements and cost constraints. This paper also presents an application of the generic platform implementing a scalable movie-on-demand system, called SMoD. Performance evaluation based on simulation shows that in many cases SMoD can reduce the blocking probability of user requests, and thus can support more users than classical video-on-demand (VoD) systems. It also shows that the load is better distributed across the video servers of the system",1999,0, 1402,Fault emulation: A new methodology for fault grading,"In this paper, we introduce a method that uses the field programmable gate array (FPGA)-based emulation system for fault grading. The real-time simulation capability of a hardware emulator could significantly improve the performance of fault grading, which is one of the most time consuming tasks in the circuit design and test process. We employ a serial fault emulation algorithm enhanced by two speed-up techniques. First, a set of independent faults can be injected and emulated at the same time. Second, multiple dependent faults can be simultaneously injected within a single FPGA-configuration by adding extra circuitry. 
Because the reconfiguration time of mapping the numerous faulty circuits into the FPGA's is pure overhead and could be the bottleneck of the entire process, using extra circuitry for injecting a large number of faults can reduce the number of FPGA-reconfigurations and, thus, improving the performance significantly. In addition, we address the issue of handling potentially detected faults in this hardware emulation environment by using the dual-railed logic. The performance estimation shows that this approach could be several orders of magnitude faster than the existing software approaches for large sequential designs",1999,0, 1403,Off-line testing of reluctance machines,Experimental techniques for determining the machine parameters and results demonstrating their performance on an axially-laminated synchronous reluctance motor are presented. A new software implementation of the control algorithm for conducting an instantaneous flux-linkage test is developed. A low frequency AC test allowing better estimation accuracy compared to a standard 50-Hz test is proposed. Effects of stator slotting on the quality of DC torque test estimates are examined,1999,0, 1404,Effects of uncompensated vector control on synchronous reluctance motor performance,"The performance of a synchronous reluctance motor digital controller based on a saturated dq model without compensation for slotting and iron loss influences is investigated. The model predictions for the machine torque characteristics at various speeds including standstill, and the effects of using simplified control strategies are verified by experimental results. Issues related to the low cost software implementation of the control algorithm are discussed. Quality speed control of an axially-laminated machine has been achieved",1999,0, 1405,Extraction of type style based meta-information from imaged documents,"Extraction of some meta-information from printed documents without an OCR approach is considered. It can be statistically verified that important terms in articles are printed in italic, bold and all capital style. Detection of these type styles helps in automatic extraction of the lines containing titles, authors' names, subtitles, references as well as sentences having important terms occurring in the text. It also helps in improving the OCR performance for reading the italic text. Some experimental results on the performance of the approach on good quality as well as degraded document images are presented",1999,0, 1406,Multifont classification using typographical attributes,"This paper introduces a multifont classification scheme to help with the recognition of multifont and multisize characters. It uses typographical attributes such as ascenders, descenders and serifs obtained from a word image. The attributes are used as an input to a neural network classifier to produce the multifont classification results. It can classify 7 commonly used fonts for all point sizes from 7 to 18. The approach developed in this scheme can handle a wide range of image quality even with severely touching characters. The detection of the font can improve character segmentation as well as character recognition because the identification of the font provides information on the structure and typographical design of characters. Therefore, this multifont classification algorithm can be used for maintaining good recognition rates of a machine printed OCR system regardless of fonts and sizes. 
Experiments have shown that font classification accuracies reach high performance levels of about 95 percent even with severely touching characters. The technique developed for the selected 7 fonts in this paper can be applied to any other fonts",1999,0, 1407,Hybrid modeling for testing intelligent software for Lunar-Mars closed life support,"Intelligent software is being developed for closed life support systems with biological components, for human exploration of the moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. The CONFIG modeling and simulation tool has been used to model components and systems involved in production and transfer of oxygen and carbon dioxide gas in a plant growth chamber and between that chamber and a habitation chamber with physico-chemical systems for gas processing. CONFIG is a multi-purpose discrete event simulation tool that integrates all four types of models, for use throughout the engineering and operations life cycle. Within CONFIG, continuous algebraic quantitative models are used within an abstract discrete event qualitative modeling framework of component modes and activity phases. Component modes and activity phases are embedded in transition digraphs. Flows and flow reconfigurations are efficiently analyzed during simulations. Modeled systems for the Life Support Testbed have included biological plants and crew members, oxygen concentrators and an oxygen transfer control subsystem, and injectors and flow controllers in a carbon dioxide control subsystem. To test the discrete control software, some elements of the lower level control layer and higher level planning layer of the intelligent software architecture are modeled, using CONFIG activity models. CONFIG simulations show effects of events on a system, including control action or failures, local and remote effects, and behavioral and functional effects, the time course of effects, and how effects may be detected. The CONFIG models are interfaced to the discrete control software layer and used to perform dynamic interactive testing of the software in nominal and off-nominal scenarios",1999,0, 1408,Artificial immune systems in industrial applications,"Artificial immune system (AIS) is a new intelligent problem-solving technique that is being used in some industrial applications. This paper presents an immunity-based algorithm for tool breakage detection. The method is inspired by the negative-selection mechanism of the immune system, which is able to discriminate between the self (body elements) and the nonself (foreign pathogens). However, in our industrial application, the self is defined to be normal cutting operations and the nonself is any deviation beyond allowable variations of the cutting force. The proposed algorithm is illustrated with a simulation study of milling operations. The performance of the algorithm in detecting the occurrence of tool breakage is reported. The results show that the negative-selection algorithm detected tool breakage in all test cases",1999,0, 1409,Software quality maintenance model,"We develop a quality control and prediction model for improving the quality of software delivered by development to maintenance. This model identifies modules that require priority attention during development and maintenance. 
The model also predicts during development the quality that will be delivered to maintenance. We show that it is important to perform a marginal analysis when making a decision about how many metrics to include in a discriminant function. If many metrics are added at once, the contribution of individual metrics is obscured. Also, the marginal analysis provides an effective rule for deciding when to stop adding metrics. We also show that certain metrics are dominant in their effects on classifying quality and that additional metrics are not needed to increase the accuracy of classification. Data from the Space Shuttle flight software are used to illustrate the model process",1999,0, 1410,"Software reliability: assumptions, realities and data",Software reliability models are an important issue to identify the actual state of the system during the quality assurance process and to predict the customer release reliability. Major assumptions of state of the art models are compared with some realities of a very large system from industry. Data (failure statistics) observed during the quality assurance process and after 7 years of customer release (1990-96) are supplied to validate existing or upcoming research models,1999,0, 1411,Quality improvement in switching-system software,"The paper describes the effects on switching system software quality of development and maintenance activities. The effect of the application of object oriented methods and quantitative quality control methods in the development of switching system programs is evaluated by comparing the number of errors after shipping to that of earlier systems. Effective approaches to further improving the quality have been developed by analyzing error data obtained during development and maintenance. The main findings are as follows. (1) Although the error density in switching programs, which include call processing and system management functions, is the same as that in maintenance programs, errors in the switching programs are more likely to have a serious impact on the system when the errors are not discovered. Thus, quality control management that takes the potential impact of errors into account will more effectively improve the total software quality. (2) To increase the error detection rate in each testing phase, the omission of test items must be prevented by managing the correspondence of test items between each testing phase, and by supporting the generation of test items from the system specifications",1999,0, 1412,Preparing measurements of legacy software for predicting operational faults,"Software quality modeling can be used by a software maintenance project to identify a limited set of software modules that probably need improvement. A model's goal is to recommend a set of modules to receive special treatment. The purpose of the paper is to report our experiences modeling software quality with classification trees, including necessary preprocessing of data. We conducted a case study on two releases of a very large legacy telecommunications system. A module was considered fault-prone if any faults were discovered by customers, and not fault-prone otherwise. Software product, process, and execution metrics were the basis for predictors. The TREEDISC algorithm for building classification trees was investigated, because it emphasizes statistical significance. Numeric data, such as software metrics, are not suitable for TREEDISC. Consequently, we transformed measurements into discrete ordinal predictors by grouping. 
This case study investigated the sensitivity of modeling results to various groupings. We found that robustness, accuracy, and parsimony of the models were influenced by the maximum number of groups. Models based on two sets of candidate predictors had similar sensitivity",1999,0, 1413,Using coupling measurement for impact analysis in object-oriented systems,"Many coupling measures have been proposed in the context of object oriented (OO) systems. In addition, due to the numerous dependencies present in OO systems, several studies have highlighted the complexity of using dependency analysis to perform impact analysis. An alternative is to investigate the construction of probabilistic decision models based on coupling measurement to support impact analysis. In addition to providing an ordering of classes where ripple effects are more likely, such an approach is simple and can be automated. In our investigation, we perform a thorough analysis on a commercial C++ system where change data has been collected over several years. We identify the coupling dimensions that seem to be significantly related to ripple effects and use these dimensions to rank classes according to their probability of containing ripple effects. We then assess the expected effectiveness of such decision models",1999,0, 1414,JSF prognostics and health management,"JSF Prognostics and Health Management (PHM) is a comprehensive system for detecting and isolating failures as well as predicting remaining useful life for critical components. The PHM system is a hierarchical distribution of data collection and information management elements, both on-board and off-board that make maximum use of conventional failure symptom-detecting techniques combined with advanced software modeling to achieve excellent failure detection and isolation with zero false alarms. This same system collects and processes performance information on critical components to enable prediction of remaining useful life for those components. The information processed and managed on-board the aircraft enhances the pilot's knowledge of his remaining capabilities in the event of malfunction and, at the same time, triggers the Autonomic Logistics processes by relaying failure information to the ground. The use of on-board processing resources coupled with the Autonomic Logistics System provides operations and support cost savings over legacy aircraft while providing the Warfighter with superb system management capabilities",1999,0, 1415,Real-time video coding,"Many modified three-step search (TSS) algorithms have been studied for the speed up of computation and improved error performance over the original TSS algorithm. In this work, an efficient and fast TSS algorithm is proposed, which is based on the unimodal error search assumption (UESA), error surface properties, the matching error threshold and the partial sum of the matching error. For the search strategy, we propose a new and efficient search method, which shows a good performance in terms of the computational reduction and the prediction error compared with other search algorithms. Also, we add half-stop algorithms to the above algorithm with little degradation of the predicted image quality while obtaining more computational reduction. One of them is based on the assumption that if a small amount of motion compensation error is produced, we can consider the matching block as a matched block and the motion vector as a global one. 
The other removes the computational redundancy by stopping the useless calculation of the matching error in a matching block. With the added algorithms, we can significantly reduce the computation for the motion vector with a small degradation of the predicted image quality, given a proper threshold. Experimentally, it is shown that the proposed algorithm is very efficient in terms of the speed up of the computation and error performance compared with other conventional modified TSS algorithms",1999,0, 1416,A case for relative differentiated services and the proportional differentiation model,"Internet applications and users have very diverse quality of service expectations, making the same-service-to-all model of the current Internet inadequate and limiting. There is a widespread consensus today that the Internet architecture has to be extended with service differentiation mechanisms so that certain users and applications can get better service than others at a higher cost. One approach, referred to as absolute differentiated services, is based on sophisticated admission control and resource reservation mechanisms in order to provide guarantees or statistical assurances for absolute performance measures, such as a minimum service rate or maximum end-to-end delay. Another approach, which is simpler in terms of implementation, deployment, and network manageability, is to offer relative differentiated services between a small number of service classes. These classes are ordered based on their packet forwarding quality, in terms of per-hop metrics for the queuing delays and packet losses, giving the assurance that higher classes are better than lower classes. The applications and users, in this context, can dynamically select the class that best meets their quality and pricing constraints, without a priori guarantees for the actual performance level of each class. The relative differentiation approach can be further refined and quantified using the proportional differentiation model. This model aims to provide the network operator with the “tuning knobs” for adjusting the quality spacing between classes, independent of the class loads. When this spacing is feasible in short timescales, it can lead to predictable and controllable class differentiation, which are two important features for any relative differentiation model. The proportional differentiation model can be approximated in practice with simple forwarding mechanisms (packet scheduling and buffer management) that we describe",1999,0, 1417,An experiment to improve cost estimation and project tracking for software and systems integration projects,"It is becoming increasingly difficult to predict the resources, costs, and time scales for the integration of software and systems built from components supplied by third parties. Many cost models use the concept of product size as the prime driver for cost estimation but our experience has shown that the supplied quality of components and the required quality of the integrated systems are becoming the dominant factors affecting the costs and time scales of many projects today. ICL has undertaken an experiment using an alternative life cycle model, known as the Cellular Manufacturing Process Model (CMPM), which describes how a product is integrated from its constituent components.
The experiment has helped to develop a method and sets of metrics to improve cost estimation and project tracking",1999,0, 1418,New fast and efficient two-step search algorithm for block motion estimation,"Block motion estimation using full search is computationally intensive. Previously proposed fast algorithms reduce the computation by limiting the number of searching locations. This is accomplished at the expense of less accuracy of motion estimation and gives rise to an appreciably higher mean squared error (MSE) for motion compensated images. We present a new fast and efficient search algorithm for block motion estimation that produces better quality performance and less computational time compared with a three-step search (TSS) algorithm. The proposed algorithms are based on the ideas of the dithering pattern for pixel decimation, multiple-candidate for pixel-decimation-based full search, and center-based distribution of the motion vector. From the experimental results, the proposed algorithm is superior to the TSS in both quality performance (about 0.2 dB) and computational complexity (about half)",1999,0, 1419,DSP based OFDM demodulator and equalizer for professional DVB-T receivers,"This work treats the design of base-band processing subsystems for professional DVB-T receivers using 1600 MIPS, fixed point DSPs. We show the results about the implementation of OFDM demodulation, channel estimation and equalization functional blocks, for the 2k/8k modes. The adoption of general purpose DSPs provides a great flexibility in the development of different configurations of the receiver. Furthermore it enables the feasibility of adaptive equalization schemes, where the quality of the channel estimation varies according to both channel characteristics and speed of the channel variations. The 16/32 bit fixed point architecture leads to a very low implementation loss, and a careful optimization of the pipeline architecture of the processor allows the receiver to obtain short processing delays. The flexibility of the software approach along with the obtained performance make the proposed implementation very interesting in professional and high-end receivers for interactive, multimedia applications of DVB-T",1999,0, 1420,A multi-layered system of metrics for the measurement of reuse by inheritance,"In spite of the intense efforts of metrics research, the impact of object-oriented software metrics is, for the moment, still quite reduced. The cause of this fact lies not in an intrinsic incapacity of metrics to help in assessing and improving the quality of object-oriented systems, but in the unsystematic, dispersed and ambiguous manner of defining and using the metrics. In this paper, we define a multi-layered system of metrics that measures inheritance-based reuse, and we propose a number of metric definitions for the layers of this system. By defining and using such systems of metrics, we obtain a unitary approach of related measures and a systematic, yet flexible manner of defining new measures as part of a particular metrics system. Organising metric definitions in systems of metrics contributes to (a strongly needed) order among object-oriented metrics, thus increasing their reliability and usability",1999,0, 1421,Influence of software on fault-tolerant microprocessor control system dependability,"The fault tolerance of the control system in this paper is based on cold standby and software redundancy. The control system is a microprocessor modular system. 
It consists of standard nonredundant modules and specific additional modules. The fault-tolerant control system performs the validation of the active microprocessor unit and the actualization of the spare unit. In the case of the active unit failure, the control from the failed active unit is switched to the spare unit. The fault detection is based on a watchdog timer, checksums, data duplication, etc. The evaluation of the fault tolerance of this control system is performed by injecting faults into the active unit's bus lines with a fault injection system that records the tested system's reaction. The control system is tested for nonredundant and redundant software. The influence of the watchdog period and of the standby unit actualization period on the number of detected faults in the active and the standby unit is observed. This experimental method is therefore one of the ways to compare the quality of some fault-tolerant methods. The fault tolerance of the control system could be easily increased by using software redundancy and software methods",1999,0, 1422,Application of a new laser scanning pattern wafer inspection tool to leading edge memory and logic applications at Infineon Technologies,"A new patterned wafer laser-based inspection tool has been introduced to the market place, incorporating double darkfield laser scanning technology. Developed from a well-known production-proven platform, the new system is intended to provide the sensitivity required for 0.18 μm design rules, with extendibility to 0.13 μm. The inspection technology combines low angle laser illumination with dual darkfield scattered light collection channels. Enhancements to the illumination and collection optics have allowed for improved defect sensitivity and capture; and enhanced software algorithms have provided greater compensation for process variation, further increasing defect capture. The sensitivity performance and production worthiness of the tool were evaluated at Siemens Microelectronics Centre and the key results are presented. Both memory and logic products were evaluated, including memory products with 0.2 μm design rule, and logic products with 0.2 μm design rule. Layers from the front-end and back-end of the manufacturing process were evaluated. On memory products, sensitivity to defects occurring during capacitor and isolation trench formation was demonstrated, including etch defects deep in the trench structures and sub 0.1 μm discrepancies in the formation of isolation trenches. Results from post-metal etch inspection demonstrated enhanced sensitivity in both array and periphery regions, largely achieved by exploiting the new ability to perform region-based optimisation, allowing full die area inspections. On logic products, surface foreign material less than 0.1 μm in diameter was detected amidst logic structures and, in the same inspection pass, etch residuals affecting the memory cache area were also captured. The machine was installed and operated in a high capacity wafer production environment and adhered to all specified throughput, up-time and reliability metrics throughout the evaluation period",1999,0, 1423,Using a proportional hazards model to analyze software reliability,"Proportional hazards models (PHMs) are proposed for the analysis of software reliability.
PHMs facilitate the merging of two research directions that have to a large extent developed independently: defect modeling based on software static analyses and reliability growth modeling based on dynamic assumptions about the software failure process. Determinants of software reliability include a composite measure of software complexity, software development volatility as measured by non-defect changes, and cumulative testing effort. A PHM is developed using execution time-between-failure data for a collection of subsystems from two software projects. The PHM analysis yields non-parametric estimates of the baseline hazard functions for each of the projects and parametric estimates of the determinants of software reliability. Weibull curves are shown to provide a good fit to the non-parametric estimates of the baseline hazard functions. These curves are used to extrapolate the non-parametric estimates for times between failure to infinity in order to compute the mean time between failures. Failure curves are generated for each of the subsystems as a function of the cumulative project execution times and summed over the subsystems to obtain project failures vs. cumulative project execution times. These estimated project failure curves track the empirical project failure curves quite well. Project failure curves estimated for the case when no non-defect changes are made show that in excess of 50% of failures can be attributed to non-defect changes",1999,0, 1424,Efficient coding of homogeneous textures using stochastic vector quantisation and linear prediction,"Vector quantisation (VQ) has been extensively used as an effective image coding technique. One of the most important steps in the whole process is the design of the codebook. The codebook is generally designed using the LBG algorithm which uses a large training set of empirical data that is statistically representative of the images to be encoded. The LBG algorithm, although quite effective for practical applications, is computationally very expensive and the resulting codebook has to be recalculated each time the type of image to be encoded changes. Stochastic vector quantisation (SVQ) provides an alternative way for the generation of the codebook. In SVQ, a model for the image is computed first, and then the codewords are generated according to this model and not according to some specific training sequence. The SVQ approach presents good coding performance for moderate compression ratios and different types of images. On the other hand, in the context of synthetic and natural hybrid coding (SNHC), there is always a need for techniques which may provide very high compression and high quality for homogeneous textures. A new stochastic vector quantisation approach using linear prediction which is able to provide very high compression ratios with graceful degradation for homogeneous textures is presented. Owing to the specific construction of the method, there is no block effect in the synthesised image. Results, implementation details, generation of the bit stream and comparisons with the verification model of MPEG-4 are presented which prove the validity of the approach.
The technique has been proposed as a still image coding technique in the SNHC standardisation group of MPEG",1999,0, 1425,"End-system architecture for distributed networked multimedia applications: issues, trends and future directions","The emerging broadband computer networks (wired and wireless) are likely to have numerous multimedia applications, such as videoconferencing, interactive games, and collaborative computing standard features in desktop PCs and portables. In recent years there have been significant developments in the field of multimedia coding algorithms and their VLSI implementations along with the development of high speed networking technology (e.g., ATM, FDDI, fast Ethernet). But the end-system architecture (software and hardware) as a whole is lagging behind these advancements. Optimized hardware and software architectures are needed for end systems that will guarantee predictable performance of real-time multimedia applications according to user provided QoS. In this paper, we identify the major end-system hardware and software requirements for networked multimedia applications. Focusing mainly on hardware and operating system levels, we advocate an integrated end-system architecture rather than some ad-hoc solutions for real-time multimedia computing, in addition to general purpose computing in a distributed environment. A core-based system architecture is proposed. We observe that interfacing with high speed networks, inter-device level data transfer, power efficiency and in addition, system resource utilization are some of the issues that need serious consideration in an end-system architecture",1999,0, 1426,A separable method for incorporating imperfect fault-coverage into combinatorial models,"This paper presents a new method for incorporating imperfect FC (fault coverage) into a combinatorial model. Imperfect FC, the probability that a single malicious fault can thwart automatic recovery mechanisms, is important to accurate reliability assessment of fault-tolerant computer systems. Until recently, it was thought that the consideration of this probability necessitated a Markov model rather than the simpler (and usually faster) combinatorial model. SEA, the new approach, separates the modeling of FC failures into two terms that are multiplied to compute the system reliability. The first term, a simple product, represents the probability that no uncovered fault occurs. The second term comes from a combinatorial model which includes the covered faults that can lead to system failure. This second term can be computed from any common approach (e.g. fault tree, block diagram, digraph) which ignores the FC concept by slightly altering the component-failure probabilities. The result of this work is that reliability engineers can use their favorite software package (which ignores the FC concept) for computing reliability, and then adjust the input and output of that program slightly to produce a result which includes FC. This method applies to any system for which: the FC probabilities are constant and state-independent; the hazard rates are state-independent; and an FC failure leads to immediate system failure",1999,0, 1427,Using statistics of the extremes for software reliability analysis,"This paper investigates the use of StEx (statistics of the extremes) [Castillo, Sarabia 1992; Gumbel 1958; Ang, Tang 1984; Castillo 1988; Castillo, Alvarez, Cobo 1993] for analyzing software failure data for highly reliable systems. 
StEx classifies most distributions into one of three asymptotic families. The asymptotic family to which a set of data belongs is derived graphically on Gumbel-type probability paper. From the resulting empirical Cdf (cumulative distribution function), the asymptotic family to which the data might belong is hypothesized. Once the asymptotic family for a particular data set is determined, the parameters required for an explicit representation of this asymptotic form are derived using known analytic techniques. Using this approach, actual field data from two software case studies [Musa 1979; Kenney 1993] are analyzed using conventional software reliability growth models and using StEx",1999,0, 1428,Condition monitoring and fault diagnosis of electrical machines-a review,"Research has picked up a fervent pace in the area of fault diagnosis of electrical machines. Like adjustable speed drives, fault prognosis has become almost indispensable. The manufacturers of these drives are now keen to include diagnostic features in the software to decrease machine down time and improve salability. Prodigious improvement in signal processing hardware and software has made this possible. Primarily, these techniques depend upon locating specific harmonic components in the line current, also known as motor current signature analysis (MCSA). These harmonic components are usually different for different types of faults. However with multiple faults or different varieties of drive schemes, MCSA can become an onerous task as different types of faults and time harmonics may end up generating similar signatures. Thus other signals such as speed, torque, noise, vibration etc., are also explored for their frequency contents. Sometimes, altogether different techniques such as thermal measurements, chemical analysis, etc., are also employed to find out the nature and the degree of the fault. Human involvement in the actual fault detection decision making is slowly being replaced by automated tools such as expert systems, neural networks, fuzzy logic based systems to name a few. Keeping in mind the need for future research, this review paper describes different types of faults and the signatures they generate and their diagnostics' schemes",1999,0, 1429,A concept for the Naval Oceanographic Office Survey Operations Center,"The Naval Oceanographic Office (NAVOCEANO) Survey Operations Center (SOC) will enable scientists, engineers, and analysts ashore to evaluate the status and performance of shipboard systems and the quality of data collected from ships deployed around the world. Watchstanders will also remotely manage software configuration control, initiate shipboard software upgrades, troubleshoot onboard data collection systems, and monitor survey progress and coverage to assist with on-scene decisions",1999,0, 1430,A systems engineering view of TPS rehosting,"The TPS test design process results in UUT detailed test strategy that is independent of its eventual implementation in source code or on a specific ATS. It is test strategy, not source code, that embodies the critical metrics of UUT fault detection and isolation. Source code implementation is the first step in a design chain that finally results in signal-information flow across the test interface. This SI flow is the physical realization of test design. 
The objective of TPS rehosting is usually preservation of proven legacy test design, but it should also seek to preserve legacy SI flow to avoid the subtle, and often difficult, problems caused by rehosting to the new ATS. These problems can be exacerbated by the inability of the original source code to describe what is happening at the UUT test interface, and by the repeated and characteristic need to choose between equivalent performance of, and incidental or substantive enhancements to, the rehosted TPS",1999,0, 1431,Detecting controller malfunctions in electromagnetic environments. II. Design and analysis of the detector,"For part I see ibid. Verifying the integrity of control computers in adverse operating environments is a key issue in the development, validation, certification, and operation of critical control systems. Future commercial aircraft will necessitate flight-critical systems with high reliability requirements for stability augmentation, flutter suppression, and guidance and control. Operational integrity of such systems in adverse environments must be validated. The paper considers the application of dynamic detection techniques to monitoring the integrity of fault tolerant control computers in critical applications. Specifically, the paper considers the detection of malfunctions in an aircraft flight control computer (FCC) that is subjected to electromagnetic environment (EME) disturbances during laboratory testing. A dynamic monitoring strategy is presented and demonstrated for the FCC from glideslope engaged until flare under clear air turbulence conditions using a detailed simulation of the B737 Autoland. The performance of the monitoring system is analyzed",1999,0, 1432,A metric based technique for design flaws detection and correction,"During the evolution of object-oriented (OO) systems, the preservation of correct design should be a permanent quest. However, for systems involving a large number of classes and which are subject to frequent modifications, the detection and correction of design flaws may be a complex and resource-consuming task. Automating the detection and correction of design flaws is a good solution to this problem. Various authors have proposed transformations that improve the quality of an OO system while preserving its behavior. In this paper, we propose a technique for automatically detecting situations where a particular transformation can be applied to improve the quality of a system. The detection process is based on analyzing the impact of various transformations on software metrics using quality estimation models",1999,0, 1433,System for automated validation of embedded software in multiple operating configurations,"Embedded controllers in safety critical applications rely on the highest of software quality standards. Testing is performed to ensure that requirements and specifications are met for all the different environments in which the controllers operate. This paper describes the architecture of a system that uses a relational database for tracking tests, requirements, and configurations. The relational sub-schemas are integrated in a data warehouse and allow traceability from requirements to tests. A normalized representation of test cases enables the system to reason about the test topologies and is used for constructing clusters of similar tests.
A representative test from each cluster can in turn provide a rapid estimation of the software's requirement coverage and quality",1999,0, 1434,From system level to defect-oriented test: a case study,"The purpose of this paper is to demonstrate the usefulness of a recently proposed Object-Oriented (OO) based methodology and tools (SysObj and Test-Adder) when applied in the design of testable hardware modules (eventually used as embedded cores in SOCs). A public domain processor (PIG) is the vehicle for such assessment. A boundary scan soft wrapper design is presented and architectural quality is evaluated by adequate metrics. A novel mixed-level, defect-oriented fault simulator, VeriDOS, is used to demonstrate the efficiency of applying a functional test (stored in the ROM) after the structural test (using built-in scan) to detect a significant number of physical defects not covered by the LSA test set. This illustrates how system-level test can be reused to enhance the quality of a production test.",1999,0, 1435,Scalable stability detection using logical hypercube,"This paper proposes to use a logical hypercube structure for detecting message stability in distributed systems. In particular, a stability detection protocol that uses such a superimposed logical structure is presented, and its scalability is compared with other known stability detection protocols. The main benefits of the logical hypercube approach are scalability, fault-tolerance, and refraining from overloading a single node or link in the system. These benefits become evident both by an analytical comparison and by simulations. Another important feature of the logical hypercube approach is that the performance of the protocol is in general not sensitive to the topology of the underlying physical network",1999,0, 1436,Fault injection based on a partial view of the global state of a distributed system,"This paper describes the basis for and preliminary implementation of a new fault injector, called Loki, developed specifically for distributed systems. Loki addresses issues related to injecting correlated faults in distributed systems. In Loki, fault injection is performed based on a partial view of the global state of an application. In particular, facilities are provided to pass user-specified state information between nodes to provide a partial view of the global state in order to try to inject complex faults successfully. A post-runtime analysis, using an off-line clock synchronization and a bounding technique, is used to place events and injections on a single global time-line and determine whether the intended faults were properly injected. Finally, observations containing successful fault injections are used to estimate specified dependability measures. In addition to describing the details of our new approach, we present experimental results obtained from a preliminary implementation in order to illustrate Loki's ability to inject complex faults predictably",1999,0, 1437,Resolving distributed deadlocks in the OR request model,In this work a new distributed deadlock resolution algorithm for the OR model is proposed. The algorithm verifies the correctness criteria: safety (false deadlocks are not resolved) and liveness (deadlocks are resolved in finite time),1999,0, 1438,Adaptive write detection in home-based software DSMs,"Write detection is essential in multiple-writer protocols to identify writes to shared pages so that these writes can be correctly propagated.
Software DSMs that implement a multiple-writer protocol normally employ the virtual memory page fault to detect writes to shared pages: shared pages are write-protected at the beginning of an interval to detect the writes of the interval. This paper proposes a new write detection scheme in a home-based software DSM called JIAJIA. It automatically recognizes a single write to a shared page by its home host and assumes the page will continue to be written by the home host in the future until the page is written by remote hosts. During the period the page is assumed to be singly written by its home host, no write detection of this home page is required and page faults caused by the home host's write detection are avoided. Evaluation with some well-known DSM benchmarks reveals that the new write detection can reduce page faults dramatically and improve performance significantly",1999,0, 1439,Built-in self-test for GHz embedded SRAMs using flexible pattern generator and new repair algorithm,"This paper presents a built-in self-test (BIST) scheme, which consists of a flexible pattern generator and a practical on-macro two-dimensional redundancy analyzer, for GHz embedded SRAMs. In order to meet the system requirements and to detect a wide variety of faults or performance degradation resulting from recent technology advances, the microcode-based pattern generator can generate flexible patterns. A practical new repair algorithm for the Finite State Machine (FSM)-based on-macro redundancy analyzer is also presented. It can be implemented with simple hardware and can show fairly good performance compared with conventional software-based algorithms",1999,0, 1440,An efficient computation-constrained block-based motion estimation algorithm for low bit rate video coding,"We present an efficient computation constrained block-based motion vector estimation algorithm for low bit rate video coding that yields good tradeoffs between motion estimation distortion and number of computations. A reliable predictor determines the search origin, localizing the search process. An efficient search pattern exploits structural constraints within the motion field. A flexible cost measure used to terminate the search allows simultaneous control of the motion estimation distortion and the computational cost. Experimental results demonstrate the viability of the proposed algorithm in low bit rate video coding applications. The resulting low bit rate video encoder yields essentially the same levels of rate-distortion performance and subjective quality achieved by the UBC H.263+ video coding reference software. However, the proposed motion estimation algorithm provides substantially higher encoding speed as well as graceful computational degradation capabilities",1999,0, 1441,A software model for impact analysis: a validation experiment,"Impact analysis is the process of identifying software work products that may be affected by proposed changes. This requires a software representation model that can formalize the knowledge about the various dependencies between work products. This study was carried out with the aim of objectively assessing whether the effectiveness of an impact analysis approach depends on the software dependency model employed. ANALYST, a tool for impact analysis, was used to implement different impact analysis approaches.
The results show that the nature of the components and the traceability relationships employed for impact analysis influence the effectiveness of the approach, but different traceability models affect the various aspects of effectiveness differently. Moreover, this influence is independent of the software development approach, but is sensitive to software quality decay",1999,0, 1442,Validation of software effort models,Several static models for software effort were examined on three data sets. Models applied to new data sets need to be evaluated relative to the quality of the data set and this can be benchmarked by fitting a least squares model to the data. This natural model can then be compared to the model that is being evaluated.,1999,0, 1443,Performance analysis of three multicast resource allocation algorithms,"Multicast applications such as electronic news delivery and live sports Webcasting grow with the Internet. These Web-based services demand high quality of service in the competitive global Internet business. Resources such as bandwidth and buffer space need to be reserved for different sessions to provide such guaranteed services. We analyze the performance of three multicast resource allocation algorithms, namely equal allocation with reclaim on all links (EAR-ALL), EAR-OPT and EAR-LL, a simple algorithm that reclaims only on the last link as opposed to all possible links. Simulation results using packetized voice sources on the MBONE network are reported comparing the performance in terms of bandwidth, call blocking and new receiver joining probability.",1999,0, 1444,Software product metrics,"Software metrics are measures of software. Their primary use is to help us plan and predict software development. We can better control software quality and development effort if we can quantify the software. This is especially true in the early stages of development. Research has been done in the area of predicting software maintenance effort from software metrics. In general, software metrics can be classified into two categories: software product metrics and software process metrics. Software product metrics are measures of software products such as source code and design documents. Software process metrics are measures of the software development process. For example, the size of the software is a measure of the software product itself, thus a product metric. The effort required to design a software system may be influenced by how the software is designed, thus a process metric. In this article, only software product metrics are addressed",1999,0, 1445,A fuzzy set approach to cost estimation of software projects,The study is concerned with the development of models of software cost estimation using the technology of fuzzy sets. We propose an augmentation of the well-known class of COCOMO cost estimation models by admitting a granular form of the estimates of the variables used there. Granular models of cost estimation are also introduced. The performance of the granular models is illustrated by a series of numerical experiments.,1999,0, 1446,Reliable determination of object pose from line features by hypothesis testing,"To develop a reliable computer vision system, the employed algorithm must guarantee good output quality. In this study, to ensure the quality of the pose estimated from line features, two simple test functions based on statistical hypothesis testing are defined.
First, an error function based on the relation between the line features and some quality thresholds is defined. By using the first test function defined by a lower bound of the error function, poor input can be detected before estimating the pose. After pose estimation, the second test function can be used to decide if the estimated result is sufficiently accurate. Experimental results show that the first test function can detect input with low qualities or erroneous line correspondences and that the overall proposed method yields reliable estimated results",1999,0, 1447,Confidence interval estimation of NHPP-based software reliability models,"Software reliability growth models, such as the non-homogeneous Poisson process (NHPP) models, are frequently used in software reliability prediction. The estimation of parameters in these models is often done by point estimation. However, some numerical problems arise with this approach, and make the actual computation hard, especially for automated reliability prediction tools. In this paper, confidence interval computation is studied in the Goel-Okumoto (1979) model and the S-shaped model (S. Yamada et al., 1983). The upper and the lower bounds of the parameters can be obtained. For reliability prediction, we implement a simplified Bayesian approach, which delivers improved results. The bounds on the predicted reliability are also computed. Furthermore, the numerical problems encountered in earlier point estimation methods are removed by this approach. Our results can thus be used as an important part of the assessment of software quality",1999,0, 1448,A measurement-based model for estimation of resource exhaustion in operational software systems,"Software systems are known to suffer from outages due to transient errors. Recently, the phenomenon of “software aging”, in which the state of the software system degrades with time, has been reported (S. Garg et al., 1998). The primary causes of this degradation are the exhaustion of operating system resources, data corruption and numerical error accumulation. This may eventually lead to performance degradation of the software or crash/hang failure, or both. Earlier work in this area to detect aging and to estimate its effect on system resources did not take into account the system workload. In this paper, we propose a measurement-based model to estimate the rate of exhaustion of operating system resources both as a function of time and the system workload state. A semi-Markov reward model is constructed based on workload and resource usage data collected from the UNIX operating system. We first identify different workload states using statistical cluster analysis and build a state-space model. Corresponding to each resource, a reward function is then defined for the model based on the rate of resource exhaustion in the different states. The model is then solved to obtain trends and the estimated exhaustion rates and the time-to-exhaustion for the resources. With the help of this measure, proactive fault management techniques such as “software rejuvenation” (Y. Huang et al., 1995) may be employed to prevent unexpected outages",1999,0, 1449,Classification tree models of software quality over multiple releases,"Software quality models are tools for focusing software enhancement efforts. Such efforts are essential for mission-critical embedded software, such as telecommunications systems, because customer-discovered faults have very serious consequences and are very expensive to repair.
We present an empirical study that evaluated software quality models over several releases to address the question, “How long will a model yield useful predictions?” We also introduce the Classification And Regression Trees (CART) algorithm to software reliability engineering practitioners. We present our method for exploiting CART features to achieve a preferred balance between the two types of misclassification rates. This is desirable because misclassifications of fault-prone modules often have much more severe consequences than misclassifications of those that are not fault-prone. We developed two classification-tree models based on four consecutive releases of a very large legacy telecommunications system. Forty-two software product, process, and execution metrics were candidate predictors. The first software quality model used measurements of the first release as the training data set and measurements of the subsequent three releases as evaluation data sets. The second model used measurements of the second release as the training data set and measurements of the subsequent two releases as evaluation data sets. Both models had accuracy that would be useful to developers",1999,0, 1450,An empirical study of experience-based software defect content estimation methods,"Capture-recapture models and curve-fitting models have been proposed to estimate the remaining number of defects after a review. This estimation gives valuable information to monitor and control software reliability. However, the different models provide different estimates, making it difficult to know which estimate is the most accurate. One possible solution is to, as in this paper, focus on different opportunities to estimate intervals. The study is based on thirty capture-recapture data sets from software reviews. Twenty of the data sets are used to create different models to perform estimation. The models are then evaluated on the remaining ten data sets. The study shows that the use of historical data in model building is one way to overcome some of the problems experienced with both capture-recapture and curve-fitting models, to estimate the defect content after a review",1999,0, 1451,Predicting deviations in software quality by using relative critical value deviation metrics,"We develop a new metric, relative critical value deviation (RCVD), for classifying and predicting software quality. The RCVD is based on the concept that the extent to which a metric's value deviates from its critical value, normalized by the scale of the metric, indicates the degree to which the item being measured does not conform to a specified norm. For example, the deviation in body temperature above 98.6 degrees Fahrenheit is a surrogate for fever. Similarly, the RCVD is a surrogate for the extent to which the quality of software deviates from acceptable norms (e.g., zero discrepancy reports). Early in development, surrogate metrics are needed to make predictions of quality before quality data are available. The RCVD can be computed for a single metric or multiple metrics. Its application is in assessing newly developed modules by their quality in the absence of quality data. The RCVD is a part of the larger framework of our measurement models that include the use of Boolean discriminant functions for classifying software quality. We demonstrate our concepts using Space Shuttle flight software data",1999,0, 1452,Using simulation for assessing the real impact of test coverage on defect coverage,"The use of test coverage measures (e.g.
block coverage) to control the software test process has become an increasingly common practice. This is justified by the assumption that higher test coverage helps achieve higher defect coverage and therefore improves software quality. In practice, data often shows that defect coverage and test coverage grow over time, as additional testing is performed. However, it is unclear whether this phenomenon of concurrent growth can be attributed to a causal dependency or if it is coincidental, simply due to the cumulative nature of both measures. Answering such a question is important as it determines whether a given test coverage measure should be monitored for quality control and used to drive testing. Although there is no general answer to the problem above, we propose a procedure to investigate whether any test coverage criterion has a genuine additional impact on defect coverage when compared to the impact of just running additional test cases. This procedure is applicable in typical testing conditions where the software is tested once, according to a given strategy and where coverage measures are collected as well as defect data. We then test the procedure on published data and compare our results with the original findings. The study outcomes do not support the assumption of a causal dependency between test coverage and defect coverage, a result for which several plausible explanations are provided",1999,0, 1453,Failure correlation in software reliability models,"Perhaps the most stringent restriction that is present in most software reliability models is the assumption of independence among successive software failures. Our research was motivated by the fact that although there are practical situations in which this assumption could be easily violated, much of the published literature on software reliability modeling does not seriously address this issue. In this paper we present a software reliability modeling framework based on Markov renewal processes which naturally introduces dependence among successive software runs. The presented approach enables the phenomenon of failure clustering to be precisely characterized and its effects on software reliability to be analyzed. Furthermore, it also provides bases for a more flexible and consistent model formulation and solution. The Markov renewal model presented in this paper can be related to the existing software reliability growth models, that is, a number of them can be derived as special cases under the assumption of failure independence. Our future research is focused on developing more specific and detailed models within this framework, as well as statistical inference procedures for performing estimations and predictions based on the experimental data",1999,0, 1454,DynaMICs: an automated and independent software-fault detection approach,"Computers are omnipresent in our society, creating a reliance that demands high-assurance systems. Traditional verification and validation approaches may not be sufficient to identify the existence of software faults. Dynamic Monitoring with Integrity Constraints (DynaMICs) augments existing approaches by including: (1) elicitation of constraints from domain experts and developers that capture knowledge about real-world objects, assumptions and limitations, (2) constraints stored and maintained separate from the program, (3) automatic generation of monitoring code and program instrumentation, (4) performance-friendly monitoring, and (5) tracing among specifications, code and documentation.
The primary motivation for DynaMICs is to facilitate the detection of faults, in particular those that result from insufficient communication, changes in intended software use and errors introduced through external interfaces. After presenting related work and an overview of DynaMICs, this paper outlines the methodology used to provide an automated and independent software-fault detection system",1999,0, 1455,Predicting fault-prone software modules in embedded systems with classification trees,"Embedded-computer systems have become essential elements of the modern world. For example, telecommunications systems are the backbone of society's information infrastructure. Embedded systems must have highly reliable software. The consequences of failures may be severe; down-time may not be tolerable; and repairs in remote locations are often expensive. Moreover, today's fast-moving technology marketplace mandates that embedded systems evolve, resulting in multiple software releases embedded in multiple products. Software quality models can be valuable tools for software engineering of embedded systems, because some software-enhancement techniques are so expensive or time-consuming that it is not practical to apply them to all modules. Targeting such enhancement techniques is an effective way to reduce the likelihood of faults discovered in the field. Research has shown software metrics to be useful predictors of software faults. A software quality model is developed using measurements and fault data from a past release. The calibrated model is then applied to modules currently under development. Such models yield predictions on a module-by-module basis. This paper examines the Classification And Regression Trees (CART) algorithm for predicting which software modules have high risk of faults to be discovered during operations. CART is attractive because it emphasizes pruning to achieve robust models. This paper presents details on the CART algorithm in the context of software engineering of embedded systems. We illustrate this approach with a case study of four consecutive releases of software embedded in a large telecommunications system. The level of accuracy achieved in the case study would be useful to developers of an embedded system. The case study indicated that this model would continue to be useful over several releases as the system evolves",1999,0, 1456,Building dependable distributed applications using AQUA,"Building dependable distributed systems using ad hoc methods is a challenging task. Without proper support, an application programmer must face the daunting requirement of having to provide fault tolerance at the application level, in addition to dealing with the complexities of the distributed application itself. This approach requires a deep knowledge of fault tolerance on the part of the application designer, and has a high implementation cost. What is needed is a systematic approach to providing dependability to distributed applications. Proteus, part of the AQuA architecture, fills this need and provides facilities to make a standard distributed CORBA application dependable, with minimal changes to an application. Furthermore, it permits applications to specify, either directly or via the Quality Objects (QuO) infrastructure, the level of dependability they expect of a remote object, and will attempt to configure the system to achieve the requested dependability level. Our previous papers have focused on the architecture and implementation of Proteus. 
This paper describes how to construct dependable applications using the AQuA architecture, by describing the interface that a programmer is presented with and the graphical monitoring facilities that it provides",1999,0, 1457,Fault detectability analysis for requirements validation of fault tolerant systems,"When high assurance applications are concerned, life cycle process control has witnessed steady improvement over the past two decades. As a consequence, the number of software defects introduced in the later phases of the life cycle, such as detailed design and coding, is decreasing. The majority of the remaining defects originate in the early phases of the life cycle. This is understandable, since the early phases deal with the translation from informal requirements into a formalism that will be used by developers. Since the step from informal to formal notation is inevitable, verification and validation of the requirements continue to be the research focus. Discovering potential problems as early as possible provides the potential for significant reduction in development time and cost. In this paper, the focus is on a specific aspect of requirements validation for dynamic fault tolerant control systems: the feasibility assessment of the fault detection task. An analytical formulation of the fault detectability condition is presented. This formulation is applicable to any system whose dynamics can be approximated by a linear model. The fault detectability condition can be used for objective validation of fault detection requirements. In a case study, we analyze an inverted pendulum system and demonstrate that “reasonable” requirements for a fault detection system can be infeasible when validated against the fault detectability condition",1999,0, 1458,Towards a broader view on software architecture analysis of flexibility,"Software architecture analysis helps us assess the quality of a software system at an early stage. We describe a case study of software architecture analysis that we have performed to assess the flexibility of a large administrative system. Our analysis was based on scenarios, representing possible changes to the requirements of the system and its environment. Assessing the effect of these scenarios provides insight into the flexibility of the system. One of the problems is to express the effect of a scenario in such a way that it provides insight into the complexity of the necessary changes. Part of our research is directed at developing an instrument for doing just that. This instrument is applied in the analysis presented",1999,0, 1459,Maintainability myth causes performance problems in SMP application,"A challenge in software design is to find solutions that balance and optimize the quality attributes of the application. We present a case study of an application and the results of a design decision made on weak assumptions. The application has been assessed with respect to performance and maintainability. We present and evaluate an alternative design of a critical system component. Based on interviews with the involved designers we establish the design rationale. By analyzing the evaluation data of the two alternatives and the design rationale, we conclude that the design decision was based on a general assumption that an adaptable component design should increase the maintainability of the application. This case study is clearly a counter example to that assumption, and we therefore reject it as a myth.
This study shows, however, that the myth is indeed responsible for the major performance problem in the application",1999,0, 1460,"Testing, reliability, and interoperability issues in the CORBA programming paradigm","CORBA (Common Object Request Broker Architecture) is widely perceived as an emerging platform for distributed systems development. We discuss CORBA's testing, reliability and interoperability issues among multiple program versions implemented by different languages (Java and C++) based on different vendor platforms (Iona Orbix and Visibroker). We engage 19 independent programming teams to develop a set of CORBA programs from the same requirement specifications, and measure the reliability of these programs. We design the required test cases and develop the operational profile of the programs. After running the test, we classify the detected faults and evaluate the reliability of the programs according to the operational profile. We also discuss how to test the CORBA programs based on their specification and interface design language (IDL). We measure the interoperability of these programs by evaluating the difficulty in exchanging the clients and servers between these programs. The measurement we obtained indicates that without a good discipline on the development of CORBA objects, interoperability would be very poor, and reusability of either client or server programs is very doubtful. We further discuss particular incidences where these programs are not interoperable, and describe future required engineering steps to make these programs interoperable, testable, and reliable",1999,0, 1461,Integrating approximation methods with the generalised proportional sampling strategy,"Previous studies have shown that partition testing strategies can be very effective in detecting faults, but they can also be less effective than random testing under unfavourable circumstances. When test cases are allocated in proportion to the size of subdomains, partition testing strategies are provably better than random testing, in the sense of having a higher or equal probability of detecting at least one failure (the P-measure). Recently, the Generalised Proportional Sampling (GPS) strategy, which is always satisfiable, was proposed to relax the proportionality condition. The paper studies the use of approximation methods to generate test distributions satisfying the GPS strategy, and evaluates this proposal empirically. Our results are very encouraging, showing that on average about 98.72% to almost 100% of the test distributions obtained in this way are better than random testing in terms of the P-measure",1999,0, 1462,Dynamic metrics for object oriented designs,"As object-oriented (OO) analysis and design techniques become more widely used, the demand on assessing the quality of OO designs increases substantially. Recently, there has been much research effort devoted to developing and empirically validating metrics for OO design quality. Complexity, coupling, and cohesion have received a considerable interest in the field. Despite the rich body of research and practice in developing design quality metrics, there has been less emphasis on dynamic metrics for OO designs. The complex dynamic behavior of many real-time applications motivates a shift in interest from traditional static metrics to dynamic metrics. This paper addresses the problem of measuring the quality of OO designs using dynamic metrics. We present a metrics suite to measure the quality of designs at an early development phase. 
The suite consists of metrics for dynamic complexity and object coupling based on execution scenarios. The proposed measures are obtained from executable design models. We apply the dynamic metrics to assess the quality of a pacemaker application. Results from the case study are used to compare static metrics to the proposed dynamic metrics and hence identify the need for empirical studies to explore the dependency of design quality on each",1999,0, 1463,Measuring coupling and cohesion: an information-theory approach,"The design of software is often depicted by graphs that show components and their relationships. For example, a structure chart shows the calling relationships among components. Object oriented design is based on various graphs as well. Such graphs are abstractions of the software, devised to depict certain design decisions. Coupling and cohesion are attributes that summarize the degree of interdependence or connectivity among subsystems and within subsystems, respectively. When used in conjunction with measures of other attributes, coupling and cohesion can contribute to an assessment or prediction of software quality. Let a graph be an abstraction of a software system and let a subgraph represent a module (subsystem). The paper proposes information theory based measures of coupling and cohesion of a modular system. These measures have the properties of system level coupling and cohesion defined by L.C. Briand et al. (1996; 1997). Coupling is based on relationships between modules. We also propose a similar measure for intramodule coupling based on an intramodule abstraction of the software, rather than intermodule, but intramodule coupling is calculated in the same way as intermodule coupling. We define cohesion in terms of intramodule coupling, normalized to between zero and one. We illustrate the measures with example graphs. Preliminary analysis showed that the information theory approach has finer discrimination than counting",1999,0, 1464,Can results from software engineering experiments be safely combined?,"Deriving reliable empirical results from a single experiment is an unlikely event. Hence to progress, multiple experiments must be undertaken per hypothesis and the subsequent results effectively combined to produce a single reliable conclusion. Other disciplines use meta-analytic techniques to achieve this result. The treatise of the paper is: can meta-analysis be successfully applied to current software engineering experiments? The question is investigated by examining a series of experiments, which themselves investigate: which defect detection technique is best? Applying meta-analysis techniques to the software engineering data is relatively straightforward, but unfortunately the results are highly unstable, as the meta-analysis shows that the results are highly disparate and don't lead to a single reliable conclusion. The reason for this deficiency is the excessive variation within various components of the experiments. Finally, the paper describes a number of recommendations for controlling and reporting empirical work to advance the discipline towards a position where meta-analysis can be profitably employed",1999,0, 1465,Assessing uncertain predictions of software quality,"Many development organizations try to minimize faults in software as a means for improving customer satisfaction. Assuring high software quality often entails time-consuming and costly development processes. 
A software quality model based on software metrics can be used to guide enhancement efforts by predicting which modules are fault-prone. The paper presents a way to determine which predictions by a classification tree should be considered uncertain. We conducted a case study of a large legacy telecommunications system. One release was the basis for the training data set, and the subsequent release was the basis for the evaluation data set. We built a classification tree using the TREEDISC algorithm, which is based on chi-squared tests of contingency tables. The model predicted whether a module was likely to have faults discovered by customers, or not, based on software product, process, and execution metrics. We simulated practical use of the model by classifying the modules in the evaluation data set. The model achieved useful accuracy, in spite of the very small proportion of fault-prone modules in the system. We assessed whether the classes assigned to the leaves were appropriate by examining the details of the full tree, and found sizable subsets of modules with substantially uncertain classification. Discovering which modules have uncertain classifications allows sophisticated enhancement strategies to resolve uncertainties. Moreover, TREEDISC is especially well suited to identifying uncertain classifications",1999,0, 1466,A metrics-based decision support tool for software module interfacing technique selection to lower maintenance cost,"The Interfacing Techniques Comparison Graph visually compares applications in terms of attributes that relate to maintenance cost. Applications that have both lower coupling and lower complexity lie closer to the origin of the graph and exhibit lower maintenance cost than those that do not. The study supports the idea that compositional techniques are important for achieving these improved metrics. The graph can be used in three ways. First it serves as a decision support tool for managers to determine whether expected maintenance savings compensate for the additional training, effort and time needed to support compositional development. Second, it functions as a decision support tool for designers and coders as they determine, for each module interface, whether to use coupled techniques or composition. The graph can help identify those situations in which the long term cost gain justifies the extra time needed for compositional design. Third, it can serve as a maintenance cost estimation tool. The study found a close correlation between predicted and actual maintenance effort",1999,0, 1467,Analyzing change effort in software during development,"We develop ordinal response models to explain the effort associated with non-defect changes of software during development. The explanatory variables include the extent of the change, the change type, and the internal complexity of the software components undergoing the change. The models are calibrated on the basis of a single software system and are then validated on two additional systems",1999,0, 1468,Quantitative modeling of software reviews in an industrial setting,"Technical reviews are a cost effective method commonly used to detect software defects early. To exploit their full potential, it is necessary to collect measurement data to constantly monitor and improve the implemented review procedure. This paper postulates a model of the factors that affect the number of defects detected during a technical review, and tests the model empirically using data from a large software development organization. 
The data set comes from more than 300 specification, design, and code reviews that were performed at Lucent's Product Realization Center for Optical Networking (PRC-ON) in Nuernberg, Germany. Since development projects within PRC-ON usually spend between 12% and 18% of the total development effort on reviews, it is essential to understand the relationships among the factors that determine review success. One major finding of this study is that the number of detected defects is primarily determined by the preparation effort of reviewers rather than the size of the reviewed artifact. In addition, the size of the reviewed artifact has only limited influence on review effort. Furthermore, we identified consistent ceiling effects in the relationship between size and effort with the number of defects detected. These results suggest that managers at PRC-ON must consider adequate preparation effort in their review planning to ensure high quality artifacts as well as a mature review process",1999,0, 1469,VERITAS-an application for knowledge verification,"Knowledge management is one of the most important goals of any organization. Therefore, several automatic tools are used for that purpose, e.g. Knowledge Based Systems (KBS), Expert Systems, Data Mining Applications and Computer Aided Decision Systems. The validation and verification (V&V) process is fundamental in order to ensure the quality of used knowledge. The usage of automatic verification tools can be a reliable, inexpensive and reusable way to overcome the constant growth of the Knowledge Bases, the shortening of development times and the costs of Validation, especially field tests. This paper addresses the verification of Knowledge Based Systems, focussing on VERITAS, a verification tool initially developed to verify a KBS used to assist operators of Portuguese Transmission Control Centers in incident analysis and power restoration. VERITAS performs knowledge base structural analysis allowing the detection of knowledge anomalies",1999,0, 1470,JMTP: an architecture for exploiting concurrency in embedded Java applications with real-time considerations,"Using Java in embedded systems is plagued by problems of limited runtime performance and unpredictable runtime behavior. The Java Multi-Threaded Processor (JMTP) provides solutions to these problems. The JMTP architecture is a single chip containing an off-the-shelf general purpose processor core coupled with an array of Java Thread Processors (JTPs). Performance can be improved using this architecture by exploiting coarse-grained parallelism in the application. These performance improvements are achieved with relatively small hardware costs. Runtime predictability is improved by implementing a subset of the Java Virtual Machine (JVM) specification in the JTP and trimming away complexity without excessively restricting the Java code a JTP can handle. Moreover the JMTP architecture incorporates hardware to adaptively manage shared JMTP resources in order to satisfy JTP thread timing constraints or provide an early warning for a timing violation. This is an important feature for applications with quality-of-service demands. In addition to the hardware architecture, we describe a software framework that analyzes a Java application for expressed and implicit coarse-grained concurrent threads to execute on JTPs. This framework identifies the optimal mapping of an application to a JMTP with an arbitrary number of JTPs.
We have tested this framework on a variety of applications including IDEA encryption with different JTP configurations and confirmed that the algorithm was able to obtain desired results in each case.",1999,0, 1471,A novel testing approach for safety-critical software,"In this paper we propose a new black-box testing approach on how to select test cases to overcome the shortcoming that some conventional black-box testing methods lack precise testing adequacy measures to measure the quality of testing and direct the testing process and thoroughly test safety-critical software. The method is based on two key ideas: (1) specification classifying, and (2) controlled objects covering. By using the method, several kinds of computer interlocking safety software have been successfully tested and the testing results show that the method is efficient",1999,0, 1472,Fault coverage in testing real-time systems,"Real-time systems interact with their environment, through time constrained input/output events. The misbehavior of real-time systems is generally caused by the violation of the specified time constraints. Validation of real-time system software is an important quality control activity in the software lifecycle. Among the validation processes, testing aims at assessing the conformance of an implementation against the reference specification. One of the important aspects in testing real-time software systems is the fault coverage measurement, which consists of studying the potential faults that can be detected by a test suite generated by a given test generation method. This paper addresses the fault coverage of the Timed Wp-method we have introduced in (En-Nouaary et al., 1998). We present a timed fault model based on the TIOA model for real-time systems specification. We study the fault coverage of the timed Wp-method with respect to our fault model",1999,0, 1473,Worst case timing requirement of real-time tasks with time redundancy,"The improvement of system reliability can be achieved through the use of time redundancy techniques including retry, checkpointing and recovery block. The objective of this paper is to analyze the worst case timing requirement of real-time tasks with time redundancy in the presence of multiple transient faults. The timing behavior of system level checkpointing is modeled with additional tasks that save the state of all tasks. The timing information derived in this paper makes time redundancy techniques applicable to real-time systems while keeping the validity of schedulability check developed in the past",1999,0, 1474,Synthesis and decoding of emotionally expressive music performance,"A recently developed application of Director Musices (DM) is presented. The DM is a rule-based software tool for automatic music performance developed at the Speech Music and Hearing Dept. at the Royal Institute of Technology, Stockholm. It is written in Common Lisp and is available both for Windows and Macintosh. It is demonstrated that particular combinations of rules defined in the DM can be used for synthesizing performances that differ in emotional quality. Different performances of two pieces of music were synthesized so as to elicit listeners' associations to six different emotions (fear, anger, happiness, sadness, tenderness, and solemnity). Performance rules and their parameters were selected so as to match previous findings about emotional aspects of music performance.
Variations of the performance variables IOI (Inter-Onset Interval), OOI (Offset-Onset Interval) and L (Sound Level) are presented for each rule-setup. In a forced-choice listening test 20 listeners were asked to classify the performances with respect to emotions. The results showed that the listeners, with very few exceptions, recognized the intended emotions correctly. This shows that a proper selection of rules and rule parameters in DM can indeed produce a wide variety of meaningful, emotional performances, even extending the scope of the original rule definition",1999,0, 1475,Software architecture analysis-a case study,"Presents a case study that evaluates two software quality attributes: performance and availability. We use three programs based on two architectural styles: pipe-filter and batch-sequential. The objective of this study is to identify the crucial factors that might have an influence on these quality attributes from the software architecture perspective. The benefit of this study is that early quality prediction can be facilitated by an analysis of the software architecture. The results from this study show that it is feasible to select a better architectural style based on variations in the execution environment to attain higher availability and/or better performance. Moreover, we demonstrate the effects of these variations on the quality measurements",1999,0, 1476,A framework for top-down cost estimation of software development,"The Function Point Method, estimation by analogy, and algorithmic modeling are three of the most commonly applied methods used to estimate the costs and worker hours needed for a software development project. These methods, however, require a deep and wide expertise in particular areas and may still result in unacceptable discrepancies between the estimated costs and the actual costs. The paper presents a framework for a top-down cost estimation method (TCE). The method is based on the assumption that different types of software have different intrinsic complexities. We expect that this method will produce easier, faster, and more accurate estimations in the early stages of a software project",1999,0, 1477,A critique of software defect prediction models,"Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the “quality” of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points the Goldilock's Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches.
Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of “software decomposition” in order to test hypotheses about defect introduction and help construct a better science of software engineering",1999,0, 1478,Defining and validating measures for object-based high-level design,"The availability of significant measures in the early phases of the software development life-cycle allows for better management of the later phases, and more effective quality assessment when quality can be more easily affected by preventive or corrective actions. We introduce and compare various high-level design measures for object-based software systems. The measures are derived based on an experimental goal, identifying fault-prone software parts, and several experimental hypotheses arising from the development of Ada systems for Flight Dynamics Software at the NASA Goddard Space Flight Center (NASA/GSFC). Specifically, we define a set of measures for cohesion and coupling, which satisfy a previously published set of mathematical properties that are necessary for any such measures to be valid. We then investigate the measures' relationship to fault-proneness on three large scale projects, to provide empirical support for their practical significance and usefulness",1999,0, 1479,Cost of ensuring safety in distributed database management systems,"Generally, applications employing database management systems (DBMS) require that the integrity of the data stored in the database be preserved during normal operation as well as after crash recovery. Preserving database integrity and availability needs extra safety measures in the form of consistency checks. Increased safety measures inflict an adverse effect on performance by reducing throughput and increasing response time. This may not be agreeable for some critical applications and thus, a tradeoff is needed. This study evaluates the cost of extra consistency checks introduced in the data buffer cache in order to preserve the database integrity, in terms of performance loss. In addition, it evaluates the improvement in error coverage and fault tolerance, and occurrence of double failures causing long unavailability, with the help of fault injection. The evaluation is performed on a replicated DBMS, ClustRa. The results show that the checksum overhead in a DBMS inflicted with a very high TPC-B-like workload caused a reduction in throughput up to 5%. The error detection coverage improved from 62% to 92%. Fault injection experiments show that corruption in database image went down from 13% to 0%. This indicates that the applications that require high safety, but can afford up to 5% performance loss, can adopt checksum mechanisms",1999,0, 1480,Scenario management in Web-based simulation,"Internet communications in general and the World-Wide Web specifically are revolutionizing the computer industry. Today, the Web is full of important documents and clever applets. Java applets and servlets are beginning to appear that provide useful and even mission critical applications. From the perspective of simulation, a future Web will be full of simulation models and large amounts of simulation-generated data. Many of the models will include two or three dimensional animation as well as virtual reality.
Others will allow human interaction with simulation models to control or influence their execution no matter where the user is located in the world. Analysis of data from Web-based simulations involves greater degrees of freedom than traditional simulations. The number of simulation models available and the amount of simulation data are likely to be much greater. In order to assure the quality of data, the execution of models under a variety of scenarios should be well managed. Since the user community will also be larger, quality assurance should be delegated to agents responsible for defining scenarios and executing models. A major element of simulation analysis is the analysis of output data, which manages the execution of simulation models, in order to obtain statistical data of acceptable quality. Such data may be used to predict the performance of a single system, or to compare the performance of two or more alternative system designs using a single or multiple performance measures",1999,0, 1481,Analysis of fuzzy logic and autoregressive video source predictors using T-tests,"This paper presents performance comparison of a fuzzy logic predictor and an autoregressive predictor. Multi-step ahead predictions were made on the traffic intensity of digital video sources coded with an MPEG coder (hybrid motion compensation/differential pulse code modulation/discrete cosine transform method). Although current coding standards recommend constant bit rate (CBR) output by means of a smoothing buffer, the method inherently produces variable bit rate (VBR) output, and VBR transmission is necessary for high quality delivery. Prediction results described in this paper have been obtained using a fuzzy logic prediction scheme, and the conventional linear autoregressive (AR) algorithm. Prediction errors are analysed with low order statistical parameters and hypothesis tests. The proposed fuzzy prediction methods offer higher accuracy and can be applied to the development of usage parameter control (UPC), connection admission control (CAC) and congestion control algorithms in ATM networks",1999,0, 1482,An efficient algorithm for leader-election in synchronous distributed systems,"Leader election is an important problem in distributed computing. H. Garcia-Molina's (1982) Bully algorithm is a classic solution to leader election in synchronous systems with crash failures. In this paper, we indicate the problems with the Bully algorithm and re-write it to use a failure detector instead of explicit time-outs. We show that this algorithm is more efficient than Garcia-Molina's one in terms of processing time",1999,0, 1483,Sensors signal processing using neural networks,"The authors proposed to use neural networks (single- and multi-layer perceptrons) for the prediction of measurement channel components (including sensors). This allows one to improve the processing quality of measurement information and the design of intelligent sensing instrumentation systems. The hardware and software of such systems, computer power distribution and processing algorithms are considered",1999,0, 1484,Content based very low bit rate video coding using wavelet transform,"In this paper we present a new hybrid compression scheme for videoconferencing and videotelephony applications at very low bit rates (i.e., 32 kbits/s). In these applications, human face is the most important region within a frame and should be coded with high fidelity. 
To preserve perceptually important information at low bit rates, such as face regions, skin-tone is used to detect and adaptively quantize these regions. Novel features of this coder are the use of overlapping block motion compensation in combination with discrete wavelet transform, followed by zerotree entropy coding with a new scanning procedure of wavelet blocks such that the rest of the H.263 framework can be used. At the same total bit-rate, coarser quantization of the background enables the face region to be quantized finely and coded with higher quality. The simulation results demonstrate comparable objective and superior subjective performance when compared with H.263 video coding standard, while providing the advanced feature like scalability functionalities.",1999,0, 1485,Effect of a sparse architecture on generalization behaviour of connectionist networks: a comparative study,"Generalization, the ability of achieving equal performance with respect to the training patterns for the design of the system as well as the unknown test patterns outside the training set, is considered to be the most desirable aspect of a cognitive learning system. For connectionist classifiers, generalization depends on several factors like the network architecture and size, learning algorithm, complexity of the problem and the quality and quantity of the training samples. Various studies on the generalization behaviour of a neural classifier suggest that the architecture and the size of the network should match the size and complexity of the training sample set of the particular problem. The popular ideas for improving generalization are network pruning by removing redundant nodes or growing by adding nodes to reach the optimum size for matching. In this work a feedforward multilayer connectionist model proposed earlier by Chakraborty et al. (1997) with a sparse fractal connection structure between the neurons of adjacent layers has been studied for its suitability compared to other architectures for achieving good generalization. A comparative study with another feedforward multilayer model with network pruning algorithms for pattern classification problem revealed that it is easier to achieve good generalization with the proposed sparse fractal architecture",1999,0, 1486,Scalable latency tolerant architecture (SCALT) and its evaluation,"The deviation of the memory latency is hard to predict in software, especially on the SMP or NUMA systems. As a hardware correspondent method, the multi-thread processor has been devised. However, it is difficult to improve the processor performance with a single program. We have proposed SCALT that uses a buffer in a software context. For the deviation of a latency problem, we have proposed an instruction to check the data arrival existence in a buffer. This paper describes the SCALT, which uses a buffer check instruction, and its performance evaluation results, obtained analyzing the SMP system through event-driven simulation",1999,0, 1487,"Measuring and evaluating maintenance process using reliability, risk, and test metrics","In analyzing the stability of a software maintenance process, it is important that it is not treated in isolation from the reliability and risk of deploying the software that result from applying the process. Furthermore, we need to consider the efficiency of the test effort that is a part of the process and a determinant of reliability and risk of deployment.
The relationship between product quality and process capability and maturity has been recognized as a major issue in software engineering based on the premise that improvements in the process will lead to higher-quality products. To this end, we have been investigating an important facet of process capability, namely stability, as defined and evaluated by trend, change and shape metrics, across releases and within a release. Our integration of product and process measurement serves the dual purpose of using metrics to assess and predict reliability and risk and to evaluate process stability. We use the NASA Space Shuttle flight software to illustrate our approach",1999,0, 1488,Performance enhancement of ERICA+ using fuzzy logic prediction,"The long propagation delay and the existence of highly bursty VBR traffic pose a significant challenge for any end-to-end feedback control scheme on an ATM network. A fuzzy logic based scheme that aims at improving the QoS of the ABR service under these conditions is proposed. This paper applies the proposed fuzzy logic control technique of Qiu (see Proc. IEEE GLOBE-COM'97, Arizona, USA, p.967-71, 1997 and Proc. IEEE ICC'98, Atlanta, Georgia, USA, vol. 3, p.1769-73, 1998) to the standard proposed switch algorithm-Explicit Rate Indication for Congestion Avoidance (ERICA+) so as to study its performance. A fuzzy logic predictor is used to estimate the ABR output switch queue length one round-trip delay in advance. This information, together with the total queue growth rate and current ABR queue length are provided to a fuzzy inference system which generates a rate factor. This factor is used as an additional component to increase/decrease the explicit rate (ER) value of a backward resource management (BRM) cell. Simulation results show that the fuzzy logic control scheme can out-perform ERICA+ in terms of queue delay, variation and cell loss ratio under different propagation delays with the existence of CBR and VBR traffic.",1999,0, 1489,The effect of traffic prediction on ABR buffer requirement,"It has been demonstrated that traffic prediction improves the network efficiency and QoS performance in ATM networks. This was achieved by avoidance and control of congestion. In this paper, the effects of traffic prediction schemes on the available bit rate buffer are studied. It can be seen that under the same network environment and traffic intensity, traffic prediction reduces the buffer size for the ABR service. In particular, the fuzzy logic prediction introduced by Qiu (see Proc. IEEE GLOBECOM'97, p.967-71, 1997, and Proc. IEEE International Conference on Communications (ICC'98), p.1769-73, 1998) performs better than conventional autoregressive prediction.",1999,0, 1490,Dynamic channel allocation by using allocation function in PCS,"A dynamic channel allocation method applicable to the microcell system such as the personal communications service (PCS) is presented. The factors determining the quality of the channels are analyzed. The allocation function that consists of these factors is defined. The channel allocation for a call is processed by using the function value. While a channel is being allocated to a call, an adequate quality channel is used instead of the best quality channel.
The probability of call blocking is decreased even in the cells having a high traffic load.",1999,0, 1491,UPC parameter estimation using virtual buffer measurement with application to AAL2 traffic,"This paper describes an efficient, measurement-based technique for estimating all minimal sets of usage parameter control (UPC) parameters for variable bit rate (VBR) ATM traffic streams. This virtual buffer measurement technique is applicable to simulation or live traffic measurement environments and is considerably more efficient than direct evaluation using the generic cell rate algorithm (GCRA). We derive analytically the relationships between virtual buffer measurements and minimal sets of UPC parameters as determined by GCRA, then illustrate the technique by estimating and verifying minimal UPC parameters for ATM Adaptation Layer, Type 2 (AAL2) traffic streams representing voice traffic",1999,0, 1492,Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042),"The following topics were dealt with: satellite systems and wireless access; ATM switches; multimedia systems; quality of service and performance analysis; CDMA; communication hardware and implementation; traffic models and control for multimedia; mobile ad-hoc networks; future satellite communication systems; wireless components design and implementation; emerging agent software technology; channel modeling and analysis; equalization and adaptive filters; interference suppression; receiver processing and decoding; network operation and distributed management; medium access control; ATM based information infrastructure; packet radio resource management; coding and modulation issues in wireless systems; turbo codes for storage; TCP issues; communications service software; wireless receivers and diversity; signal design for communication systems; signal processing for storage; multiuser detection performance; optical networks; high speed networks; switch architectures; flow control; network protocols design and analysis; scheduling; ATM networks; optical switching and WDM networks; resource allocation; routing and network reliability; fast IP routing; admission control; resource management for wireless networks; buffer management mechanisms; application level services; differentiated services; multimedia traffic; Internet traffic; Web-based management; enterprise network design, implementation, operation and management; antenna and array processing; channel estimation; communication signal processors; interference cancellation; signal detection; multiple antenna systems; iterative decoding; multiuser communications; source and channel coding; modulation and demodulation; fading channels; radio link scheduling for multimedia; IMT-2000 systems; network access and handoff",1999,0, 1493,DFT and on-line test of high-performance data converters: a practical case,This paper discusses a Design-for-Testability (DFT) technique applicable to pipelined Analog-to-Digital Converters (ADC). The objective of this DFT is to improve both the on- and off-line testability of these important mixed-signal ICs,1999,0, 1494,Autonomous agents for online diagnosis of a safety-critical system based on probabilistic causal reasoning,"The goal of decentralization in failure detection, identification, and recovery of high-assurance systems is to focus diagnosis on safety-critical components. 
The goal of probabilistic causal reasoning in diagnosis is to improve performance of fault isolation. However, this reasoning method is dependent on prior reliability knowledge. Our approach aims at focusing the overall diagnostic cycle in two independent ways: first, autonomous agents diagnose high-consequence appliances of a modular manufacturing system and second, the prior reliability data needed is derived from a quality assurance program. Therefore, we present a hybrid technique combining quality assurance data, failure mode and effects analysis and probabilistic causal reasoning. We develop a dynamic Bayesian network which, given evidence from sensor observations, is able to learn and reason over time. We successfully apply autonomous diagnosis agents in concert with reliability assessment to improve online diagnosis of a pneumatic-mechanical device",1999,0, 1495,Polymorphism measures for early risk prediction,"Polymorphism is an essential feature of the object-oriented paradigm. However, polymorphism induces hidden forms of class dependencies, which may impact software quality. In this paper, we define and empirically investigate the quality impact of polymorphism on OO design. We define measures of two main aspects of polymorphic behaviors provided by the C++ language: polymorphism based on compile time linking decisions (overloading functions for example) and polymorphism based on run-time binding decisions (virtual functions for example). Then, we validate our measures by evaluating their impact on class fault-proneness, a software quality attribute. The results show that our measures are capturing different dimensions than LOC, a size measure, and that they are significant predictors of fault proneness. In fact, we show that they constitute a good complement to the existing OO design measures.",1999,0, 1496,Evaluation of telemedical services,"With the rapidly increasing development of telemedicine technology, the evaluation of telemedical services is becoming more and more important. However, professional views of the aims and methods of evaluation are different from the perspective of computer science and engineering compared to that of medicine and health policy. We propose that a continuous evaluation strategy should be chosen which guides the development and implementation of telemedicine technologies and applications. The evaluation strategy is divided into four phases, in which the focus of evaluation is shifted from technical performance of the system in the early phases to medical outcome criteria and economical aspects in the later phases. We review the study design methodology established for clinical trials assessing therapeutic effectiveness and diagnostic accuracy, and discuss how it can be adapted to evaluation studies in telemedicine. As an example, we describe our approach to evaluation in a teleconsultation network in ophthalmology",1999,0, 1497,Applying a scalable CORBA event service to large-scale distributed interactive simulations,"Next-generation distributed interactive simulations (DISs) will have stringent quality of service (QoS) requirements for throughput, latency and scalability, as well as requirements for a flexible communication infrastructure to reduce software lifecycle costs. The CORBA event service provides a flexible model for asynchronous communication among distributed and collocated objects. However, the standard CORBA event service specification lacks important features and QoS optimizations required by DIS systems.
This paper makes five contributions to the design, implementation and performance measurement of DIS systems. First, it describes how the CORBA event service can be implemented to support key QoS features. Second, it illustrates how to extend the CORBA event service so that it is better suited for DISs. Third, it describes how to develop efficient event dispatching and scheduling mechanisms that can sustain high throughput. Fourth, it describes how to use multicast protocols to reduce network traffic transparently and to improve system scalability. Finally, it illustrates how an event service framework can be strategized to support configurations that facilitate high throughput, predictable bounded latency, or some combination of each",1999,0, 1498,High-level Integrated Design Environment for dependability (HIDE),"For most systems, especially dependable real-time systems for critical applications, an effective design process requires an early validation of the concepts and architectural choices, without wasting time and resources prior to checking whether the system fulfils its objectives or needs some re-design. Although a thorough system specification surely increases the level of confidence that can be put on a system, it is insufficient to guarantee that the system will adequately perform its tasks during its entire life-cycle. The early evaluation of system characteristics like dependability, timeliness and correctness is thus necessary to assess the conformance of the system under development to its targets. This paper presents some activities currently being performed towards an integrated environment for the design and the validation of dependable systems",1999,0, 1499,Flexible processing framework for online event data and software triggering,"The BABAR experiment is the particle detector at the new PEP-II “B factory” facility at the Stanford Linear Accelerator Center. The experiment's goal is the detailed study of the fundamental phenomenon of charge-parity symmetry breaking in B meson decays. Within the detector's online system, the Online Event Processing subsystem (OEP) provides for near-real-time processing of data delivered from the data acquisition system's event builder. It supports four principal tasks: the final “Level 3” software trigger, rapid-feedback fast data quality monitoring, event displays, and the final steps in online calibrations. In order to accommodate the expected 2000 Hz event rate, the OEP subsystem runs on a computing farm, receiving events on each node into a shared memory buffer system and coordinating processes performing analysis on this data. Analysis tasks are assigned administrative characteristics providing for protection of data integrity and ensuring that indispensable tasks such as triggering are always performed, while others are performed as resources are available. Data access in OEP uses BABAR's standard offline reconstruction framework. Thus, notably, the Level 3 trigger software could be developed in advance in the full offline simulation environment and yet be suitable for near-real-time online running with minimal changes",1999,0, 1500,"Performance driven architecture development: issues, methods, tools and paradigms","This paper addresses the process of deriving, implementing, and analyzing complex systems architecture. The top-level concerns that take precedence when performance constraints must be met invert the traditional order and concerns within the design process.
This paper presents a non-traditional view of systems architecture development which is motivated by the requirement to produce an architecture capable of predictably meeting its time constraints in addition to all the traditional correctness, quality, and maintainability concerns",1999,0, 1501,Solving problems with advanced technology,"Ten years of Air Force experience has demonstrated that advanced NDE makes a quantum improvement in these areas. In NDE, as in many other industrial process control applications, the trend is clearly to computers, information technology (IT), and matching of materials with testing methods. This leaves only the strategic decisions to the human. The inspection and information management technologies are particularly effective for rapid fault detection and fault assessment when hidden structural corrosion, hidden structural cracks, composite manufacturing anomalies or disbonding occurs. Fault detection and standardized repair information is readily available in such documentation as the structural manual for standardized repair and the NDI Manual. The use of advanced technology provides many benefits such as: improved aircraft safety, reduction in total operating costs, improved aircraft design and manufacturing processes, improved maintenance practices, and significant reduction of aircraft accident rates. This paper describes the use of X-ray radiography, laser ultrasonics, and neutron radiography as the methods of choice to inspect helicopter rotor blades or any composite part",1999,0, 1502,Performance evaluation of power line filter. A case study,"One of the basic controls of electromagnetic interference (EMI) is a power line filter. The performance of a filter depends on various factors, such as voltage and current ratings, environmental conditions, mechanical considerations and the like. Filter performance is required to be evaluated in full load conditions rather than no load conditions, since the former can predict the actual loss levels, which the latter cannot. Another factor, which dominates filter performance, is the type of load to which the filter will be exposed in real practice. In this paper an attempt has been made to show that different types of load may result in different loss levels for the filter over the desired frequency spectrum. Comparison of loss levels obtained during the test and also during actual use has also been dealt with.",1999,0, 1503,A resource allocation mechanism to provide guaranteed service to mobile multimedia applications,"We introduce an extended RSVP protocol called Mobile RSVP which supports multimedia and real-time traffic in a Mobile IP based environment. Our proposed protocol is appropriate for those mobile nodes that cannot a priori determine their mobility behaviour. However, we have also analysed the effect of a deterministic mobility specification. We also describe the conceptual design of the reservation mechanism of Mobile RSVP and discuss the hand-over procedure. Efforts have been made to analyse the performance of Mobile RSVP and to assess its overhead and control traffic.",1999,0, 1504,Predicting fault incidence using software change history,"This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change and we measure the extent of decay by counting the number of faults in code in a period of time.
Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: For instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on the average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults. Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight",2000,1, 1505,An application of fuzzy clustering to software quality prediction,"The ever increasing demand for high software reliability requires more robust modeling techniques for software quality prediction. The paper presents a modeling technique that integrates fuzzy subtractive clustering with module-order modeling for software quality prediction. First fuzzy subtractive clustering is used to predict the number of faults, then module-order modeling is used to predict whether modules are fault-prone or not. Note that multiple linear regression is a special case of fuzzy subtractive clustering. We conducted a case study of a large legacy telecommunication system to predict whether each module will be considered fault-prone. The case study found that using fuzzy subtractive clustering and module-order modeling, one can classify modules which will likely have faults discovered by customers with useful accuracy prior to release",2000,1, 1506,"Using product, process, and execution metrics to predict fault-prone software modules with classification trees","Software-quality classification models can make predictions to guide improvement efforts to those modules that need it the most. Based on software metrics, a model can predict which modules will be considered fault-prone, or not. We consider a module fault-prone if any faults were discovered by customers. Useful predictions are contingent on the availability of candidate predictors that are actually related to faults discovered by customers. With a diverse set of candidate predictors in hand, classification-tree modeling is a robust technique for building such software quality models. This paper presents an empirical case study of four releases of a very large telecommunications system. The case study used the regression-tree algorithm in the S-Plus package and then applied our general decision rule to classify modules. Results showed that in addition to product metrics, process metrics and execution metrics were significant predictors of faults discovered by customers",2000,1, 1507,3-D Imaging using Novel Techniques in Ultra-Wideband Radar,"The RadioStar programme at the Norwegian University of Science and Technology is focused on understanding the limiting factors in subsurface imaging using electromagnetic waves. A prototype radar system using state-of-the-art technology is developed as a platform for research in ultra-wideband 3-D radar imaging methods. A particular focus is placed in the problem of microwave rendering, or how to make use of microwaves to produce images that go beyond the quality of traditional speckled radar images. 
The programme aims at developing new and improved hardware and software components that will ultimately lead to more capable and easy to use radar imagers.",2000,0, 1508,A Software Development Process for Small Projects,"The authors' development process integrates portions of an iterative, incremental process model with a quality assurance process that assesses each process phase's quality and a measurement process that collects data to guide process improvement. The process's goal is to produce the high quality and timely results required for today's market without imposing a large overhead on a small project.",2000,0, 1509,A model for the sizing of software updates,"The quality and reliability of software updates (SUs) are critical to a system vendor and its customers. As a result, it is important that SUs shipped to customers be successfully integrated into the field generic. A large amount of code must be shipped in an SU because customers want as many fixes and features as possible without compromising the reliability of their systems. However, as the size of an SU increases, so does its probability of field failure, thus making larger SUs riskier. The fundamental question is: How large should an SU be to keep the risk under control? This paper studies the tradeoff between the desire to ship large SUs and the failure risk carried with them. We formulate the problem as a nonlinear programming (NLP) problem, investigate it under various conditions, and derive sizing strategies for the SU. In particular, we derive a formula for the maximal SU size. We make a connection between software reliability and linear programming which, to the best of our knowledge, appears here for the first time. We also introduce some basic ideas related to the customer operational environment and explain the importance of the environment to software performance using an interesting analogy.",2000,0, 1510,Reducing the complexity of wireless Q3 adapter development,"The AUTOPLEX® Q3 adapter was designed to create a telecommunication management network (TMN) agent for the AUTOPLEX System 1000. The agent, which carries out directives and emits notification, provides network management capability for an operations support system (OSS), also known as a manager, via standard TMN communication protocol and services. The Q3 adapter provides a full range of TMN system management functions, which consist of fault management, configuration management, performance management, and security management. The development of the Q3 adapter was considered very costly, based on the feature estimate by the development organization. This paper describes an overlay development approach and a flexible software architecture that have been incorporated into the Q3 adapter design to make it more cost effective. These factors reduce the complexity of its development and make it easier to maintain different software releases and to modify different TMN information models. The design also creates an independent software platform that enables many other wireless applications to easily build their interfaces to the AUTOPLEX System 1000.",2000,0, 1511,Use of fault tree analysis for evaluation of system-reliability improvements in design phase,"Traditional failure mode and effects analysis is applied as a bottom-up analytical technique to identify component failure modes and their causes and effects on the system performance, estimate their likelihood, severity and criticality or priority for mitigation.
Failure modes and their causes, other than those associated with hardware, primarily electronic, remained poorly addressed or not addressed at all. Likelihood of occurrence was determined on the basis of component failure rates or by applying engineering judgement in their estimation. Resultant prioritization is consequently difficult so that only the apparent safety-related or highly critical issues were addressed. When thoroughly done, traditional FMEA or FMECA were too involved to be used as an effective tool for reliability improvement of the product design. Fault tree analysis applied to the product in a top-down manner, in view of its functionality, failure definition, architecture and stress and operational profiles, provides a methodical way of following the product's functional flow down to the low level assemblies, components, failure modes and respective causes and their combination. Flexibility of modeling of various functional conditions and interactions such as enabling events, events with specific priority of occurrence, etc., using FTA, provides for accurate representation of their functional interdependence. In addition to being capable of accounting for mixed reliability attributes (failure rates mixed with failure probabilities), fault trees are easy to construct and change for quick tradeoffs as roll up of unreliability values is automatic for instant evaluation of the final quantitative reliability results. Failure mode analysis using the fault tree technique that is described in this paper allows for real, in-depth engineering evaluation of each individual cause of a failure mode regarding software and hardware components, their functions, stresses, operability and interactions",2000,0, 1512,A Petri-net approach for early-stage system-level software reliability estimation,"This paper addresses software reliability modeling issues at the early stage of a software development for large-scale software products. The hierarchical view of a software product provides the modeling framework at the system level. This paper shows how to take the hierarchical description and perform the system-level software reliability estimation using Petri net mechanisms. The Petri net modeling techniques are proposed for handling the dependency among software modules. Furthermore, this paper addresses issues of early-stage software reliability modeling when failure data is not available",2000,0, 1513,Enhancing the predictive performance of the Goel-Okumoto software reliability growth model,"In this paper, enhancement of the performance of the Goel-Okumoto Reliability Growth model is investigated using various smoothing techniques. The method of parameter estimation for the model is the maximum likelihood method. The evaluation of the performance of the model is judged by the relative error of the predicted number of failures over future time intervals relative to the number of failures eventually observed during the interval. The use of data analysis procedures utilizing the Laplace trend test is investigated. These methods test for reliability growth throughout the data and establish ""windows"" that censor early failure data and provide better model fits.
The research showed conclusively that the data analysis procedures resulted in improvement in the models' predictive performance for 41 different sets of software failure data collected from software development labs in the United States and Europe",2000,0, 1514,A practical software-reliability measurement framework based on failure data,"Software-reliability measurement based on failure data can provide estimations of software reliability at its present state and predictions of software reliability in the future as testing goes on. This paper analyzes the problems that exist in the measurement process and proposes a practical software-reliability measurement framework based on the state of the art to give relatively better software reliability predictions. A study on the validity of the modification methods, i.e. the recalibration technique and the combination method, has also been done. The results show that the modification methods could give better prediction results but not always, so predictive quality analysis still needs to be done",2000,0, 1515,Modeling and analysis for multiple stress-type accelerated life data,"This paper describes a model for multiple stress-type accelerated life data. In addition, the use of an algorithm, which was specifically developed for this model, is illustrated. The model is based on the widely known log-linear model (1990) and is formulated for the Weibull and lognormal distributions for a variety of censoring schemes using likelihood theory. An algorithm has been developed for the solution of this model, and implemented in a recently released software package, ALTA Pro™, specific to accelerated life data analysis. The algorithm has been specifically designed to be very flexible and has the capability of simultaneously solving for up to eight different stress-types. The advantage of this formulation is that it combines in one model most of the known life-stress relationships for one or two types of stresses (such as the temperature-nonthermal model), as well as the multivariable proportional hazards model. This yields a single general likelihood function (for a given distribution) whose solution is independent of both the chosen life-stress relationship and the number of stress-types. In addition, this model allows for simultaneous analysis of continuous, categorical and indicator variables. The solution to this model provides the engineers with an opportunity to expand their selection of types of stresses and test conditions when testing products",2000,0, 1516,Confidence limits on the inherent availability of equipment,"The inherent availability is an important performance index for a repairable system, and is usually estimated from the times-between-failures and the times-to-restore data. The formula for calculating a point estimate of the inherent availability from collected data is well known. But the quality of the calculated inherent availability is suspect because of small data sample sizes. The solution is to use the confidence limits on the inherent availability at a given confidence level, in addition to the point estimator. However, there is no easy way to compute the confidence limits on the calculated availability. Actually, no adequate approach to compute the confidence interval for the inherent availability, based on sample data, is available. In this paper, the uncertainties of small random samples are taken into account.
The estimated mean times between failures, mean times to restore and the estimated inherent availability are treated as random variables. When the distributions of both times-between-failures and times-to-restore are exponential, the exact confidence limits on the inherent availability are derived. Based on reasonable assumptions, a nonparametric method of determining the approximate confidence limits on the inherent availability from data is proposed, without assuming any times-between-failures and times-to-restore distributions. Numerical examples are provided to demonstrate the validity of the proposed solution, which are compared with the results obtained from Monte Carlo simulations. It turns out that the proposed method yields satisfactory accuracy for engineering applications",2000,0, 1517,Applying adaptation spaces to support quality of service and survivability,"Adaptation is a key technique in constructing survivable information systems. Allowing a system to continue running, albeit with reduced functionality or performance, in the face of reduced resources, attacks, or broken components is often preferable to either complete shutdown or continued normal operation in compromised mode. However, unpredictable adaptation can sometimes be worse than the problem it seeks to cope with. In this paper we introduce adaptation spaces, which precisely and predictably specify the adaptation of a software component. We then present two survivable systems that have been specified and implemented using adaptation spaces. The first example uses user preferences regarding quality in an audio application to guide the adaptation when available bandwidth decreases. The second trades off performance overhead with intrusion resistance for “stack-smashing” attacks. We formally define an adaptation space and show briefly how it enables certain kinds of reasoning about adaptive applications. We conclude with related work and future plans",2000,0, 1518,Evaluation of fault tolerance latency from real-time application's perspectives,"Information on Fault Tolerance Latency (FTL), which is defined as the total time required by all sequential steps taken to recover from an error, is important to the design and evaluation of fault-tolerant computers used in safety-critical real-time control systems with deadline information. In this paper, we evaluate FTL in terms of several random and deterministic variables accounting for fault behaviors and/or the capability and performance of error-handling mechanisms, while considering various fault tolerance mechanisms based on the trade-off between temporal and spatial redundancy, and use the evaluated FTL to check if an error-handling policy can meet the Control System Deadline (CSD) for a given real-time application",2000,0,1175 1519,Software reliability models with time-dependent hazard function based on Bayesian approach,"In this paper, two models predicting mean time until next failure based on a Bayesian approach are presented. Times between failures follow Weibull distributions with stochastically decreasing ordering on the hazard functions of successive failure time intervals, reflecting the tester's intent to improve the software quality with each corrective action. We apply the proposed models to actual software failure data and show they give better results under sum of square errors criteria as compared to previous Bayesian models and other existing times between failures models.
Finally, we utilize likelihood ratios criterion to compare new model's predictive performance",2000,0, 1520,Survivability through customization and adaptability: the Cactus approach,"Survivability, the ability of a system to tolerate intentional attacks or accidental failures or errors, is becoming increasingly important with the extended use of computer systems in society. While techniques such as cryptographic methods, intrusion detection, and traditional fault tolerance are currently being used to improve the survivability of such systems, new approaches are needed to help reach the levels that will be required in the near future. This paper proposes the use of fine-grain customization and dynamic adaptation as key enabling technologies in a new approach designed to achieve this goal. Customization not only supports software diversity, but also allows customized tradeoffs to be made between different QoS attributes including performance, security, reliability and survivability. Dynamic adaptation allows survivable services to change their behavior at runtime as a reaction to anticipated or detected intrusions or failures. The Cactus system provides support for both fine-grain customization and dynamic adaptation, thereby offering a potential solution for building survivable software in networked systems",2000,0, 1521,The effectiveness of software development technical reviews: a behaviorally motivated program of research,"Software engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews",2000,0, 1522,Design properties and object-oriented software changeability,"The assessment of the changeability of software systems is of major concern for buyers of the large systems found in fast-moving domains such as telecommunications. One way of approaching this problem is to investigate the dependency between the changeability of the software and its design, with the goal of finding design properties that can be used as changeability indicators. In our research, we defined a model of software changes and change impacts, and implemented it for the C++ language. Furthermore, we identified a set of nine object-oriented (OO) design metrics, four of which are specifically geared towards changeability detection. The model and the metrics were applied to three test systems of industrial size. The experiment showed a high correlation, across systems and across changes, between changeability and the access to a class by other classes through method invocation or variable access. On the other hand, no result could support the hypothesis that the depth of the inheritance tree has some influence on changeability. 
Furthermore, our results confirm the observation of others that the use of inheritance is rather limited in industrial systems",2000,0, 1523,Analyzing Java software by combining metrics and program visualization,"Shimba, a prototype reverse engineering environment, has been built to support the understanding of Java software. Shimba uses Rigi and SCED to analyze, visualize, and explore the static and dynamic aspects, respectively, of the subject system. The static software artifacts and their dependencies are extracted from Java byte code and viewed as directed graphs using the Rigi reverse engineering environment. The static dependency graphs of a subject system can be annotated with attributes, such as software quality measures, and then be analyzed and visualised using scripts through the end user programmable interface. Shimba has recently been extended with the Chidamber and Kemerer suite of object oriented metrics. The metrics measure properties of the classes, the inheritance hierarchy, and the interaction among classes of a subject system. Since Shimba is primarily intended for the analysis and exploration of Java software, the metrics have been tailored to measure properties of software systems using a reverse engineering environment. The static dependency graphs of the system under investigation are decorated with measures obtained by applying the object oriented metrics to selected software components. Shimba provides tools to examine these measures, to find software artifacts that have values that are in a given range, and to detect correlations among different measures. The object oriented analysis of the subject Java system can be investigated further by exporting the measures to a spreadsheet",2000,0, 1524,On the dynamic quality of service in wireless computing environments,"This paper considers adaptive and reliable quality of service in a dynamic wireless computing environment. We first develop a quality of service architecture that spans the networking and end-to-end systems software layers. Then, we design resource management algorithms for adaptively sharing the networking resources among applications. Finally, we develop a seamless networking infrastructure integrating the underlying networks",2000,0, 1525,A software supervision approach for distributed system performance modelling,"Large software systems, such as telecom applications, are often built on reused components. Such systems are often developed using components from previous similar projects, or simply using a reconfiguration of the same set of components from a previous similar project. Performance prediction of such software system architectural design provides a quantified measurement for better design quality. The design is specified in a communicating extended finite state machine model. The model is extended with stochastic information and simulated for performance prediction. The stochastic extension requires performance data for each component and load information of the system environment. This paper addresses the problem of abstracting stochastic performance model of a component to be reused in a software architectural design. We use a software supervision approach to monitor the performance of a deployed component and collect its execution trace, including individual time stamps of the externally observable signals. We then derive a stochastic performance model of the component from the trace. The model can be used later in performance prediction when the component is reused. 
We applied this method to a control program of a small telephone exchange. We were able to reuse a component and its performance data in a new exchange design",2000,0, 1526,Proactive maintenance tools for transaction oriented wide area networks,"The motivation of the work presented in this paper comes from a real network management center in charge of supervising a very large hybrid telecommunications/data transaction-oriented network. We present a set of tools that we have developed and implemented in the AT&T Transaction Access Services (TAS) network, in order to automate and facilitate the process of diagnosing network faults and identifying the potentially affected elements, resources and customers. Specifically in this paper we describe the development implementation and use of the following systems: (a) the TAS Information and Tracking System (TIMATS) that provides a common framework for the storage and retrieval of provisioning, capacity management and maintenance data; (b) the Transactions Event Viewer (TEVIEW) system that generates, filters, and presents diagnostic events that indicate system occurrences or conditions that may cause a degradation of the service; and (c) the Transaction Instantaneous Anomaly Notification (TRISTAN) system which implements an adaptive network anomaly detection software that detects network and service anomalies of TAS as dynamically defined violations of the base-lined performance characteristics and profiles",2000,0, 1527,A network measurement architecture for adaptive applications,"The quality of network connectivity between a pair of Internet hosts can vary greatly. Adaptive applications can cope with these differences in connectivity by choosing alternate representations of objects or streams or by downloading the objects from alternate locations. In order to effectively adapt, applications must discover the condition of the network before communicating with distant hosts. Unfortunately, the ability to predict or report the quality of connectivity is missing in today's suite of Internet services. To address this limitation, we have developed SPAND (shared passive network performance discovery), a system that facilitates the development of adaptive network applications. In each domain, applications make passive application specific measurements of the network and store them in a local centralized repository of network performance information. Other applications may retrieve this information from the repository and use the shared experiences of all hosts in a domain to predict future performance. In this way, applications can make informed decisions about adaptation choices as they communicate with distant hosts. In this paper, we describe and evaluate the SPAND architecture and implementation. We show how the architecture makes it easy to integrate new applications into our system and how the architecture has been used with specifics types of data transport. Finally, we describe LookingGlass, a WWW mirror site selection tool that uses SPAND. LookingGlass meets the conflicting goals of collecting passive network performance measurements and maintaining good client response times. 
In addition, LookingGlass's server selection algorithms based on application level measurements perform much better than techniques that rely on geographic location or route metrics",2000,0, 1528,Quality of electronic design: from architectural level to test coverage,"The purpose of this paper is to present a design methodology that complements existing methodologies by addressing the upper and the lower extremes of the design flow. The aim of the methodology is to increase design and product quality. At system level, emphasis is given to architecture generation, reconfiguration and quality assessment. Quality metrics and criteria, focused on design and test issues, are used for the purpose. At physical level, a Defect-Oriented Test (DOT) approach and test reuse are the basis of the methodology to estimate test effectiveness, or defects coverage. Tools which implement the methodology are presented. Results are shown for a public domain PIC processor, used as a SOC embedded core",2000,0, 1529,Achieving the quality of verification for behavioral models with minimum effort,"When designing a system in the behavioral level, one of the most important steps to be taken is verifying its functionality before it is released to the logic/PD design phase. One may consider behavioral models as oracles in industries to test against when the final chip is produced. In this work, we use branch coverage as a measure for the quality of verifying/testing behavioral models. Minimum effort for achieving a given quality level can be realized by using the proposed stopping rule. The stopping rule guides the process to switch to a different testing strategy using different types of patterns, i.e. random vs. functional, or using different set of parameters to generate patterns/test cases, when the current strategy is expected not to increase the coverage. We demonstrate the use of the stopping rule on two complex behavioral level VHDL models that were tested for branch coverage with 4 different testing phases. We compare savings of the number of applied testing patterns and quality of testing both with and without using the stopping rule, and show that switching phases at certain points guided by the stopping rule would yield to the same or even better coverage with less number of testing patterns",2000,0, 1530,Advancing customer-perceived quality in the EDA industry,"This paper describes the initial efforts of two quality groups, one representing EDA software suppliers, and the other representing their major customers, to understand and systematically improve EDA industry quality, as perceived by the customers. The groups continue to work toward this goal, and have reached agreement on several issues, including a defect classification system by severity, eight quality metrics, and a plan for reporting and publishing industry average quality trends using these metrics",2000,0, 1531,Realistic worst-case modeling by performance level principal component analysis,A new algorithm to determine the number and value of realistic worst-case models for the performance of module library components is presented in this paper. The proposed algorithm employs principal components analysis (PCA) at the performance level to identify the main independent sources of variance for the performance of a set of library modules. 
Response surfaces methodology (RSM) and propagation of variance (POV) based algorithms are used to efficiently compute the performance level covariance matrix and nonlinear maximum likelihood optimization to trace back worst case models at the SPICE level. The effectiveness of the proposed methodology has been demonstrated by determining a realistic set of worst case models for a 0.25 μm CMOS standard cell library,2000,0, 1532,Sensitivity analysis of modular dynamic fault trees,"Dynamic fault tree analysis, as currently supported by the Galileo software package, provides an effective means for assessing the reliability of embedded computer-based systems. Dynamic fault trees extend traditional fault trees by defining special gates to capture sequential and functional dependency characteristics. A modular approach to the solution of dynamic fault trees effectively applies Binary Decision Diagram (BDD) and Markov model solution techniques to different parts of the dynamic fault tree model. Reliability analysis of a computer-based system tells only part of the story, however. Follow-up questions such as “Where are the weak links in the system?”, “How do the results change if my input parameters change?”, and “What is the most cost effective way to improve reliability?” require a sensitivity analysis of the reliability analysis. Sensitivity analysis (often called Importance Analysis) is not a new concept, but the calculation of sensitivity measures within the modular solution methodology for dynamic and static fault trees raises some interesting issues. In this paper we address several of these issues, and present a modular technique for evaluating sensitivity, a single traversal solution to sensitivity analysis for BDDs, a simplified methodology for estimating sensitivity for Markov models, and a discussion of the use of sensitivity measures in system design. The sensitivity measures for both the Binary Decision Diagram and Markov approaches presented in this paper are implemented in Galileo, a software package for reliability analysis of complex computer-based systems",2000,0, 1533,Reaching efficient fault-tolerance for cooperative applications,"Cooperative applications are widely used, e.g. as parallel calculations or distributed information processing systems. While such applications meet the users' demand and offer a performance improvement, the susceptibility to faults of any used computer node is raised. Often a single fault may cause a complete application failure. On the other hand, the redundancy in distributed systems can be utilized for fast fault detection and recovery. So, we followed an approach that is based on duplication of each application process to detect crashes and faulty functions of single computer nodes. We concentrate on two aspects of efficient fault-tolerance: fast fault detection and recovery without delaying the application progress significantly. The contribution of this work is, first, a new fault-detecting protocol for duplicated processes. Secondly, we enhance a roll forward recovery scheme so that it is applicable to a set of cooperative processes in conformity to the protocol",2000,0, 1534,NFTAPE: a framework for assessing dependability in distributed systems with lightweight fault injectors,"Many fault injection tools are available for dependability assessment.
Although these tools are good at injecting a single fault model into a single system, they suffer from two main limitations for use in distributed systems: (1) no single tool is sufficient for injecting all necessary fault models; (2) it is difficult to port these tools to new systems. NFTAPE, a tool for composing automated fault injection experiments from available lightweight fault injectors, triggers, monitors, and other components, helps to solve these problems. We have conducted experiments using NFTAPE with several types of lightweight fault injectors, including driver-based, debugger-based, target-specific, simulation-based, hardware-based, and performance-fault injections. Two example experiments are described in this paper. The first uses a hardware fault injector with a Myrinet LAN; the other uses a Software Implemented Fault Injection (SWIFI) fault injector to target a space-imaging application",2000,0, 1535,Scheduling solutions for supporting dependable real-time applications,"This paper deals with tolerance to timing faults in time-constrained systems. TAFT (Time Aware Fault-Tolerant) is a recently devised approach which applies tolerance to timing violations. According to TAFT, a task is structured in a pair, to guarantee that deadlines are met (although possibly offering a degraded service) without requiring the knowledge of task attributes difficult to estimate in practice. A wide margin of action is left by the TAFT approach in scheduling the task pairs, leading to disparate performances; up to now, little attention has been devoted to analysing this aspect. The goal of this work is to investigate the most appropriate scheduling policies to adopt in a system structured in the TAFT fashion, in accordance with system conditions and application requirements. To this end, an experimental evaluation will be conducted based on a variety of scheduling policies, to derive useful indications for the system designer about the most rewarding policies to apply",2000,0, 1536,End-to-end predictability in real-time push-pull communications,"Push-pull communications is a real-time middleware service that has been implemented on top of a resource kernel operating system. It is a many-to-many communication model that can support multi-participant real-time applications. It covers both “push” (publisher/subscriber model) and “pull” (data transfer initiated by a receiver) communications. Unlike the publisher/subscriber model, different publishers and subscribers can operate at different data rates and also can choose another (intermediate) node to act as their proxy and deliver data at their desired frequency. We specifically address end-to-end predictability of the push-pull model. The scheduling mechanisms in the OS, the middleware architecture and the underlying network QoS support can impact the timeliness of data. We obtain our end-to-end timeliness and bandwidth guarantees by using a resource kernel offering CPU reservations and the use of a guaranteed bandwidth network (DARWIN) between push-pull end-points. We formally analyze the problem of choosing an optimal proxy location within a network. We discuss our implementation of this system and carry out a detailed performance evaluation on an integrated RT-Mach-Darwin testbed at Carnegie Mellon. Our results open up interesting research directions for the scheduling of computation and communication resources for the applications using the push-pull service.
The push-pull framework can easily be incorporated in an RT-CORBA Event Service model",2000,0, 1537,SAABNet: Managing qualitative knowledge in software architecture assessment,"Quantitative techniques have traditionally been used to assess software architectures. We have found that early in the development process there is often insufficient quantitative information to perform such assessments. So far the only way to make qualitative assessments about an architecture is to use qualitative assessment techniques such as peer reviews. The problem with this type of assessment is that such assessments depend on the knowledge of the expert designers who use the techniques. In this paper we introduce a technique, SAABNet (Software Architecture Assessment Belief Network), that provides support to make qualitative assessments of software architectures",2000,0, 1538,Diagnosing rediscovered software problems using symptoms,"This paper presents an approach to automatically diagnosing rediscovered software failures using symptoms, in environments in which many users run the same procedural software system. The approach is based on the observation that the great majority of field software failures are rediscoveries of previously reported problems and that failures caused by the same defect often share common symptoms. Based on actual data, the paper develops a small software failure fingerprint, which consists of the procedure call trace, problem detection location, and the identification of the executing software. The paper demonstrates that over 60 percent of rediscoveries can be automatically diagnosed based on fingerprints; less than 10 percent of defects are misdiagnosed. The paper also discusses a pilot that implements the approach. Using the approach not only saves service resources by eliminating repeated data collection for and diagnosis of reoccurring problems, but it can also improve service response time for rediscoveries",2000,0, 1539,Efficient free-energy calculations by the simulation of nonequilibrium processes,"Discusses a particular application of computational physics: the calculation of thermodynamic properties by computer simulation methods. More specifically, we focus on one of the most difficult tasks in this context: the estimation of free energies in problems such as the study of phase transformations in solids and liquids, the influence of lattice defects on the mechanical properties of technological materials, the kinetics of chemical reactions, and protein-folding mechanisms in biological processes. Because the free energy plays a fundamental role, the development of efficient and accurate techniques for its calculation has attracted considerable attention during the past 15 years and is still an active field of research. We discuss some general aspects of equilibrium and nonequilibrium approaches to free-energy measurement and present the “reversible scaling” technique which we have recently developed",2000,0, 1540,Application of genetic algorithms to pattern recognition of defects in GIS,"A computerized pattern recognition system based on the analysis of phase resolved partial discharge (PRPD) measurements, and utilizing genetic algorithms, is presented. The recognition system was trained to distinguish between basic types of defects appearing in gas-insulated systems (GIS), such as voids in spacers, moving metallic particles, protrusions on electrodes, and floating electrodes. The classification of defects is based on 60 measurement parameters extracted from PRPD patterns.
Classification of defects appearing in GIS installations is performed using the Bayes classifier combined with genetic algorithms and is compared to the performance of the other classifiers, including minimal-distance, percent score and polynomial classifiers. Tests with a reference database of more than 600 individual measurements collected during laboratory experiments gave satisfactory results of the classification process",2000,0, 1541,A novel fuzzy logic system based on N-version programming,"For the consideration of different application systems, modeling the fuzzy logic rule, and deciding the shape of membership functions are very critical issues because they play key roles in the design of a fuzzy logic control system. This paper proposes a novel design methodology of a fuzzy logic control system using the neural network and fault-tolerant approaches. The connectionist architecture with the learning capability of a neural network and N-version programming development of a fault-tolerant technique are implemented in the proposed fuzzy logic control system. In other words, this research involves the modeling of parameterized membership functions and the partition of fuzzy linguistic variables using neural networks trained by the unsupervised learning algorithms. Based on the self-organizing algorithm, the membership function and partition of fuzzy class are not only derived automatically, but also the preconditions of fuzzy IF-THEN rules are organized. We also provide two examples, pattern recognition and tendency prediction, to demonstrate that the proposed system has a higher computational performance and its parallel architecture supports noise-tolerant capability. This generalized scheme is very satisfactory for pattern recognition and tendency prediction problems",2000,0, 1542,Real-time image on QoS Web,"Digital images have become a dominant information source on the Web. Emerging Web-enabled applications, such as global collaboration in environmental studies, point and click manufacturing, and electronic commerce, to name a few, push for timely processing and transmission of images. This real-time imaging is much more than variations on image processing without regard to time. Its performance requirement on networking is beyond the capability of the current Internet, which provides a best-effort service model compromised with end-system traffic management. For the Web to support real-time images, we need to understand real-time image features, design a new Web service model, and convert a user's application requirement into a system service agreement. This paper presents a Web architecture that provides predictable and different service levels for specific quality of service (QoS) requirements, called QoS Web. We examine the Web image features in depth, which leads to a generic formula describing a user's preferences for image quality and timing constraints. It then maps onto the QoS requirements as resource parameters expected from the Web. The influence of the QoS settings on the perceived image performance is analyzed in theory and tested with experimental simulation. We present a procedure for users to specify their requirements to the QoS Web for real-time image transfer.
In addition, an inverse mapping mechanism is included for re-negotiation when there is a shortage of network resources or the user's requirements change",2000,0, 1543,An analytical model for loop tiling and its solution,"The authors address the problem of estimating the performance of loop tiling, an important program transformation for improved memory hierarchy utilization. We introduce an analytical model for estimating the memory cost of a loop nest as a rational polynomial in tile size variables. We also present a constant-time algorithm for finding an optimal solution to the model (i.e., for selecting optimal tile sizes) for the case of doubly nested loops. This solution can be applied to tiling of three loops by performing an iterative search on the value of the first tile size variable, and using the constant-time algorithm at each point in the search to obtain optimal tile size values for the remaining two loops. Our solution is efficient enough to be used in production-quality optimizing compilers, and has been implemented in the IBM XL Fortran product compilers. This solution can also be used by processor designers to efficiently predict the performance of a set of tiled loops for a range of memory hierarchy parameters",2000,0, 1544,"Adaptive and automated detection of service anomalies in transaction-oriented WANs: network analysis, algorithms, implementation, and deployment","Algorithms and software for proactive and adaptive detection of network/service anomalies (i.e., performance degradations) have been developed, implemented, deployed, and field-tested for transaction-oriented wide area networks (WANs). A real-time anomaly detection system called TRISTAN (transaction instantaneous anomaly notification) has been implemented, and is deployed in the commercially important AT&T transaction access services (TAS) network. TAS is a high volume, multiple service classes, hybrid telecom and data WAN that services transaction traffic in the U.S. and neighboring countries. TRISTAN adaptively and proactively detects network/service performance anomalies in multiple-service-class-based and transaction-oriented networks, where performances of service classes are mutually dependent and correlated, where environmental factors (e.g., nonmanaged or nonmonitored equipment within customer premises) can strongly impact network and service performances. Specifically, TRISTAN implements algorithms that: 1) sample and convert raw transaction records to service-class based performance data in which potential network anomalies are highlighted; 2) automatically construct adaptive and service-class-based performance thresholds from historical transaction records for detecting network and service anomalies; and 3) perform real-time network/service anomaly detection. TRISTAN is demonstrated to be capable of proactively detecting network/service anomalies, which easily elude detection by the traditional alarm-based network monitoring systems.",2000,0, 1545,A comparison of mobile agent and client-server paradigms for information retrieval tasks in virtual enterprises,"In next-generation enterprises it will become increasingly important to retrieve information efficiently and rapidly from widely dispersed sites in a virtual enterprise, and the number of users who wish to do so using wireless and portable devices will increase significantly.
We consider the use of mobile agent technology rather than traditional client-server computing for information retrieval by mobile and wireless users in a virtual enterprise. We argue that, to be successful, mobile agent platforms must coexist with, and be presented to the applications programmer side-by-side with, traditional client-server middleware like CORBA and DCOM, and we sketch a middleware architecture for doing so. We then develop an analytical model that examines the claimed performance benefits of mobile agents over client-server computing for a mobile information retrieval scenario. Our evaluation of the model shows that mobile agents are not always better than client-server calls in terms of average response times; they are only beneficial if the space overhead of the mobile agent code is not too large or if the wireless link connecting the mobile user to the fixed servers of the virtual enterprise is error-prone",2000,0, 1546,Modeling and analysis of software aging and rejuvenation,"Software systems are known to suffer from outages due to transient errors. Recently, the phenomenon of “software aging”, one in which the state of the software system degrades with time, has been reported. To counteract this phenomenon, a proactive approach of fault management, called “software rejuvenation”, has been proposed. This essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. We discuss stochastic models to evaluate the effectiveness of proactive fault management in operational software systems and determine optimal times to perform rejuvenation, for different scenarios. The latter part of the paper deals with measurement-based methodologies to detect software aging and estimate its effect on various system resources. Models are constructed using workload and resource usage data collected from the UNIX operating system over a period of time. The measurement-based models are intended to help development of strategies for software rejuvenation triggered by actual measurements",2000,0, 1547,Flexible processing framework for online event data and software triggering,"The BABAR experiment is the particle detector at the new PEP-II “B factory” facility at the Stanford Linear Accelerator Center. The experiment's goal is the detailed study of the fundamental phenomenon of charge-parity symmetry breaking in B meson decays. Within the detector's online system, the Online Event Processing subsystem (OEP) provides for near real-time processing of data delivered from the data acquisition system's event builder. It supports four principal tasks: the final “Level 3” software trigger, rapid-feedback fast data quality monitoring, event displays, and the final steps in online calibrations. In order to accommodate the expected 2000 Hz event rate, the OEP subsystem runs on a computing farm, receiving events on each node into a shared memory buffer system and coordinating processes performing analysis on this data. Analysis tasks are assigned administrative characteristics providing for protection of data integrity and ensuring that indispensable tasks such as triggering are always performed, while others are performed as resources are available. Data access in OEP uses BABAR's standard offline reconstruction framework.
Thus, notably, the Level 3 trigger software could be developed in advance in the full offline simulation environment and yet be suitable for near-real-time online running with minimal changes",2000,0,1499 1548,A quality control method for nuclear instrumentation and control systems based on software safety prediction,"In the case of safety-related applications like nuclear instrumentation and control (NI&C), safety-oriented quality control is required. The objective of this paper is to present a software safety classification method as a safety-oriented quality control tool. Based on this method, we predict the risk (and thus safety) of software items that are at the core of NI&C systems. Then we classify the software items according to the degree of the risk. The method can be used earlier than at the detailed design phase. Furthermore, the method can also be used in all the development phases without major changes. The proposed method seeks to utilize the measures that can be obtained from the safety analysis and requirements analysis. Using the measures proved to be desirable in a few aspects. The authors have introduced fuzzy approximate reasoning to the classification method because experts' knowledge covers the vague frontiers between good quality and bad quality with linguistic uncertainty and fuzziness. Fuzzy Colored Petri Net (FCPN) is introduced in order to offer a formal framework for the classification method and facilitate the knowledge representation, modification, or verification. Through the proposed quality control method, high-quality NI&C systems can be developed effectively and used safely",2000,0, 1549,A skeleton-based approach for the design and implementation of distributed virtual environments,"It has long been argued that developing distributed software is a difficult and error-prone activity. Based on previous work on design patterns and skeletons, this paper proposes a template-based approach for the high-level design and implementation of distributed virtual environments (DVEs). It describes a methodology and its associated tool, which includes a user interface, a performance analyser and an automatic code generation facility. It also discusses some preliminary results on a surgical training system",2000,0, 1550,A novel automated measurement and control system based on VXIbus for I&D logic analysis,"This paper describes an instrument system which can be easily applied in instrument and data (I&D) simulation and fault detection, especially tracing some minute errors within hardware or software. This novel automated measurement and testing system mainly includes: the control and DSP module, data emulation and generation module and logic analysis module. The set of simulation and testing system is based on the latest VXIbus standard and designed as message based and register based C-size modules covering several sophisticated techniques of direct digital synthesis in the secondary interface. After the frame description of test and measurement architectures, this paper details the hardware techniques and software strategies as well as simulation results",2000,0, 1551,Diagnosis of a continuous dynamic system from distributed measurements,A diagnosis application has been built for a three-tank fluid system. The tank system is equipped with a distributed measurement and control system based on smart transducer nodes with embedded computing and networking capabilities that use the IEEE 1451.1 object model to provide high level sensing and actuation abstractions.
The diagnosis system operates on-line on a workstation that appears on the network as another transducer node. The diagnosis methodology has several aspects that allow distribution of the monitoring and diagnosis functionality on a network of embedded processors. The current application represents the initial phase in building a truly distributed monitoring and diagnosis application,2000,0, 1552,A short circuit current study for the power supply system of Taiwan railway,"The western Taiwan railway consists mainly of a mountain route and an ocean route. Taiwan Railway Administration (TRA) has conducted a series of experiments on the ocean route to identify the possible causes of unknown events which frequently cause trolley contact wire meltdown. The tests conducted include the short circuit fault test within the power supply zone of the Ho Long Substation (Zhu Nan to Tong Xiao) that had the highest probability for the meltdown events. Those test results based on the actual measured maximum short circuit current provide a valuable reference for TRA when comparing against the said events. The Le Blanc transformer is the main transformer of the Taiwan railway electrification system. The Le Blanc transformer mainly transforms the Taiwan Power Company (TPC) generated three phase alternating power supply system (69 kV, 60 Hz) into two single phase alternating power distribution systems (M phase and T phase) (26 kV, 60 Hz) needed for the trolley traction. Since the Le Blanc transformer has a unique winding connection, conventional software for fault analysis will not be able to simulate its internal current and the phase difference between the phase currents. Therefore, besides extracts of the short circuit test results, this work presents an EMTP model based on the Taiwan Railway Substation equivalent circuit model with the Le Blanc transformer. The proposed circuit model can simulate the same short circuit test to verify the actual fault current and accuracy of the equivalent circuit model",2000,0,1730 1553,The future growth trend of neutral currents in three-phase computer power systems caused by voltage sags,"This paper presents a summary of the power quality related concerns associated with the applications of the future growth trend of neutral currents in three-phase computer power systems. These concerns include power system harmonics and neutral current caused by voltage sags and short interruption. The neutral current characteristics under that condition are described. Methods for identifying these problems, analysis, determining their impact on utility and customer are also described. The EMTP simulation, field measurement and experimental results are used to verify the disturbance problems. The application of a regression model to predict the neutral current growth trend during power line disturbances from measured data is proposed",2000,0, 1554,Grading and predicting networked application behaviour,"The user-perceived quality of an application operating over a communication network has a considerable influence on the usefulness of that application. Intuitively, it may be assumed that this quality will be related to the current network performance, but in practice the relationship is often complex and difficult to determine. A scheme has been developed whereby the performance of network applications can be assessed and empirically graded for various controlled network loading conditions.
Given the current network loading conditions on an operational network and information generated by the grading process, it is possible to predict the performance of the network application before the application is actually run. This has the potential for reducing the amount of traffic forced on a network as a consequence of aborted connections.",2000,0, 1555,A MAC protocol with priority splitting algorithm for wireless ATM networks,"This paper proposes a medium access control (MAC) protocol for ensuring the quality of service (QoS) of integrated multimedia services on wireless links. The wireless ATM MAC protocol which incorporates dynamical polling, idle uplink (UL) data channel conversion, piggybacking and an interruptable priority splitting algorithm (named THBPSA) for resolving random access collisions is proposed and named TPICC. The effect of the priority splitting algorithm on the performance of the TPICC is simulated and compared with a counterpart of the TPICC which uses an unprioritised binary splitting algorithm (UBSA) in the RA slots. The effect of an invalid polling detecting (IPD) mechanism on the UL bandwidth efficiency is also simulated. The simulation results show that the THBPSA algorithm ensures a smaller medium access delay for realtime traffic classes than for non-realtime traffic classes. Comparing THBPSA with a priority scheduling scheme which is used in the base station (BS) and features packet time stamps, THBPSA provides realtime traffic classes with much less UL packet delay than non-realtime traffic classes. The UL bandwidth used by the dynamic polling of realtime traffic classes is tolerable",2000,0, 1556,An automated profiling subsystem for QoS-aware services,"The advent of QoS-sensitive Internet applications such as multimedia and the proliferation of priced performance-critical applications such as online trading raise a need for building server systems with guaranteed performance. The new applications must run on different heterogeneous platforms, provide soft performance guarantees commensurate with platform capacity and adapt efficiently to upgrades in platform resources over the system's lifetime. Profiling the application for the purposes of providing QoS guarantees on each new platform becomes a significant undertaking. Automated profiling mechanisms must be built to enable efficient computing of QoS guarantees tailored to platform capacity and facilitate wide deployment of soft performance-guaranteed systems on heterogeneous platforms. The article investigates the design of the automated profiling subsystem: an essential component of future “general-purpose” QoS-sensitive systems. The subsystem estimates application resource requirements and adapts the software transparently to the resource capacity of the underlying platform. A novel aspect of the proposed profiling subsystem is its use of estimation theory for profiling. Resource requirements are estimated by correlating applied workload with online resource utilization measurements. We focus explicitly on profiling server software. The convergence and accuracy of our online profiling techniques are evaluated in the context of an Apache Web server serving both static Web pages and dynamic content.
Results show the viability of using estimation theory for automated online profiling and for achieving QoS guarantees",2000,0, 1557,Understanding the sources of software defects: a filtering approach,"The paper presents a method proposal of how to use product measures and defect data to enable understanding and identification of design and programming constructs that contribute more than expected to the defect statistics. The paper describes a method that can be used to identify the most defect-prone design and programming constructs and the method proposal is illustrated on data collected from a large software project in the telecommunication domain. The example indicates that it is feasible, based on defect data and product measures, to identify the main sources of defects in terms of design and programming constructs. Potential actions to be taken include less usage of particular design and programming constructs, additional resources for verification of the constructs and further education into how to use the constructs",2000,0, 1558,Validation of an approach for improving existing measurement frameworks,"Software organizations are in need of methods to understand, structure, and improve the data their are collecting. We have developed an approach for use when a large number of diverse metrics are already being collected by a software organization (M.G. Mendonca et al., 1998; M.G. Mendonca, 1997). The approach combines two methods. One looks at an organization's measurement framework in a top-down goal-oriented fashion and the other looks at it in a bottom-up data-driven fashion. The top-down method is based on a measurement paradigm called Goal-Question-Metric (GQM). The bottom-up method is based on a data mining technique called Attribute Focusing (AF). A case study was executed to validate this approach and to assess its usefulness in an industrial environment. The top-down and bottom-up methods were applied in the customer satisfaction measurement framework at the IBM Toronto Laboratory. The top-down method was applied to improve the customer satisfaction (CUSTSAT) measurement from the point of view of three data user groups. It identified several new metrics for the interviewed groups, and also contributed to better understanding of the data user needs. The bottom-up method was used to gain new insights into the existing CUSTSAT data. Unexpected associations between key variables prompted new business insights, and revealed problems with the process used to collect and analyze the CUSTSAT data. The paper uses the case study and its results to qualitatively compare our approach against current ad hoc practices used to improve existing measurement frameworks.",2000,0, 1559,A comprehensive evaluation of capture-recapture models for estimating software defect content,"An important requirement to control the inspection of software artifacts is to be able to decide, based on more objective information, whether the inspection can stop or whether it should continue to achieve a suitable level of artifact quality. A prediction of the number of remaining defects in an inspected artifact can be used for decision making. Several studies in software engineering have considered capture-recapture models to make a prediction. However, few studies compare the actual number of remaining defects to the one predicted by a capture-recapture model on real software engineering artifacts. 
The authors focus on traditional inspections and estimate, based on actual inspection data, the degree of accuracy of relevant state-of-the-art capture-recapture models for which statistical estimators exist. In order to assess their robustness, we look at the impact of the number of inspectors and the number of actual defects on the estimators' accuracy based on actual inspection data. Our results show that models are strongly affected by the number of inspectors, and therefore one must consider this factor before using capture-recapture models. When the number of inspectors is too small, no model is sufficiently accurate and underestimation may be substantial. In addition, some models perform better than others in a large number of conditions and plausible reasons are discussed. Based on our analyses, we recommend using a model taking into account that defects have different probabilities of being detected and the corresponding Jackknife Estimator. Furthermore, we calibrate the prediction models based on their relative error, as previously computed on other inspections. We identified theoretical limitations to this approach which were then confirmed by the data",2000,0, 1560,Validating the ISO/IEC 15504 measure of software requirements analysis process capability,"ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). The paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects world-wide over a period of two years. Performance measures on each project were also collected using questionnaires, such as the ability to meet budget commitments and staff productivity. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of different ways: that the measure of capability is not suitable for small organizations or that the SRA process capability has less effect on project performance for small organizations",2000,0, 1561,Digital fault location for parallel double-circuit multi-terminal transmission lines,"Two new methods are proposed for fault point location in parallel double-circuit multi-terminal transmission lines by using voltage and current information from CCVTs and CTs at all terminals. These algorithms take advantage of the fact that the sum of currents flowing into a fault section equals the sum of the currents at all terminals. Algorithm 1 employs an impedance calculation and algorithm 2 employs the current diversion ratio method. Computer simulations are carried out and applications of the proposed methods are discussed. Both algorithms can be applied to all types of fault, such as phase-to-ground and phase-to-phase faults.
As one equation can be used for all types of fault, classification of fault types and selection of faulted phase are not required. Phase components of the line impedance are used directly, so compensation of unbalanced line impedance is not required",2000,0, 1562,2000 IEEE International Conference on Communications. ICC 2000. Global Convergence Through Communications. Conference Record,"The following topics were dealt with: communication theory; equalization and sequence estimation; fading channels; QoS management in wireless networks; signal processing and coding for storage; satellite communication system performance; communications software; network operations and management; multimedia wireless communication; video communications; radio communication channels and systems; space-time coding and processing; synchronization; channel resource allocation; digital signal processing for receivers; multiple access techniques; SATCOM signal design; packet scheduling and discarding; computer communications; turbo codes and iterative decoding; signal detection and estimation; mobility management; advanced signal processing for communications; traffic engineering and modeling; multimedia communications infrastructure; enterprise networking; multiuser detection and interference suppression; personal communications; multiple antennas; CDMA and optical FDM systems; digital subscriber line technology; wireless ATM networks; OFDM and multicarrier systems; network architecture; coding and coded modulation; interference cancellation techniques; WDM networks; resource control in ATM-based networks; routing; communications switching; power control/management; communication quality and reliability; teletraffic and MPLS networks; adaptive channel estimation and equalization; ad hoc networks; switch architecture, design and performance",2000,0, 1563,Implementing enhanced network maintenance for transaction access services: tools and applications,"We describe the design and implementation of automated methodologies and tools that facilitate the implementation of an enhanced maintenance process in order to improve the quality of service offered to customers of the transaction access services (TAS) network. The emphasis of our work is placed on the identification, filtering and correlation of event occurrences that indicate problems that do not necessarily generate alarmed conditions on the conventional network management systems, and on the proactive service/fault management and detection based on dynamically defined violations of the base-lined performance profiles. Such an approach enhances considerably the network management and maintenance process, by identifying possible situations/incidents that may affect the network performance, and by correlating those incidents to the potentially affected elements, services and customer applications",2000,0, 1564,An efficient wire sweep analysis solution for practical applications,"Wire sweep is one of the IC packaging defects to which CAE tools have been applied for years by many researchers. Meanwhile, various methodologies have been introduced to obtain better prediction and matching to the experimental measurements. These studies mostly emphasized the accuracy of the wire sweep calculation. Analysis efficiency, especially for high pin-count packages (100, 208 leads or more), is seldom discussed.
This study introduces a practical wire sweep analysis solution not only to meet the need for accuracy but also to enhance efficiency in actual applications. Coupling the design philosophy of these two needs, an in-house wire sweep analysis software package (InPack) was developed. This software combines global flow analysis (C-MOLD) and structural analysis (ANSYS) to become a solution for general wire sweep evaluation. The wire geometry modeling procedure is neglected to improve the analytical efficiency. In addition, the multi-layer data requisition criteria are captured to simulate the actual flow conditions around a wire inside the mold cavity. With the Select (Input)-Execute-and-See user interface, the cycle time of wire sweep problem analysis is significantly reduced in practical application",2000,0, 1565,QoS mapping along the protocol stack: discussion and preliminary results,"Quality of service (QoS) mechanisms are needed in integrated networks so that the performance requirements of heterogeneous applications are met. We outline a framework for predicting end-to-end QoS at the application layer based on mapping of QoS guarantees across layers in the protocol stack and concatenation of guarantees across multiple dissimilar sub-networks. Mapping is needed to translate QoS guarantees provided in the lower layers into their effects on upper-layer performance indicators. We illustrate the process with some preliminary results for QoS mapping due to segmentation of packets; we conduct a worst-case analysis, generating a closed-form mapping of delay and losses between two layers. Future extensions of this work will take into account flow aggregation processes and the combination of quantitative and qualitative guarantees",2000,0, 1566,Worst-case execution times analysis of MPEG-2 decoding,"Presents the first worst-case execution times (WCET) analysis of MPEG decoding. Solutions for two scenarios, video-on-demand (VoD) and live, are presented, serving as examples for a variety of real-world applications. A significant reduction of over-estimations (down to 17%, including overheads) during WCET analysis of the live scenario can be achieved by using our new two-phase decoder with built-in WCET analysis, which can be universally applied. It is even possible to predict the exact execution times in the VoD scenario. This work is motivated by the fact that media streaming service providers are under great pressure to fulfil the quality of service promised to their customers, preferably in an efficient way",2000,0, 1567,Scheduling algorithms for dynamic message streams with distance constraints in TDMA protocol,"In many real-time communication applications, predictable and guaranteed timeliness is one of the critical components of the quality of service (QoS) requirements. In this paper, we propose a new real-time message model with both rate requirements and distance constraints. Two algorithms are presented to schedule dynamic real-time message streams in a TDMA (time division multiple access) frame based on different scheduling policies by making greedy choices or optimization choices. The performance of the two algorithms is evaluated and compared in terms of time complexity, acceptance ratio and scheduling jitter via simulation",2000,0, 1568,Second generation DSP software for picture rate conversion,"Great progress has been made in motion estimation (ME). This has led to a high-quality second-generation scan-rate conversion (SRC) algorithm which runs on a commercially available programmable platform.
The new system has a higher quality of motion estimation/compensation and de-interlacing than the first generation. The paper presents the new algorithm, and highlights the advances in performance.",2000,0, 1569,Classification-tree models of software-quality over multiple releases,"This paper presents an empirical study that evaluates software-quality models over several releases, to address the question, “How long will a model yield useful predictions?” The classification and regression trees (CART) algorithm is introduced; CART can achieve a preferred balance between the two types of misclassification rates. This is desirable because misclassification of fault-prone modules often has much more severe consequences than misclassification of those that are not fault-prone. The case-study developed 2 classification-tree models based on 4 consecutive releases of a very large legacy telecommunication system. Forty-two software product, process and execution metrics were candidate predictors. Model 1 used measurements of the first release as the training data set; this model had 11 important predictors. Model 2 used measurements of the second release as the training data set; this model had 15 important predictors. Measurements of subsequent releases were evaluation data sets. Analysis of the models' predictors yielded insights into various software development practices. Both models had accuracy that would be useful to developers. One might suppose that software-quality models lose their value very quickly over successive releases due to evolution of the product and the underlying development processes. The authors found the models remained useful over all the releases studied",2000,0,1449 1570,Failure correlation in software reliability models,"Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that although there are practical situations in which this assumption could be easily violated, much of the published literature on software reliability modeling does not seriously address this issue. The research work in this paper is devoted to developing the software reliability modeling framework that can consider the phenomenon of failure correlation and to study its effects on the software reliability measures. The important property of the developed Markov renewal modeling approach is its flexibility. It allows construction of the software reliability model in both discrete time and continuous time, and (depending on the goals) to base the analysis either on Markov chain theory or on renewal process theory. Thus, their modeling approach is an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models. Many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. This paper aims at showing that the classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, viz, failure correlation. It does not deal with inference or with predictions, per se.
For the model to be fully specified and applied to estimations and predictions in real software development projects, we need to address many research issues, e.g., the detailed assumptions about the nature of the overall reliability growth and the way modeling parameters change as a result of the fault-removal attempts",2000,0,1453 1571,Using simulation for assessing the real impact of test-coverage on defect-coverage,"The use of test-coverage measures (e.g., block-coverage) to control the software test process has become an increasingly common practice. This is justified by the assumption that higher test-coverage helps achieve higher defect-coverage and therefore improves software quality. In practice, data often show that defect-coverage and test-coverage grow over time, as additional testing is performed. However, it is unclear whether this phenomenon of concurrent growth can be attributed to a causal dependency, or if it is coincidental, simply due to the cumulative nature of both measures. Answering such a question is important as it determines whether a given test-coverage measure should be monitored for quality control and used to drive testing. Although there is no general answer to this problem, a procedure is proposed to investigate whether any test-coverage criterion has a genuine additional impact on defect-coverage when compared to the impact of just running additional test cases. This procedure applies in typical testing conditions where the software is tested once, according to a given strategy, and coverage measures are collected along with defect data. This procedure is tested on published data, and the results are compared with the original findings. The study outcomes do not support the assumption of a causal dependency between test-coverage and defect-coverage, a result for which several plausible explanations are provided",2000,0,1452 1572,A low-cost power quality meter for utility and consumer assessments,"Power quality has become a major concern, in particular, to domestic consumers because the reliability, efficiency and liability of their devices very much rely on the quality of the electric supply. However, up to now, there is no cost-effective instrument for power quality measurement. This is partially due to the lack of a world-wide accepted indicator, the power performance index (PPI) in the authors' case, and partially due to the conventionally high cost of power quality measurement equipment. In this paper, a good performance index, PPI, is proposed for consideration by the power industry. Individual components arriving at the PPI are defined and the method of calculation is explained in detail. Such a PPI will also be useful to assess the supply quality of independent power providers, especially for deregulation purposes. All algorithms have been implemented on a cost-effective power quality meter (PQM), and both hardware and software designs are described in this paper",2000,0, 1573,Micro-checkpointing: checkpointing for multithreaded applications,"In this paper we introduce an efficient technique for checkpointing multithreaded applications. Our approach makes use of processes constructed around the ARMOR (Adaptive Reconfigurable Mobile Objects of Reliability) paradigm implemented in our Chameleon testbed. ARMOR processes are composed of disjoint elements (objects) with controlled manipulation of element state. These characteristics of ARMORS allow the process state to be collected during runtime in an efficient manner and saved to disk when necessary.
We call this approach micro-checkpointing. We demonstrate micro-checkpointing in the Chameleon testbed, an environment for developing reliable distributed applications. Our results show that the overhead ranges between 39% and 141% with an aggressive checkpointing policy, depending upon the degree to which the process conforms to our ARMOR paradigm",2000,0, 1574,Evaluating the effectiveness of a software fault-tolerance technique on RISC- and CISC-based architectures,"This paper deals with a method able to provide a microprocessor-based system with safety capabilities by modifying the source code of the executed application only. The method exploits a set of transformations which can automatically be applied, thus greatly reducing the cost of designing a safe system, and increasing the confidence in its correctness. Fault Injection experiments have been performed on a sample application using two different systems based on CISC and RISC processors. Results demonstrate that the method's effectiveness is rather independent of the adopted platform",2000,0, 1575,Rerouting for handover in mobile networks with connection-oriented backbones: an experimental testbed,"The rerouting of connections for handover in a broadband mobile cellular network is investigated. We address networks with a connection-oriented backbone, which supports quality-of-service (QoS). Moreover, an IP-style multicast on top of the connection-oriented network is assumed. We advocate utilizing the IP-style multicast in order to reroute connections for handover. Three rerouting schemes are proposed, which are based on the IP-style multicast. One of the schemes realizes a predictive approach, where a call is pre-established to potential new base stations by setting up a multicast tree with neighboring base stations as leaf nodes. This approach reduces the handover latency caused by rerouting to an absolute minimum and improves communication with strict time constraints and frequent handover. For experimental investigations of the rerouting schemes a testbed is set up, which is based on an open, nonproprietary, experimental networking environment. Its signaling software offers direct many-to-many communication, rather than requiring one-to-many overlays, and the switching hardware supports scalable and efficient multicast by cell-recycling. The testbed is used to prove the feasibility of the approach and for performance evaluation of the rerouting scheme",2000,0, 1576,Assessment of the applicability of COTS microprocessors in high-confidence computing systems: a case study,Commercial-off-the-shelf (COTS) components are increasingly used in building high-confidence systems to assure their dependability in an affordable way. The effectiveness of such a COTS-based design critically depends on the design of the COTS components. Their dependability attributes need a thorough understanding and rigorous assessment yet have received relatively little attention. The research presented in this paper investigates the error detection and recovery features of contemporary high-performance COTS microprocessors. A method of assessment is proposed and two state-of-the-art microprocessors are studied and compared,2000,0, 1577,Software-implemented fault detection for high-performance space applications,"We describe and test a software approach to overcoming radiation-induced errors in spaceborne applications running on commercial off-the-shelf components.
The approach uses checksum methods to validate results returned by a numerical subroutine operating subject to unpredictable errors in data. We can treat subroutines that return results satisfying a necessary condition having a linear form; the checksum tests compliance with this condition. We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision numerical calculations. We test both the general effectiveness of the linear fault tolerant schemes we propose, and the correct behavior of our parallel implementation of them",2000,0, 1578,Fault-tolerant execution of mobile agents,"In this paper, we will address the list of problems that have to be solved in mobile agent systems and we will present a set of fault-tolerance techniques that can increase the robustness of agent-based applications without introducing a high performance overhead. The framework includes a set of schemes for failure detection, checkpointing and restart, software rejuvenation, a resource-aware atomic migration protocol, a reconfigurable itinerary, a protocol that prevents agents from getting caught in node failures and a simple scheme to deal with network partitions. Finally, we will present some performance results that show the effectiveness of these fault-tolerance techniques",2000,0, 1579,Loki: a state-driven fault injector for distributed systems,"Distributed applications can fail in subtle ways that depend on the state of multiple parts of a system. This complicates the validation of such systems via fault injection, since it suggests that faults should be injected based on the global state of the system. In Loki, fault injection is performed based on a partial view of the global state of a distributed system, i.e. faults injected in one node of the system can depend on the state of other nodes. Once faults are injected, a post-runtime analysis, using off-line clock synchronization, is used to place events and injections on a single global timeline and to determine whether the intended faults were properly injected. Finally, experiments containing successful fault injections are used to estimate the specified measures. In addition to briefly reviewing the concepts behind Loki and its organization, we detail Loki's user interface. In particular, we describe the graphical user interfaces for specifying state machines and faults, for executing a campaign and for verifying whether the faults were properly injected",2000,0, 1580,Joint evaluation of performance and robustness of a COTS DBMS through fault-injection,"Presents and discusses observed failure modes of a commercial off-the-shelf (COTS) database management system (DBMS) in the presence of transient operational faults induced by SWIFI (software-implemented fault injection). The Transaction Processing Performance Council (TPC) standard TPC-C benchmark and its associated environment are used, together with fault-injection technology, building a framework that discloses both dependability and performance figures. Over 1600 faults were injected in the database server of a client/server computing environment built on the Oracle 8.1.5 database engine and Windows NT running on COTS machines with Intel Pentium processors.
A macroscopic view of the impact of faults revealed that: (1) a large majority of the faults caused no observable abnormal impact in the database server (in 96% of hardware faults and 80% of software faults, the database server behaved normally); (2) software faults are more prone to letting the database server hang or to causing abnormal terminations; (3) up to 51% of software faults led to observable failures in the client processes",2000,0, 1581,Designing high-performance and reliable superscalar architectures-the out of order reliable superscalar (O3RS) approach,"As VLSI geometry continues to shrink and the level of integration increases, it is expected that the probability of faults, particularly transient faults, will increase in future microprocessors. So far, fault tolerance has chiefly been considered for special purpose or safety critical systems, but future technology will likely require integrating fault tolerance techniques into commercial systems. Such systems require low cost solutions that are transparent to the system operation and do not degrade overall performance. This paper introduces a new superscalar architecture, termed O3RS, that aims to incorporate such simple fault tolerance mechanisms as part of the basic architecture",2000,0, 1582,A fault tolerance infrastructure for dependable computing with high-performance COTS components,"The failure rates of current COTS processors have dropped to 100 FITs (failures per 10^9 hours), indicating a potential MTTF of over 1100 years. However our recent study of Intel P6 family processors has shown that they have very limited error detection and recovery capabilities and contain numerous design faults (“errata”). Other limitations are susceptibility to transient faults and uncertainty about “wearout” that could increase the failure rate over time. Because of these limitations, an external fault tolerance infrastructure is needed to assure the dependability of a system with such COTS components. The paper describes a fault-tolerant “infrastructure”: a system of fault tolerance functions that makes possible the use of low-coverage COTS processors in a fault-tolerant, self-repairing system. The custom hardware supports transient recovery, design fault tolerance, and self-repair by sparing and replacement. Fault tolerance functions are implemented by four types of hardware elements, which are processors of low complexity that are themselves fault-tolerant. High error detection coverage, including design faults, is attained by diversity and replication",2000,0, 1583,Benchmarking anomaly-based detection systems,"Anomaly detection is a key element of intrusion detection and other detection systems in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. Because most anomaly detectors are based on probabilistic algorithms that exploit the intrinsic structure (or regularity) embedded in data logs, a fundamental question is whether or not such structure influences detection performance. If detector performance is indeed a function of environmental regularity, it would be critical to match detectors to environmental characteristics. In intrusion-detection settings, however, this is not done, possibly because such characteristics are not easily ascertained. This paper introduces a metric for characterizing structure in data environments, and tests the hypothesis that intrinsic structure influences probabilistic detection.
In a series of experiments, an anomaly detection algorithm was applied to a benchmark suite of 165 carefully calibrated, anomaly-injected data sets of varying structure. The results showed performance differences of as much as an order of magnitude, indicating that current approaches to anomaly detection may not be universally dependable",2000,0, 1584,Formal modeling and analysis of atomic commitment protocols,"The formal specification and mechanical verification of an atomic commitment protocol (ACP) for distributed real-time and fault-tolerant databases is presented. As an example, the non-blocking ACP of Babaoglu and Toueg (1993) is analyzed. An error in their termination protocol for recovered participants has been detected. We propose a new termination protocol which has been proved correct formally. To stay close to the original formulation of the protocol, timed state machines are used to specify the processes, whereas the communication mechanism between processes is defined using assertions. Formal verification has been performed incrementally: adding recovery from crashes only after having proved the basic protocol. The verification system PVS was used to deal with the complexity of this fault-tolerant protocol",2000,0, 1585,Evaluation of gradient descent learning algorithms with an adaptive local rate technique for hierarchical feedforward architectures,"Gradient descent learning algorithms (namely backpropagation and weight perturbation) can significantly increase their classification performances by adopting a local and adaptive learning rate management approach. We present the results of the comparison of the classification performance of the two algorithms in a tough application: quality control analysis in the steel industry. The feedforward network is hierarchically organized (i.e. tree of multilayer perceptrons). The comparison has been performed starting from the same operating conditions (i.e. network topology, stopping criterion, etc.): the results show that the probability of correct classification is significantly better for the weight perturbation algorithm",2000,0, 1586,A study in dynamic neural control of semiconductor fabrication processes,"This paper describes a generic dynamic control system designed for use in semiconductor fabrication process control. The controller is designed for any batch silicon wafer process that is run on equipment having a high number of variables that are under operator control. These controlled variables include both equipment state variables such as power, temperature, etc., and the repair, replacement, or maintenance of equipment parts, which cause parameter drift of the machine over time. The controller consists of three principal components: 1) an automatically updating database, 2) a neural-network prediction model for the prediction of process quality based on both equipment state variables and parts usage, and 3) an optimization algorithm designed to determine the optimal change of controllable inputs that yield a reduced operation cost, in-control solution. The optimizer suggests a set of least cost and least effort alternatives for the equipment engineer or operator. The controller is a PC-driven software solution that resides outside the equipment and does not mandate implementation of recommendations in order to function correctly. The neural model base continues to learn and improve over time. An example of the dynamic process control tool performance is presented retrospectively for a plasma etch system. 
In this study, the neural networks exhibited overall accuracy to within 20% of the observed values of .986, .938, and .87 for the output quality variables of etch rate, standard deviation, and selectivity, respectively, based on a total sample size of 148 records. The control unit was able to accurately detect the need for parts replacements and wet clean operations in 34 of 40 operations. The controller suggested chamber state variable changes which either improved performance of the output quality variables or adjusted the input variable to a lower cost level without impairment of output quality",2000,0, 1587,An analysis of a supply chain management agent architecture,"The authors illustrate a methodology for early agent systems architecture analysis and evaluation with the focus on risk identification, evaluation, and mitigation. Architectural decisions on software qualities such as performance, modifiability, and security can be assessed. The illustration is drawn from the supply chain management application domain",2000,0, 1588,Simplex minimisation for multiple-reference motion estimation,"This paper investigates the properties of the multiple-reference block motion field. Guided by the results of this investigation, the paper proposes three fast multiple-reference block matching motion estimation algorithms. The proposed algorithms are extensions of the single-reference simplex minimisation search (SMS) algorithm. The algorithms provide different degrees of compromise between prediction quality and computational complexity. Simulation results using a multi-frame memory of 50 frames indicate that the proposed multiple-reference SMS algorithms have a computational complexity comparable to that of single-reference full-search while still maintaining the prediction gain of multiple-reference motion estimation",2000,0, 1589,Finding faces in wavelet domain for content-based coding of color images,"Human face images form an important database in police departments, banks and security kiosks, and they are also found in abundance in day-to-day life. In these databases the important content, of course, is the face region. We present a highly efficient system that detects the human faces in the wavelet transform domain for discriminative quantization to achieve a high-perceptual-quality content-based image coding technique. The proposed method gives superior subjective performance over JPEG without sacrificing the performance in the rate-distortion spectrum",2000,0, 1590,Microprocessor-based busbar protection system,"The design, implementation and testing of a microprocessor-based busbar protection system are presented. The proposed system employs a protection technique that uses the positive- and negative-sequence models of the power system. The system is composed of delta-impedance relays which implement a fault-detection algorithm, based on the developed technique. The algorithm, hardware and software of the relays are described. Performance of the proposed system was checked in the laboratory; the testing procedure and test results are presented",2000,0, 1591,Multiphysics modelling for electronics design,"The future of many companies will depend to a large extent on their ability to initiate techniques that bring schedules, performance, tests, support, production, life-cycle-costs, reliability prediction and quality control into the earliest stages of the product creation process.
Important questions for an engineer who is responsible for the quality of electronic parts such as printed circuit boards (PCBs) during design, production, assembly and after-sales support are: What is the impact of temperature? What is the impact of this temperature on the stress produced in the components? What is the electromagnetic compatibility (EMC) associated with such a design? At present, thermal, stress and EMC calculations are undertaken using different software tools that each require model build and meshing. This leads to a large investment in time, and hence cost, to undertake each of these simulations. This paper discusses the progression towards a fully integrated software environment, based on a common data model and user interface, having the capability to predict temperature, stress and EMC fields in a coupled manner. Such a modelling environment used early within the design stage of an electronic product will provide engineers with fast solutions to questions regarding thermal, stress and EMC issues. The paper concentrates on recent developments in creating such an integrated modeling environment with preliminary results from the analyses conducted. Further research into the thermal and stress related aspects of the paper is being conducted under a nationally funded project, while their application in reliability prediction will be addressed in a new European project called PROFIT",2000,0, 1592,A comparison of using icepak and flotherm in electronic cooling,"Simulation software is a good tool to reduce design cycles and prototypes. The accuracy of the results depends on the users' inputs. Software can provide very accurate results when the inputs are reliable. Software can provide very poor results if the model, boundary conditions, or solution parameters are not represented properly. The user, not the software, is responsible for the accuracy of the results. To adequately use software, the user must become familiar with the built-in functions and their limits of applicability. You must have a complete understanding of thermal theory so you can judge when a calculated result is in error based on an error in your input. Both Flotherm and Icepak are popular thermal analysis computational fluid dynamics (CFD) software packages in electronic cooling. The scope of this comparison is not to be used by the reader as the only basis for selecting a CFD tool. The first thermal study was conducted using Icepak and Flotherm to calculate heat transfer from an aluminum plate in a natural convection and forced convection (turbulent flow, 300 fpm) environment. Three different types of mesh sizes were compared to theory and experimental data. The second study was to simulate the thermal performance of an RF power amplifier in a steady state forced convection environment. Both Icepak and Flotherm can provide almost the same level of performance. The quality of the prediction depends on the users' background and experience.",2000,0, 1593,A fault locator for radial subtransmission and distribution lines,"This paper presents the design and development of a prototype fault locator that estimates the location of shunt faults on radial subtransmission and distribution lines. The fault location technique is based on the fundamental frequency component of voltages and currents measured at the line terminal. Extensive sensitivity studies of the fault location technique have been done; some of the results are included in the paper. Hardware and software of the fault locator are described.
The fault locator was tested in the laboratory using simulated fault data and a real-time playback simulator. Some results from the tests are presented. Results indicate that the fault location technique has acceptable accuracy, is robust and can be implemented with the available technology",2000,0, 1594,Bypass: a tool for building split execution systems,"Split execution is a common model for providing a friendly environment on a foreign machine. In this model, a remotely executing process sends some or all of its system calls back to a home environment for execution. Unfortunately, hand-coding split execution systems for experimentation and research is difficult and error-prone. We have built a tool, called Bypass, for quickly producing portable and correct split execution systems for unmodified legacy applications. We demonstrate Bypass by using it to transparently connect a POSIX application to a simple data staging system based on the Globus toolkit",2000,0, 1595,Using idle workstations to implement predictive prefetching,"The benefits of Markov-based predictive prefetching have been largely overshadowed by the overhead required to produce high-quality predictions. While both theoretical and simulation results for prediction algorithms appear promising, substantial limitations exist in practice. This outcome can be partially attributed to the fact that practical implementations ultimately make compromises in order to reduce overhead. These compromises limit the level of algorithm complexity, the variety of access patterns and the granularity of trace data that the implementation supports. This paper describes the design and implementation of GMS-3P (Global Memory System with Parallel Predictive Prefetching), an operating system kernel extension that offloads prediction overhead to idle network nodes. GMS-3P builds on the GMS global memory system, which pages to and from remote workstation memory. In GMS-3P, the target node sends an online trace of an application's page faults to an idle node that is running a Markov-based prediction algorithm. The prediction node then uses GMS to prefetch pages to the target node from the memory of other workstations in the network. Our preliminary results show that predictive prefetching can reduce the remote-memory page fault time by 60% or more and that, by offloading prediction overhead to an idle node, GMS-3P can reduce this improved latency by between 24% and 44%, depending on the Markov model order",2000,0, 1596,A monitoring sensor management system for grid environments,"Large distributed systems, such as computational grids, require a large amount of monitoring data be collected for a variety of tasks, such as fault detection, performance analysis, performance tuning, performance prediction and scheduling. Ensuring that all necessary monitoring is turned on and that the data is being collected can be a very tedious and error-prone task. We have developed an agent-based system to automate the execution of monitoring sensors and the collection of event data",2000,0, 1597,Optimal and suboptimal reliable scheduling of precedence-constrained tasks in heterogeneous distributed computing,"Introduces algorithms which can produce both optimal and suboptimal task assignments to minimize the probability of failure of an application executing on a heterogeneous distributed computing system. A cost function which defines this probability under a given task assignment is derived. 
To find optimal and suboptimal task assignments efficiently, a reliable matching and scheduling problem is converted into a state-space search problem in which the cost function derived is used to guide the search. The A* algorithm for finding optimal task assignments and the A*m and hill-climbing algorithms for finding suboptimal task assignments are presented. Simulation results are provided to confirm the performance of the proposed algorithms",2000,0, 1598,How perspective-based reading can improve requirements inspections,"Because defects constitute an unavoidable aspect of software development, discovering and removing them early is crucial. Overlooked defects (like faults in the software system requirements, design, or code) propagate to subsequent development phases where detecting and correcting them becomes more difficult. At best, developers will eventually catch the defects, but at the expense of schedule delays and additional product-development costs. At worst, the defects will remain, and customers will receive a faulty product. The authors explain their perspective based reading (PBR) technique that provides a set of procedures to help developers solve software requirements inspection problems. PBR reviewers stand in for specific stakeholders in the document to verify the quality of requirements specifications. The authors show how PBR leads to improved defect detection rates for both individual reviewers and review teams working with unfamiliar application domains.",2000,0, 1599,Wireless communications based system to monitor performance of rail vehicles,"This paper describes a recently developed remote monitoring system, based on a combination of embedded computing, digital signal processing, wireless communications, GPS, and GIS technologies. The system includes onboard platforms installed on each monitored vehicle and a central station located in an office. Each onboard platform detects various events onboard a moving vehicle, tags them with time and location information, and delivers the data to an office through wireless communications channels. The central station logs the data into a database and displays the location and status of each vehicle, as well as detected events, on a map. Waveform traces from all sensor channels can be sent with each event and can be viewed by the central station operator. The system provides two-way wireless communication between the central station and mobile onboard platforms. Depending on coverage requirements and customer preferences, communication can be provided through satellite, circuit-switch cellular, digital wireless communication links or a combination of these methods. Settings and software changes may be made remotely from the central station, eliminating the need to capture the monitored vehicle. The onboard platform can be configured for installation on any rail vehicle, including locomotives, passenger cars and freight cars. Depending on the application, the onboard platform can monitor either its own sensors or existing onboard sensors. The system has been used for several railroad applications including ride quality measurement, high cant deficiency monitoring, truck hunting detection, and locomotive health monitoring. 
The paper describes the system and these applications, and discusses some of the results",2000,0, 1600,A replicated assessment and comparison of common software cost modeling techniques,"Delivering a software product on time, within budget, and to an agreed level of quality is a critical concern for many software organizations. Underestimating software costs can have detrimental effects on the quality of the delivered software and thus on a company's business reputation and competitiveness. On the other hand, overestimation of software cost can result in missed opportunities to fund other projects. In response to industry demand, a myriad of estimation techniques has been proposed during the last three decades. In order to assess the suitability of a technique from a diverse selection, its performance and relative merits must be compared. The current study replicates a comprehensive comparison of common estimation techniques within different organizational contexts, using data from the European Space Agency. Our study is motivated by the challenge of assessing the feasibility of using multi-organization data to build cost models and the benefits gained from company-specific data collection. Using the European Space Agency data set, we investigated a yet unexplored application domain, including military and space projects. The results showed that traditional techniques, namely ordinary least-squares regression and analysis of variance, outperformed analogy-based estimation and regression trees. Consistent with the results of the replicated study, no significant difference was found in accuracy between estimates derived from company-specific data and estimates derived from multi-organizational data.",2000,0, 1601,A case study in root cause defect analysis,"There are three interdependent factors that drive our software development processes: interval, quality and cost. As market pressures continue to demand new features ever more rapidly, the challenge is to meet those demands while increasing, or at least not sacrificing, quality. One advantage of defect prevention as an upstream quality improvement practice is the beneficial effect it can have on interval: higher quality early in the process results in fewer defects to be found and repaired in the later parts of the process, thus causing an indirect interval reduction. We report a retrospective root cause defect analysis study of the defect Modification Requests (MRs) discovered while building, testing, and deploying a release of a transmission network element product. We subsequently introduced this analysis methodology into new development projects as an in-process measurement collection requirement for each major defect MR. We present the experimental design of our case study, discussing the novel approach we have taken to defect and root cause classification and the mechanisms we have used for randomly selecting the MRs to analyze and collecting the analyses via a Web interface. We then present the results of our analyses of the MRs and describe the defects and root causes that we found, and delineate the countermeasures created to either prevent those defects and their root causes or detect them at the earliest possible point in the development process.
We conclude with lessons learned from the case study and resulting ongoing improvement activities.",2000,0, 1602,Characterizing implicit information during peer review meetings,"Disciplines like software engineering evolve over time by studying some practices and feeding back those results to improve the practice. The empirical approach presented in the paper is used to analyze the nature of the information shared during peer review meetings (PRMs) held in an industrial software engineering project. The results obtained show that although a PRM is categorized as a verification practice, it is also a golden opportunity for project personnel to share information about technical solutions, a decision's rationale or quality guidelines. The contribution of PRMs is not restricted to the rapid detection of anomalies; they also provide the opportunity for project team members to share implicit information. PRM efficiency cannot solely be measured through an anomaly detection rate.",2000,0, 1603,Lessons learned from teaching reflective software engineering using the Leap toolkit,"After using and teaching the Personal Software Process (PSP) (W.S. Humphrey, 1995) for over four years, the author came to appreciate the insights and benefits that it produced. However, there were some general problems with the PSP. These problems led him to begin work on an alternative software process improvement method called reflective software engineering. Like the PSP, reflective software engineering is based upon a simple idea: people learn best from their own experience. Reflective software engineering supports experience based improvement in developers' professional activities by helping the developer structure their experience, record it, and analyze it. Unlike the PSP, reflective software engineering is designed around the presence of extensive automated support. The support is provided by a Java based toolkit called ""Leap"" . The kinds of structured insights and experiences users can record with Leap include: the size of the work product; the time it takes to develop it; the defects that the user or others find in it; the patterns that they discover during the development of it; checklists that they use during or design as a result of the project; estimates for time or size that they generate during the project; and the goals, questions, and measures that the user uses to motivate the data recording.",2000,0, 1604,Estimating software fault-proneness for tuning testing activities,"The article investigates whether a correlation exists between the fault-proneness of software and the measurable attributes of the code (i.e. the static metrics) and of the testing (i.e. the dynamic metrics). The article also studies how to use such data for tuning the testing process. The goal is not to find a general solution to the problem (a solution may not even exist), but to investigate the scope of specific solutions, i.e., to what extent homogeneity of the development process, organization, environment and application domain allows data computed on past projects to be projected onto new projects. 
A suitable variety of case studies is selected to investigate a methodology applicable to classes of homogeneous products, rather than investigating whether a specific solution exists for a few cases.",2000,0, 1605,Automatic image event segmentation and quality screening for albuming applications,"In this paper, a system for automatic albuming of consumer photographs is described and its specific core components of event segmentation and screening of low quality images are discussed. A novel event segmentation algorithm was created to automatically cluster pictures into events and sub-events for albuming, based on date/time metadata information as well as color content of the pictures. A new quality-screening method is developed based on object quality to detect problematic images due to underexposure, low contrast, and camera defocus or movement. Performance testing of these algorithms was conducted using a database of real consumer photos and showed that these functions provide a useful first-cut album layout for typical rolls of consumer pictures. A first version of the automatic albuming application software was tested through a consumer trial in the United States from August to December 1999",2000,0, 1606,Induction machine condition monitoring with higher order spectra,"This paper describes a novel method of detecting and unambiguously diagnosing the type and magnitude of three induction machine fault conditions from the single sensor measurement of the radial electromagnetic machine vibration. The detection mechanism is based on the hypothesis that the induction machine can be considered as a simple system, and that the action of the fault conditions is to alter the output of the system in a characteristic and predictable fashion. Further, the change in output and fault condition can be correlated, allowing explicit fault identification. Using this technique, there is no requirement for a priori data describing machine fault conditions; the method is equally applicable to both sinusoidally and inverter-fed induction machines and is generally invariant of both the induction machine load and speed. The detection mechanisms are rigorously examined theoretically and experimentally, and it is shown that a robust and reliable induction machine condition-monitoring system has been produced. Further, this technique is developed into a software-based automated commercially applicable system",2000,0, 1607,Comparative investigation of diagnostic media for induction motors: a case of rotor cage faults,"Results of a comparative experimental investigation of various media for noninvasive diagnosis of rotor faults in induction motors are presented. Stator voltages and currents in an induction motor were measured, recorded, and employed for computation of the partial and total input powers and of the estimated torque. Waveforms of the current, partial powers pAB and pCB, total power, and estimated torque were subsequently analyzed using the fast Fourier transform. Several rotor cage faults of increasing severity were studied with various load levels. The partial input power pCB was observed to exhibit the highest sensitivity to rotor faults. This medium is also the most reliable, as it includes a multiplicity of fault-induced spectral components",2000,0, 1608,Automated security checking and patching using TestTalk,"In many computer system security incidents, attackers successfully intruded into computer systems by exploiting known weaknesses.
Those computer systems remained vulnerable even after the vulnerabilities were known because it requires constant attention to stay on top of security updates. It is often both time-consuming and error-prone to manually apply security patches to deployed systems. To solve this problem, we propose to develop a framework for automated security checking and patching. The framework, named Securibot, provides a self-operating mechanism for security checking and patching. Securibot performs security testing using security profiles and security updates. It can also detect compromised systems using attack signatures. Most important, the Securibot framework allows system vendors to publish recently discovered security weaknesses and new patches in a machine-readable form so that the Securibot system running on deployed systems can automatically check out security updates and apply the patches.",2000,0, 1609,Toward the automatic assessment of evolvability for reusable class libraries,"Many sources agree that managing the evolution of an OO system constitutes a complex and resource-consuming task. This is particularly true for reusable class libraries, as the user interface must be preserved to allow for version compatibility. Thus, the symptomatic detection of potential instabilities during the design phase of such libraries may serve to avoid later problems. This paper presents a fuzzy logic-based approach for evaluating the interface stability of a reusable class library, by using structural metrics as stability indicators.",2000,0, 1610,System-level test bench generation in a co-design framework,"Co-design tools represent an effective solution for reducing costs and shortening time-to-market, when System-on-Chip design is considered. In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test benches, which can be used during every design step, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying design bugs early in the design flow, thus reducing the costs and increasing the final product quality. The paper proposes an approach for integrating the ability to generate test benches into an existing co-design tool. Suitable metrics are proposed to guide the generation, and preliminary experimental results are reported, assessing the effectiveness of the proposed technique",2000,0, 1611,DOORS: towards high-performance fault tolerant CORBA,"An increasing number of applications are being developed using distributed object computing middleware, such as CORBA. Many of these applications require the underlying middleware, operating systems, and networks to provide end-to-end quality of service (QoS) support to enhance their efficiency, predictability, scalability, and fault tolerance. The Object Management Group (OMG), which standardizes CORBA, has addressed many of these application requirements in the Real-time CORBA and Fault-Tolerant CORBA specifications. We provide four contributions to the study of fault-tolerant CORBA middleware for performance-sensitive applications. First, we provide an overview of the Fault Tolerant CORBA specification. Second, we describe a framework called DOORS, which is implemented as a CORBA service to provide end-to-end application-level fault tolerance. Third, we outline how the DOORS' reliability and fault-tolerance model has been incorporated into the standard OMG Fault-tolerant CORBA specification. 
Finally, we outline the requirements for CORBA ORB core and higher-level services to support the Fault Tolerant CORBA specification efficiently",2000,0, 1612,Software product improvement with inspection. A large-scale experiment on the influence of inspection processes on defect detection in software requirements documents,"In the early stages of software development, inspection of software documents is the most effective quality assurance measure to detect defects and provides timely feedback on quality to developers and managers. The paper reports on a controlled experiment that investigates the effect of defect detection techniques on software product and inspection process quality. The experiment compares defect detection effectiveness and efficiency of a general reading technique that uses checklist based reading, and a systematic reading technique, scenario based reading, for requirements documents. On the individual level, effectiveness was found to be higher for the general reading technique, while the focus of the systematic reading technique led to a higher yield of severe defects compared to the general reading technique. On a group level, which combined inspectors' contributions, the advantage of a reading technique regarding defect detection effectiveness depended on the size of the group, while the systematic reading technique generally exhibited better defect detection efficiency",2000,0, 1613,Scalable hardware-algorithm for mark-sweep garbage collection,"The memory-intensive nature of object-oriented languages such as C++ and Java has created the need for high-performance dynamic memory management. Object-oriented applications often generate higher memory intensity in the heap region. Thus, a high-performance memory manager is needed to cope with such applications. As today's VLSI technology advances, it becomes increasingly attractive to map software algorithms such as malloc(), free() and garbage collection into hardware. This paper presents a hardware design of a sweeping function (for mark-and-sweep garbage collection) that fully utilizes the advantages of combinational logic. In our scheme, the bit sweep can detect and sweep the garbage in a constant time. Bit-map marking in software can improve the cache performance and reduce number of page faults; however, it often requires several instructions to perform a single mark. In our scheme, only one hardware instruction is required per mark. Moreover, since the complexity of the sweeping phase is often higher than the marking phase, the garbage collection time may be substantially improved. The hardware complexity of the proposed scheme (bit-sweeper) is O(n), where n represents the size of the bit map",2000,0, 1614,New fast binary pyramid motion estimation for MPEG2 and HDTV encoding,"A novel fast binary pyramid motion estimation (FBPME) algorithm is presented in this paper. The proposed FBPME scheme is based on binary multiresolution layers, exclusive-or (XOR) Boolean block matching, and a N-scale tiling search scheme. Each video frame is converted into a pyramid structure of K-1 binary layers with resolution decimation, plus one integer layer at the lowest resolution. At the lowest resolution layer, the N-scale tiling search is performed to select initial motion vector candidates. Motion vector fields are gradually refined with the XOR Boolean block-matching criterion and the N-scale tiling search schemes in higher binary layers. 
FBPME performs several thousand times faster than the conventional full-search block-matching scheme at the same PSNR performance and visual quality. It also dramatically reduces the bus bandwidth and on-chip memory requirement. Moreover, hardware complexity is low due to its binary nature. Fully functional software MPEG-2 MP@ML encoders and Advanced Television Standard Committee high definition television encoders based on the FBPME algorithm have been implemented. FBPME hardware architecture has been developed and is being incorporated into single-chip MPEG encoders. A wide range of video sequences at various resolutions has been tested. The proposed algorithm is also applicable to other digital video compression standards such as H.261, H.263, and MPEG4",2000,0, 1615,The use of PSA for fault detection and characterization in electrical apparatus,"The monitoring of the actual condition of high voltage apparatus has become more and more important in the last years. One well established tool to characterize the actual condition of electric equipment is the measurement and evaluation of partial discharge data. Immense effort has been put into sophisticated statistical software tools to extract meaningful analyses out of data sets, without taking care of relevant correlations between consecutive discharge pulses. In contrast to these classical methods of partial discharge analysis the application of the Pulse Sequence Analysis allows a far better insight into the local defects. The detailed analysis of sequences of discharges in a voltage- and a current-transformer shows that the sequence of the partial discharge signals may change with time, because either different defects are active at different measuring times or a local defect may change with time as a consequence of the discharge activity. Hence for the evaluation of the state of degradation or the classification of the type of defect the analysis of short `homogeneous' sequences or sequence correlated data is much more meaningful than just the evaluation of a set of independently accumulated discharge data. This is demonstrated by the evaluation of measurements performed on different commercial hv-apparatus",2000,0, 1616,Methods based on Petri net for resource sharing estimation,"This work presents two approaches for computing the number of functional units in a hardware/software codesign context. The proposed hardware/software codesign framework uses the Petri net as a common formalism for performing quantitative and qualitative analysis. The use of the Petri net as an intermediate format allows us to analyze properties of the specification and formally compute performance indices which are used in the partitioning process. This paper is devoted to describing the algorithms for functional unit estimation. This work also proposes a method of extending the Petri net model in order to take into account causal constraints provided by the designers. However, an overview of the general hardware/software codesign method is also presented",2000,0, 1617,A practical classification-rule for software-quality models,"A practical classification rule for a SQ (software quality) model considers the needs of the project to use a model to guide targeting software RE (reliability enhancement) efforts, such as extra reviews early in development. Such a rule is often more useful than alternative rules. 
This paper discusses several classification rules for SQ models, and recommends a generalized classification rule, where the effectiveness and efficiency of the model for guiding software RE efforts can be explicitly considered. This is the first application of this rule to SQ modeling that we know of. Two case studies illustrate application of the generalized classification rule. A telecommunication-system case-study models membership in the class of fault-prone modules as a function of the number of interfaces to other modules. A military-system case-study models membership in the class of fault-prone modules as a function of a set of process metrics that depict the development history of a module. These case studies are examples where balanced misclassification rates resulted in more useful and practical SQ models than other classification rules",2000,0, 1618,Signal processing and fault detection with application to CH-46 helicopter data,"Central to Qualtech Systems' mission is its testability and maintenance software (TEAMS) and derivatives. Many systems comprise components equipped with self-testing capability; but, if the system is complex (and involves feedback and if the self-testing itself may occasionally be faulty), tracing faults to a single or multiple causes is difficult. However, even for systems involving many thousands of components the PC-based TEAMS provides essentially real-time system-state diagnosis. Until recently TEAMS operation was passive: its diagnoses were based on whatever data sensors could provide. Now, however, a signal-processing (SP) “frontend” matched to inference needs is available. Many standard signal processing primitives, such as filtering, spectrum analysis and multi-resolution decomposition are available; the SP toolbox is also equipped with a (supervised) classification capability based on a number of decision-making paradigms. This paper is about the SP toolbox. We show its capabilities, and demonstrate its performance on the CH-46 “Westland” data set",2000,0, 1619,Multi-agent diagnosis and control of an air revitalization system for life support in space,"An architecture of inter-operating agents has been developed to provide control and fault management for advanced life support systems in space. In this multi-agent architecture, cooperating autonomous software agents coordinate with human agents, to provide support in novel fault management situations. This architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with elements of the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and fault diagnosis. MIR also identifies novel recovery configurations and the set of commands required to accomplish the recovery. The 3T procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. Human interface extensions have been developed to support human monitoring and control of both 3T and MIR data and activities. This architecture has been exercised for control and fault management of an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed",2000,0, 1620,"A reusable state-based guidance, navigation and control architecture for planetary missions","JPL has embarked on the Mission Data System (MDS) project to produce a reusable, integrated flight and ground software architecture. 
This architecture will then be adapted by future JPL planetary projects to form the basis of their flight and ground software. The architecture is based on identifying the states of the system under consideration. States include aspects of the system that must be controlled to accomplish mission objectives, as well as aspects that are uncontrollable but must be known. The architecture identifies methods to measure, estimate, model, and control some of these states. Some states are controlled by goals, and the natural hierarchy of the system is employed by recursively elaborating goals until primitive control actions are reached. Fault tolerance emerges naturally from this architecture. Failures are detected as discrepancies between state and model-based predictions of state. Fault responses are handled either by re-elaboration of goals, or by failures of goals invoking re-elaboration at higher levels. Failure modes are modelled as possible behaviors of the system, with corresponding state estimation processes. Architectural patterns are defined for concepts such as states, goals, and measurements. Aspects of state are captured in a state-analysis database. UML is used to capture mission requirements as Use Cases and Scenarios. Application of the state-based concepts to specific states is also captured in UML, achieving architectural consistency by adapting base classes for all architectural patterns",2000,0, 1621,Protection of gate movement in hydroelectric power stations,"Movement of the gates is an everyday task performed on a dam of a hydroelectric power station. This operation is often controlled remotely by measuring the positioning of the gates and the level of the current in the driving motors. This method of control cannot, however, detect anomalies, such as asymmetric movement of gates, faults in the drive gearwheel, etc. We therefore decided to devise a new improved system for the protection of gate movement which is described in our paper. It is based on measuring the forces applied to the transmission construction. The transducers with resistive strain gauges are mounted on the frame bearings and the strains are subsequently measured. The output signal from the transducer is proportional to a force applied to the frame. The transducers are installed at the points of the largest strain. For the protection against uneven movement of the left and right chains, the strain transmitters are inserted in the bearings of the main gearwheel, to measure the compression. The whole system is controlled by the microprocessor. The details on sensors, the electronic instrumentation needed, and the software of the controlling computer, are also described in the paper. This system has been tested, and regularly used, on the power stations of Drava river in Slovenia.",2000,0, 1622,An empirical investigation of an object-oriented software system,"The paper describes an empirical investigation into an industrial object oriented (OO) system comprised of 133000 lines of C++. The system was a subsystem of a telecommunications product and was developed using the Shlaer-Mellor method (S. Shlaer and S.J. Mellor, 1988; 1992). From this study, we found that there was little use of OO constructs such as inheritance, and therefore polymorphism. It was also found that there was a significant difference in the defect densities between those classes that participated in inheritance structures and those that did not, with the former being approximately three times more defect-prone. 
We were able to construct useful prediction systems for size and number of defects based upon simple counts such as the number of states and events per class. Although these prediction systems are only likely to have local significance, there is a more general principle that software developers can consider building their own local prediction systems. Moreover, we believe this is possible, even in the absence of the suites of metrics that have been advocated by researchers into OO technology. As a consequence, measurement technology may be accessible to a wider group of potential users",2000,0, 1623,Computer analysis of LOS microwaves links clear air performance,"Microwave links have to be designed such that propagation effects do not reduce the quality of the transmitted signals. Measurements and the derived propagation parameters are analysed and discussed, for Cluj-Napoca county, in order to improve future planning of the radio links",2000,0, 1624,Predicting and measuring quality of service for mobile multimedia,We show how an understanding of human perception and the simulation of a mobile IP network may be used to tackle the relevant issues of providing acceptable quality of service for mobile multimedia,2000,0, 1625,Integrating reliability and timing analysis of CAN-based systems,"The paper outlines and illustrates a reliability analysis method developed with a focus on CAN based automotive systems. The method considers the effect of faults on schedulability analysis and its impact on the reliability estimation of the system, and attempts to integrate both to aid system developers. We illustrate the method by modeling a simple distributed antilock braking system, showing that even in cases where the worst-case analysis deem the system unschedulable, it may be proven to satisfy its timing requirements with a sufficiently high probability. From a reliability and cost perspective, the paper underlines the tradeoffs between timing guarantees, the level of hardware and software faults, and per-unit cost",2000,0, 1626,"Utilizing third harmonic 100% stator ground fault protection, a cogeneration experience","Third harmonic, 100% stator ground fault protection schemes are becoming economically viable for small and mid size generators used in cogeneration applications. Practical considerations must be observed in order that such schemes are successfully applied for different machines. This paper introduces an experience with the applications of 3rd harmonic schemes in cogeneration applications and depicts their advantages and limitations. For a 50 MVA generator, actual measurements of produced third harmonics are portrayed and analyzed. In light of the experience and analysis, applications of certain third harmonic scheme configurations are contemplated",2000,0, 1627,Can metrics help to bridge the gap between the improvement of OO design quality and its automation?,"During the evolution of object-oriented (OO) systems, the preservation of a correct design should be a permanent quest. However, for systems involving a large number of classes and that are subject to frequent modifications, the detection and correction of design flaws may be a complex and resource-consuming task. The use of automatic detection and correction tools can be helpful for this task. Various works have proposed transformations that improve the quality of an OO system while preserving its behavior. 
In this paper, we investigate whether some OO metrics can be used as indicators for automatically detecting situations where a particular transformation can be applied to improve the quality of a system. The detection process is based on analyzing the impact of various transformations on these OO metrics using quality estimation models",2000,0, 1628,A formal mechanism for assessing polymorphism in object-oriented systems,"Although quality is not easy to evaluate since it is a complex concept composed of different aspects, several properties that make a good object-oriented design have been recognized and widely accepted by the software engineering community. We agree that both the traditional and the new object-oriented properties should be analyzed in assessing the quality of object-oriented design. However, we believe that it is necessary to pay special attention to the polymorphism concept and metric, since they should be considered one of the key concerns in determining the quality of an object-oriented system. In this paper, we have given a rigorous definition of polymorphism. On top of this formalization we propose a metric that provides an objective and precise mechanism to detect and quantify dynamic polymorphism. The metric takes information coming from the first stages of the development process giving developers the opportunity to early evaluate and improve the quality of the software product. Finally, a first approach to the theoretical validation of the metric is presented",2000,0, 1629,Software quality prediction using mixture models with EM algorithm,"The use of the statistical technique of mixture model analysis as a tool for early prediction of fault-prone program modules is investigated. The expectation-maximum likelihood (EM) algorithm is engaged to build the model. By only employing software size and complexity metrics, this technique can be used to develop a model for predicting software quality even without the prior knowledge of the number of faults in the modules. In addition, Akaike Information Criterion (AIC) is used to select the model number which is assumed to be the class number into which the program modules should be classified. The technique is successful in classifying software into fault-prone and non fault-prone modules with a relatively low error rate, providing a reliable indicator for software quality prediction",2000,0, 1630,On the determination of an appropriate time for ending the software testing process,"Software testing is widely used as a means of increasing software reliability. The prohibitive nature of exhaustive testing has given rise to the problem of determining when a system has reached an acceptable reliability state and can be released. This has probably become the hardest problem facing a project manager. In this paper, a stopping rule that indicates the appropriate time at which to stop testing is presented. The rule automatically adapts to modifications in the assumptions, since it can be applied under any software error-counting model. An investigation of the properties of the rule is described and the results obtained after applying it to a set of real data in conjunction with two statistical models are presented",2000,0, 1631,Object oriented design function points,"Estimating different characteristics viz., size, cost, etc. of software during different phases of software development is required to manage the resources effectively. Function points measure can be used as an input to estimate these characteristics of software. 
The Traditional Function Point Counting Procedure (TFPCP) can not be used to measure the functionality of an Object Oriented (OO) system. This paper suggests a counting procedure to measure the functionality of an OO system during the design phase from a designers' perspective. It is adapted from the TFPCP. The main aim of this paper is to use all the available information during the OO design phase to estimate Object Oriented Design Function Points (OODFP). The novel feature of this approach is that it considers all the basic concepts of OO systems such as inheritance, aggregation, association and polymorphism",2000,0, 1632,Quality metrics of object oriented design for software development and re-development,"The quality of software has an important bearing on the financial and safety aspects in our daily life. Assessing quality of software at the design level will provide ease and higher accuracy for users. However, there is a great gap between the rapid adoption of Object Oriented (OO) techniques and the slow speed of developing corresponding object oriented metric measures, especially object oriented design measures. To tackle this issue, we look into measuring the quality of Object Oriented designs during both software development and re-development processes. A set of OO design metrics has been derived from the existing work found in literature and been further extended. The paper also presents software tools for assisting software re-engineering using the metric measures developed; this has been illustrated with an example of the experiments conducted during our research, finally concluded with the lessons learned and intended further work",2000,0, 1633,The 9 quadrant model for code reviews,"Discusses a decision-making model which can be used to determine the efficiency of a code review process. This model is based on statistical techniques such as control charts. The model has nine quadrants, each of which depicts a range of values of the cost and yield of a code review. The efficiency of the code review in detecting defects is determined by taking inputs from past data, in terms of the costs and yields of those code reviews. This estimate also provides an in-process decision-making tool. Other tools can be used effectively, in conjunction with this model, to plan for code reviews and to forecast the number of defects that could be expected in the reviews. This model can be successfully used to decide what the next step of the operational process should be. The decisions taken using this model help to reduce the number of defects present in the software delivered to the customer",2000,0, 1634,Identifying key attributes of projects that affect the field quality of communication software,"The authors identify key attributes of projects in which the number of problem reports after release is remarkable in the communication software. After several interviews with software project managers, we derived candidate attributes of importance to the projects. To find out the most influential attributes of the projects, we conducted statistical analysis using a set of metrics data related to the candidate attributes and the number of problem reports after release. 
As a result, we found that two metrics concerning the origin's quality and the changes in specification are useful to estimate the field quality",2000,0, 1635,Inserting software fault measurement techniques into development efforts,"Over the past several years, techniques for estimating software fault content based on measurements of a system's structural evolution during its implementation have been developed. Proper application of the techniques will yield a detailed map of the faults that have been inserted into the system. This information can be used by development organizations to better control the number of residual faults in the operational system. There are several issues that must be resolved if these techniques are to be successfully inserted into a development effort. These issues are identified, as are possibilities for their resolution",2000,0, 1636,A phase-based approach to creating highly reliable software,"Software reliability engineering includes: software reliability measurement, which includes estimation and prediction, with the help of software reliability models established in the literature; the attributes and metrics of product design, development process, system architecture, software operational environment, and their implications on reliability; and the application of this knowledge in specifying and guiding system software architecture, development, testing, acquisition, use, and maintenance. My position is that we should attack the problem of software reliability engineering in three phases: modeling and analysis phase; design and implementation phase; and testing and measurement phase. All these phases deal with the management of software faults and failures",2000,0, 1637,Defect-based reliability analysis for mission-critical software,"Most software reliability methods have been developed to predict the reliability of a program using only data gathered during the testing and validation of a specific program. Hence, the confidence that can be attained in the reliability estimate is limited since practical resource constraints can result in a statistically small sample set. One exception is the Orthogonal Defect Classification (ODC) method, which uses data gathered from several projects to track the reliability of a new program. Combining ODC with root-cause analysis can be useful in many applications where it is important to know the reliability of a program for a specific type of fault. By focusing on specific classes of defects, it becomes possible to (a) construct a detailed model of the defect and (b) use data from a large number of programs. In this paper, we develop one such approach and demonstrate its application to modeling Y2K defects",2000,0, 1638,Effort-index-based software reliability growth models and performance assessment,"The authors show that the logistic testing effort function is practically acceptable/helpful for modeling the software reliability growth and providing a reasonable description of resource consumption. Therefore, in addition to the exponential shaped models, we integrate the logistic testing effort function into an S-shaped model for further analysis. The model is designated as the Yamada Delayed S-shaped model. A realistic failure data set is used in the experiments to demonstrate the estimation procedures and results. Furthermore, the analysis of the proposed model under an imperfect debugging environment is investigated. 
In fact, from these experimental results and discussions, it is apparent that the logistic testing-effort function is very suitable for making estimations of resource consumption during the software development/testing phase",2000,0, 1639,Design of an improved watchdog circuit for microcontroller-based systems,"This paper presents an improved design for a watchdog circuit. Previously, watchdog timers detected refresh inputs that were slower than usual. If a failure causes the microcontroller to produce faster than usual refresh inputs, the watchdog will not detect it. This new design detects failures that produce faster than usual as well as slower than usual refresh inputs. This will greatly improve the reliability of the system protected by this new design.",2000,0, 1640,IEEE 1232 and P1522 standards,"The 1232 family of standards were developed to provide standard exchange formats and software services for reasoning systems used in system test and diagnosis. The exchange formats and services are based on a model of information required to support test and diagnosis. The standards were developed by the Diagnostic and Maintenance Control (D&MC) subcommittee of IEEE SCC20. The current efforts by the D&MC are a combined standard made up of the 1232 family, and a standard on Testability and Diagnosability Metrics, P1522. The 1232 standards describe a neutral exchange format so one diagnostic reasoner can exchange model information with another diagnostic reasoner. In addition, software interfaces are defined whereby diagnostic tools can be developed to process the diagnostic information in a consistent and reliable way. The objective of the Testability and Diagnosability Metrics standard is to provide notionally correct and mathematically precise definitions of testability measures that may be used to either measure the testability characteristics of a system, or predict the testability of a system. The end purpose is to provide an unambiguous source for definitions of common and uncommon testability and diagnosability terms such that each individual encountering it can know precisely what that term means. This paper describes the 1232 and P1522 standards and details the recent changes in the Information models, restructured higher order services and simplified conformance requirements",2000,0, 1641,Using databases to streamline Test Program Set development,"Databases can be used to improve efficiency and accuracy during Test Program Set (TPS) development. Key elements of the development process such as requirements analysis, establishing a fault universe, failure modes, standardization, Operational Test Program Set (OTPS) groupings, fault detection and isolation requirements, hardware design, and TPS performance, can easily be controlled and monitored using a Rapid Application Development (RAD) environment. This paper describes the guidelines, using RAD, to establish and design a group of Units Under Test (UUTs) resulting in improvements in quality, efficiency, and accuracy that are incorporated into each TPS",2000,0, 1642,FastSplats: optimized splatting on rectilinear grids,"Splatting is widely applied in many areas, including volume, point-based and image-based rendering. Improvements to splatting, such as eliminating popping and color bleeding, occlusion-based acceleration, post-rendering classification and shading, have all been recently accomplished. These improvements share a common need for efficient frame-buffer accesses. 
We present an optimized software splatting package, using a newly designed primitive, called FastSplat, to scan-convert footprints. Our approach does not use texture mapping hardware, but supports the whole pipeline in memory. In such an integrated pipeline, we are then able to study the optimization strategies and address image quality issues. While this research is meant for a study of the inherent trade-off of splatting, our renderer, purely in software, achieves 3- to 5-fold speedups over a top-end texture hardware implementation (for opaque data sets). We further propose a method of efficient occlusion culling using a summed area table of opacity. 3D solid texturing and bump mapping capabilities are demonstrated to show the flexibility of such an integrated rendering pipeline. A detailed numerical error analysis, in addition to the performance and storage issues, is also presented. Our approach requires low storage and uses simple operations. Thus, it is easily implementable in hardware.",2000,0, 1643,Thresholds for object-oriented measures,"A practical application of object oriented measures is to predict which classes are likely to contain a fault. This is contended to be meaningful because object oriented measures are believed to be indicators of psychological complexity, and classes that are more complex are likely to be faulty. Recently, a cognitive theory was proposed suggesting that there are threshold effects for many object oriented measures. This means that object oriented classes are easy to understand as long as their complexity is below a threshold. Above that threshold their understandability decreases rapidly, leading to an increased probability of a fault. This occurs, according to the theory, due to an overflow of short-term human memory. If this theory is confirmed, then it would provide a mechanism that would explain the introduction of faults into object oriented systems, and would also provide some practical guidance on how to design object oriented programs. The authors empirically test this theory on two C++ telecommunications systems. They test for threshold effects in a subset of the Chidamber and Kemerer (CK) suite of measures (S. Chidamber and C. Kemerer, 1994). The dependent variable was the incidence of faults that lead to field failures. The results indicate that there are no threshold effects for any of the measures studied. This means that there is no value for the studied CK measures where the fault-proneness changes from being steady to rapidly increasing. The results are consistent across the two systems. Therefore, we can provide no support to the posited cognitive theory",2000,0, 1644,Quantitative software reliability modeling from testing to operation,"We first describe how several existing software reliability growth models based on nonhomogeneous Poisson processes (NHPPs) can be derived based on a unified theory for NHPP models. Under this general framework, we can verify existing NHPP models and derive new NHPP models. The approach covers a number of known models under different conditions. Based on these approaches, we show a method of estimating and computing software reliability growth during the operational phase. We can use this method to describe the transitions from the testing phase to operational phase. That is, we propose a method of predicting the fault detection rate to reflect changes in the user's operational environments. 
The proposed method offers a quantitative analysis on software failure behavior in field operation and provides useful feedback information to the development process",2000,0, 1645,Assessing the cost-effectiveness of inspections by combining project data and expert opinion,"There is a general agreement among software engineering practitioners that software inspections are an important technique to achieve high software quality at a reasonable cost. However, there are many ways to perform such inspections and many factors that affect their cost-effectiveness. It is therefore important to be able to estimate this cost-effectiveness in order to monitor it, improve it, and convince developers and management that the technology and related investments are worthwhile. This work proposes a rigorous but practical way to do so. In particular, a meaningful model to measure cost-effectiveness is selected and a method to determine the cost-effectiveness by combining project data and expert opinion is proposed. To demonstrate the feasibility of the proposed approach, the results of a large-scale industrial case study are presented",2000,0, 1646,Contributing to the bottom line: optimizing reliability cost schedule tradeoff and architecture scalability through test technology,"A challenging problem in software testing is finding the optimal point at which costs justify the stop-test decision. We first present an economic model that can be used to evaluate the consequences of various stop-test decisions. We then discuss two approaches for assessing performance, automated load test generation in the context of empirical testing and performance modeling, and illustrate how these techniques can affect the stop-test decision. We then illustrate the application of these two techniques to evaluating the performance of Web servers that perform significant server-side processing through object-oriented (OO) computing. Implications of our work for Web server performance evaluation in general are discussed",2000,0, 1647,How to measure the impact of specific development practices on fielded defect density,"This author has mathematically correlated specific development practices to defect density and probability of on time delivery. She summarizes the results of this ongoing study that has evolved into a software prediction modeling and management technique. She has collected data from 45 organizations developing software primarily for equipment or electronic systems. Of these 45 organizations, complete and unbiased delivered defect data and actual schedule delivery data was available for 17 organizations. She presents the mathematical correlation between the practices employed by these organizations and defect density. This correlation can be and is used to: predict defect density; and improve software development practices for the best return on investment",2000,0, 1648,Analyzing testability on data flow designs,"High testability is a strongly desired feature of software, since it tends to make the validation phase more efficient in exposing faults during testing, and consequently it increases the quality of the end-product. Furthermore, testability is a criterion of crucial importance to software developers, since the sooner it can be estimated, the better the software architecture will be organized to improve subsequent maintenance. This paper is concerned with the testability of data flow software designs, its definition, and the axiomatization of its expected behavior. 
This behavior is expressed in relation to basic operations that are applicable on designs, and to the dedicated test strategies which are selected. Measurements are proposed which are consistent with the stated axioms. The whole approach is demonstrated using design specifications of embedded software developed in the avionics industry",2000,0, 1649,A software falsifier,"A falsifier is a tool for discovering errors by static source-code analysis. Its goal is to discover them while requiring minimal programmer effort. In contrast to lint-like tools or verifiers, which try to maximize the number of errors reported at the expense of allowing “false errors”, a falsifier's goal is to guarantee no false errors. To further minimize programmer effort, no specification or extra information about the program is required. That, however, does not preclude project-specific information from being built in. The class of errors that are detectable without any specification is important not only because of the low cost of detection, but also because it includes errors of portability, irreproducible behavior, etc., which are very expensive to detect by testing. This paper describes the design and implementation of such a falsifier, and reports on experience with its use for design automation software. The main contribution of this work lies in combining data-flow analysis with symbolic execution to take advantage of their relative advantages",2000,0, 1650,Improving tree-based models of software quality with principal components analysis,"Software quality classification models can predict which modules are to be considered fault-prone, and which are not, based on software product metrics, process metrics and execution metrics. Such predictions can be used to target improvement efforts to those modules that need them the most. Classification-tree modeling is a robust technique for building such software quality models. However, the model structure may be unstable, and accuracy may suffer when the predictors are highly correlated. This paper presents an empirical case study of four releases of a very large telecommunications system, which shows that the tree-based models can be improved by transforming the predictors with principal components analysis, so that the transformed predictors are not correlated. The case study used the regression-tree algorithm in the S-Plus package and then applied a general decision rule to classify the modules",2000,0, 1651,A methodology for architectural-level risk assessment using dynamic metrics,"Risk assessment is an essential process of every software risk management plan. Several risk assessment techniques are based on the subjective judgement of domain experts. Subjective risk assessment techniques are human-intensive and error-prone. Risk assessment should be based on product attributes that we can quantitatively measure using product metrics. This paper presents a methodology for risk assessment at the early stages of the development lifecycle, namely the architecture level. We describe a heuristic risk assessment methodology that is based on dynamic metrics obtained from UML specifications. The methodology uses dynamic complexity and dynamic coupling metrics to define complexity factors for the architecture elements (components and connectors). Severity analysis is performed using FMEA (failure mode and effect analysis), as applied to architecture simulation models. 
We combine severity and complexity factors to develop heuristic risk factors for the architecture components and connectors. Based on component dependency graphs that were developed earlier for reliability analysis, and using analysis scenarios, we develop a risk assessment model and a risk analysis algorithm that aggregates the risk factors of components and connectors to the architectural level. We show how to analyze the overall risk factor of the architecture as the function of the risk factors of its constituting components and connectors. A case study of a pacemaker is used to illustrate the application of the methodology",2000,0, 1652,On the repeatability of metric models and metrics across software builds,"We have developed various software metrics models over the years: Boolean discriminant functions (BDFs); the Kolmogorov-Smirnov distance; derivative calculations for assessing achievable quality; a stopping rule; point and confidence interval estimates of quality; relative critical value deviation metrics; and nonlinear regression functions. We would like these models and metrics to be repeatable across the n builds of a software system. The advantage of repeatability is that models and metrics only need to be developed and validated once on build 1, and then applied n-1 times without modification to subsequent builds, with considerable savings in analysis and computational effort. In practical terms, this approach involves using the same model parameters that were validated and applying them unchanged on subsequent builds. The disadvantage is that the quality and metrics data of builds 2, ..., n, which varies across builds, is not utilized. We make a comparison of this approach with one that involves validating models and metrics on each build i and applying them only on build i+1, and then repeating the process. The advantage of this approach is that all available data are used in the models and analysis but at considerable cost in effort. We report on experiments involving large sets of discrepancy reports and metrics data on the Space Shuttle flight software, where we compare the predictive accuracy and effort of the two approaches for BDFs, critical values, derivative quality and inspection calculations, and the stopping rule",2000,0, 1653,Modeling fault-prone modules of subsystems,"Software developers are very interested in targeting software enhancement activities prior to release, so that reworking of faulty modules can be avoided. Credible predictions of which modules are likely to have faults discovered by customers can be the basis for selecting modules for enhancement. Many case studies in the literature build models to predict which modules will be fault-prone without regard to the subsystems defined by the system's functional architecture. Our hypothesis is this: models that are specially built for subsystems will be more accurate than a system-wide model applied to each subsystem's modules. In other words, the subsystem that a module belongs to can be valuable information in software quality modeling. This paper presents an empirical case study which compared software quality models of an entire system to models of a major functional subsystem. The study modeled a very large telecommunications system with classification trees built by the CART (classification and regression trees) algorithm. 
For predicting subsystem quality, we found that a model built with training data on the subsystem alone was more accurate than a similar model built with training data on the entire system. We concluded that the characteristics of the subsystem's modules were not similar to those of the system as a whole, and thus, information on subsystems can be valuable",2000,0, 1654,Software reliability and maintenance concept used for automatic call distributor MEDIO ACD,"The authors present the software reliability and maintenance concept, which is used in the software development, testing, and maintenance process, for automatic call distributor MEDIO ACD. The concept has been successfully applied on systems, which are installed and fully operational in Moscow and Saint Petersburg, Russia. The authors concentrate on two main issues: (i) set of fault-tolerant mechanisms needed for the system exploitation (error logging, checkpoint-restart, overload protection and tandem configuration support); (ii) MEDIO ACD software maintenance concept, in which the quality of the new software update is predicted on the basis of the current update's metrics and quality, and the new update's metrics. This forecast aids software maintenance efficiency, and cost reduction",2000,0, 1655,Resource sharing estimation by Petri nets in PISH hardware/software co-design system,"The article presents two approaches for computing the number of functional units in a hardware/software codesign context. The proposed hardware/software codesign framework uses the Petri net as a common formalism for performing quantitative and qualitative analysis. The use of the Petri net as an intermediate format allows us to analyze properties of the specification and formally compute performance indices which are used in the partitioning process. The paper is devoted to describing the algorithms for functional unit estimation. The work also proposes a method of extending the Petri net model in order to take into account causal constraints provided by the designers. However, an overview of the general hardware/software codesign method is also presented",2000,0, 1656,The feasibility of applying object-oriented technologies to operational flight software,"As object-oriented technologies move from the laboratory to the mainstream, companies are beginning to realize cost benefits in terms of software reuse and reduced development time. These benefits have been elusive to developers of real-time flight software. Issues such as processing latencies, validated performance of mission critical functions, and integration of legacy code have inhibited the effective use of object-oriented technologies in this domain. Emerging design languages, development tools, and processes offer the potential to address the application of object technologies to real-time operational flight software development. This paper examines emerging object-based technologies and assesses their applicability to operational flight software. It includes an analysis that compares and contrasts the current Comanche software development process with one based on object-oriented concepts",2000,0, 1657,Single byte error control codes with double bit within a block error correcting capability for semiconductor memory systems,"Computer memory systems, when exposed to strong electromagnetic waves or radiation, are highly vulnerable to multiple random bit errors. Under this situation, we cannot apply existing SEC-DED or SbEC capable codes because they provide insufficient error control performance. 
This correspondence considers the situation where two random bits in a memory chip are corrupted by strong electromagnetic waves or radioactive particles and proposes two classes of codes that are capable of correcting random double bit errors occurring within a chip. The proposed codes, called Double bit within a block Error Correcting-Single byte Error Detecting ((DEC)B-SbED) code and Double bit within a block Error Correcting-Single byte Error Correcting ((DEC)B-Sb EC) code, are suitable for recent computer memory systems",2000,0, 1658,An experimental evaluation of the effectiveness of automatic rule-based transformations for safety-critical applications,"Over the last years, an increasing number of safety-critical tasks have been demanded of computer systems. In particular, safety-critical computer-based applications are hitting markets where costs is a major issue, and thus solutions are required which conjugate fault tolerance with low costs. In this paper, a software-based approach for developing safety-critical applications is analyzed. By exploiting an ad-hoc tool implementing the proposed technique, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the hardened applications. Moreover, a comparison of the proposed techniques with the Algorithm-Based Fault Tolerance (ABFT) approach is proposed. Experimental results show that the proposed approach is far more effective than ABFT in terms of fault detection capability when injecting transient faults in data and code memory, at a cost of an increased memory overhead. Moreover, the performance penalty introduced by the proposed technique is comparable, and sometimes lower, than that ABFT requires",2000,0, 1659,Fuzzy techniques evaluate sausage quality,"The purpose of this study is to reproduce, with software, the crusting evaluation provided by experts of an image of a slice acquired by a camera. The software gives promising results that are very close to the evaluations provided by inspection experts. It appears possible to use this software as a tool to assist operators' evaluation of sausage quality by reducing the evaluation time.",2000,0, 1660,Spiral interpolation algorithms for multislice spiral CT. II. Measurement and evaluation of slice sensitivity profiles and noise at a clinical multislice system,"For pt. I see ibid., vol. 19, no. 9, p. 822-34 (2000). The recently introduced multislice data acquisition for computed tomography (CT) is based on multirow detector design, increased rotation speed, and advanced z-interpolation and z-filtering algorithms. The authors evaluated slice sensitivity profiles (SSPs) and noise of a clinical multislice spiral CT (MSCT) scanner with M=4 simultaneously acquired slices and adaptive axial interpolator (AAI) reconstruction software. SSPs were measured with a small gold disk of 50 μm thickness and 2-mm diameter located at the center of rotation (COR) and 100 mm off center. The standard deviation of CT values within a 20-cm water phantom was used as a measure of image noise. With a detector slice collimation of S=1.0 mm, the authors varied spiral pitch p from 0.25 to 2.0 in steps of 0.025. Nominal reconstructed slice thicknesses were 1.25, 1.5, and 2.0 mm. For all possible pitch values, the authors found the full-width at half maximum (FWHM) of the respective sensitivity profile at the COR equivalent to the selected nominal slice thickness. 
The profiles at 100 mm off center are broadened less than 7% on the average compared with the FWHM at the COR. In addition, variation of the full-width at tenth maximum (FWTM) at the COR was below 10% for p≤1.75. Within this range, image noise varied less than 10% with respect to the mean noise level. The slight increase in measured slice-width above p=1.75 for nominal slice-widths of 1.25 and 1.50 mm is accompanied by a decrease of noise according to the inverse square root relationship. The MSCT system that the authors scrutinized provides reconstructed slice-widths and image noise, which can be regarded as constant within a wide range of table speeds. With respect to this, MSCT is superior to single-slice spiral CT. These facts can be made use of when defining and optimizing clinical protocols: the spiral pitch can be selected almost freely, and scan protocols can follow the diagnostic requirements without technical restrictions. In summary, MSCT offers constant image quality while scan times are reduced drastically. Volume scans with three-dimensional (3-D) isotropic resolution are routinely feasible for complete anatomical regions.",2000,0, 1661,"Correctly assessing the ""-ilities"" requires more than marketing hype","Understanding key system qualities can better equip you to correctly assess the technologies you manage. Dot-coms and enterprise systems often use terms like scalability, reliability and availability to describe how well they meet current and future service-level expectations. These ilities characterize an IT solution's architectural and engineering qualities. They collectively provide a vocabulary for discussing an IT solution's performance potential amid ever-changing IT requirements. The paper considers the role that ilities play in the solution architecture. They generally fall into four categories: strategic, systemic, service and user.",2000,0, 1662,Predicting testability of program modules using a neural network,"J.M. Voas (1992) defines testability as the probability that a test case will fail if the program has a fault. It is defined in the context of an oracle for the test, and a distribution of test cases, usually emulating operations. Because testability is a dynamic attribute of software, it is very computation-intensive to measure directly. The paper presents a case study of real time avionics software to predict the testability of each module from static measurements of source code. The static software metrics take much less computation than direct measurement of testability. Thus, a model based on inexpensive measurements could be an economical way to take advantage of testability attributes during software development. We found that neural networks are a promising technique for building such predictive models, because they are able to model nonlinearities in relationships. Our goal is to predict a quantity between zero and one whose distribution is highly skewed toward zero. This is very difficult for standard statistical techniques. In other words, high testability modules present a challenging prediction problem that is appropriate for neural networks",2000,0, 1663,Improved testing using failure cost and intensity profiles,"The cost of field failures is arguably a very important factor in judging the quality of software. It would be nice if field failure costs could be used before the software is released so as to improve its quality. 
We introduced the expected cost of field failures (Ntafos, 1997) as a measure of test effectiveness and demonstrated that significant benefits are possible even with rather crude cost and failure rate estimates. We discuss the main issues that arise in trying to reduce the cost of field failures through better test effort allocation. We introduce improved models and strategies and present analytical and experimental (simulation) evaluations",2000,0, 1664,Modeling the effects of combining diverse software fault detection techniques,"Considers what happens when several different fault-finding techniques are used together. The effectiveness of such multi-technique approaches depends upon a quite subtle interplay between their individual efficacies. The modeling tool we use to study this problem is closely related to earlier work on software design diversity which showed that it would be unreasonable even to expect software versions that were developed truly independently to fail independently of one another. The key idea was a “difficulty function” over the input space. Later work extended these ideas to introduce a notion of “forced” diversity. In this paper, we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault-finding effectiveness and diversity, and show how these might be used to give guidance for the optimal application of different fault-finding procedures to a particular program. The effects on reliability of repeated applications of a particular fault-finding procedure are not statistically independent; such an incorrect assumption of independence will always give results that are too optimistic. For diverse fault-finding procedures, it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. Diversity of fault-finding procedures is a good thing and should be applied as widely as possible. The model is illustrated using some data from an experimental investigation into diverse fault-finding on a railway signalling application",2000,0, 1665,"A statistical, nonparametric methodology for document degradation model validation","Printing, photocopying, and scanning processes degrade the image quality of a document. Statistical models of these degradation processes are crucial for document image understanding research. In this paper, we present a statistical methodology that can be used to validate local degradation models. This method is based on a nonparametric, two-sample permutation test. Another standard statistical device, the power function, is then used to choose between algorithm variables such as distance functions. Since the validation and the power function procedures are independent of the model, they can be used to validate any other degradation model. A method for comparing any two models is also described. It uses p-values associated with the estimated models to select the model that is closer to the real world.",2000,0, 1666,Pro-active QoS scheduling algorithms for real-time multimedia applications in communication system,"This paper studies the performance of the OccuPancy Adjusting (OCP A) dynamic QoS scheduling algorithm and two newly proposed dynamic QoS scheduling algorithms, known as the dynamic QoS scheduling algorithm with delay estimation and the hybrid dynamic QoS scheduling algorithm. 
The newly designed algorithms are intended to reduce the propagated delay caused by transient packets that will eventually be dropped due to expired delay. The essence of these new algorithms is to apply a pro-active approach in assessing the admission of packets into the buffer, which is based on the estimated delay that is likely to be experienced by the respective packets. The estimation algorithm is driven by a recursive calculation of the estimated delay, which is influenced by the buffer occupancy. The results obtained through the simulation model have indicated that the two newly proposed algorithms are able to improve the average delay, buffer requirements and packet loss ratio for delay sensitive traffic, as compared to the OCPA algorithm",2000,0, 1667,"Design and construction of a PC-based partial discharge analyzer using FPGA","This paper presents the implementation of the PC-based PD (partial discharge) analyzer. This PD analyzer is capable of measuring both PD level and IEC integrated quantities and analyzing PD data stored in a 3-dimensional distribution. FPGA (field programmable gate arrays) is the main IC in the digital part for circuits implementation to perform real-time data processing, controlling and interfacing to a PC. In the analysis process, the Hn(φ,q) distribution is derived from the recorded PD data stream. Fractal features are calculated from the surface of the distribution. Finally, automatic classification of defects is performed. Real-time measurement and display on a Windows98 platform are achieved by using the VxD device driver and the visual-style software developed in C++ language. The test result shows that the characteristic of the PD analyzer complies with IEC Publ. 60270",2000,0, 1668,JavaSymphony: a system for development of locality-oriented distributed and parallel Java applications,"Most Java-based systems that support portable parallel and distributed computing either require the programmer to deal with intricate low-level details of Java which can be a tedious, time-consuming and error-prone task, or prevent the programmer from controlling locality of data. In this paper we describe JavaSymphony, a programming paradigm for distributed and parallel computing that provides a software infrastructure for wide classes of heterogeneous systems ranging from small-scale cluster computing to large scale wide-area meta-computing. The software infrastructure is written entirely in Java and runs on any standard compliant Java virtual machine. In contrast to most existing systems, JavaSymphony provides the programmer with the flexibility to control data locality and load balancing by explicit mapping of objects to computing nodes. Virtual architectures are specified to impose a virtual hierarchy on a distributed system of physical computing nodes. Objects can be mapped and dynamically migrated to arbitrary components of virtual architectures. A high-level API to hardware/software system parameters is provided to control mapping, migration, and load balancing of objects. Objects can interact through synchronous, asynchronous, and one-sided method invocation. Selective remote class loading may reduce the overall memory requirement of an application. Moreover, objects can be made persistent by explicitly storing and loading objects to/from external storage. A prototype of the JavaSymphony software infrastructure has been implemented.
Preliminary experiments on a heterogeneous cluster of workstations are described that demonstrate reasonable performance values",2000,0, 1669,Toward a more reliable theory of software reliability,"The notions of time and the operational profile incorporated into software reliability are incomplete. Reliability should be redefined as a function of application complexity, test effectiveness, and operating environment. We do not yet have a reliability equation that incorporates application complexity, test effectiveness, test suite diversity, and a fuller definition of the operational profile. We challenge the software reliability community to consider these ideas in future models. The reward for successfully doing so likely will be the widespread adoption of software reliability prediction by the majority of software publishers.",2000,0, 1670,The nonredundant error correction differential detection in soft-decision decoding,"This paper proposes the application of nonredundant error correction (NEC) in soft-decision decoding. Combined with differential demodulation of π/4DQPSK, we emphasize the analysis of the new algorithm. A practical implementation scheme is presented. The BER performance and correction capability of NEC is investigated by computer simulation in AWGN and Rician fading channels. The simulation results show that the performance improvement of the proposed algorithm is superior to conventional differential demodulation by 1.4 dB at a BER of 10^-4 in the AWGN channel. In the Rician channel there is the same notable performance improvement. The algorithm makes NEC able to be used in soft-decision decoding, which is widely applied in practical satellite communication systems. It overcomes the limitation that NEC could only be used in hard-decision systems. Therefore, the system capacity and communication quality is notably improved. It is worthwhile for satellite communication for which power and spectrum are limited",2000,0, 1671,Modeling software quality: the Software Measurement Analysis and Reliability Toolkit,"The paper presents the Software Measurement Analysis and Reliability Toolkit (SMART) which is a research tool for software quality modeling using case based reasoning (CBR) and other modeling techniques. Modern software systems must have high reliability. Software quality models are tools for guiding reliability enhancement activities to high risk modules for maximum effectiveness and efficiency. A software quality model predicts a quality factor, such as the number of faults in a module, early in the life cycle in time for effective action. Software product and process metrics can be the basis for such fault predictions. Moreover, classification models can identify fault prone modules. CBR is an attractive modeling method based on automated reasoning processes. However, to our knowledge, few CBR systems for software quality modeling have been developed. SMART addresses this area. There are currently three types of models supported by SMART: classification based on CBR, CBR classification extended with cluster analysis, and module-order models, which predict the rank-order of modules according to a quality factor. An empirical case study of a military command, control, and communications system applied SMART at the end of coding.
The models built by SMART had a level of accuracy that could be very useful to software developers",2000,0, 1672,A genetic algorithm-based system for generating test programs for microprocessor IP cores,"The current digital systems design trend is quickly moving toward a design-and-reuse paradigm. In particular, intellectual property cores are becoming widely used. Since the cores are usually provided as encrypted gate-level netlist, they raise several testability problems. The authors propose an automatic approach targeting processor cores that, by resorting to genetic algorithms, computes a test program able to attain high fault coverage figures. Preliminary results are reported to assess the effectiveness of our approach with respect to a random approach",2000,0, 1673,Transforming supervised classifiers for feature extraction,"Supervised feature extraction is used in data classification and (unlike unsupervised feature extraction) it uses class labels to evaluate the quality of the extracted features. It can be computationally inefficient to perform exhaustive searches to find optimal subsets of features. This article proposes a supervised linear feature extraction algorithm based on the use of multivariate decision trees. The main motivation in proposing this new approach to feature extraction is to reduce the computation time required to induce new classifiers which are required to evaluate every new subset of features. The new feature extraction algorithm proposed uses an approach that is similar to the wrapper model method used in feature selection. In order to evaluate the performance of the proposed algorithm, several tests with real-world data have been performed. The fundamental importance of this new feature extraction method is found in its ability to significantly reduce the computational time required to extract features from large databases",2000,0, 1674,VerifyESD: a tool for efficient circuit level ESD simulations of mixed-signal ICs,"For many classes of technologies and circuits, it is beneficial to perform circuit simulations for ESD design, verification, and performance prediction. This is particularly true for mixed-signal ICs, where complex interaction between I/Os and multiple power supplies make manual analysis difficult and error prone. Unfortunately, high node and component counts typically prohibit simulations of an entire circuit. Thus, a manual intervention by the designer is usually required to minimize the circuit size. This paper introduces a new tool which automatically reduces the number of voltage nodes per ESD simulation by including only those devices that are necessary. In addition, a simple method for modeling ESD device failure while maintaining compatibility with existing CAD tools and libraries is discussed.",2000,0, 1675,Fault-tolerant Ethernet middleware for IP-based process control networks,"We present an efficient middleware-based fault-tolerant Ethernet (FTE) developed for process control networks. Our approach is unique and practical in the sense that it requires no change to commercial off-the-shelf hardware (switch, hub, Ethernet physical link, and network interface card) and software (commercial Ethernet NIC card driver and standard protocol such as TCP/IP) yet it is transparent to IP-based applications. The FTE performs failure detection and recovery for handling multiple points of network faults and supports communications with non-FTE-capable devices. 
Our experimentation shows that FTE performs efficiently, achieving less than 1-ms end-to-end swap time and less than 2-sec failover time, regardless of the concurrent application and system loads. In this paper, we describe the FTE architecture, the challenging technical issues addressed, our performance evaluation results, and the lessons learned in design and development of such an open-network-based fault-tolerant network",2000,0, 1676,Globecom '00 - IEEE. Global Telecommunications Conference. Conference Record (Cat. No.00CH37137),The following topics were dealt with: next generation detection and channel estimation; third generation data communication; alternative multiple access techniques; CDMA system design and performance; smart antennas and software defined radio; wireless networks; wideband 3G systems; system adaptation and optimization; multimedia communications; active router queue management; transmission control protocol; Internet applications; multicast communications; Internet routing; network measurement and characterization; network routing; switch architectures; QoS and congestion control; performance analysis in long range dependence; OFDM; turbo codes and iterative decoding; coding; fading and diversity; space time codes; equalization and estimation; multiple antennas; satellite network protocols and IP QoS; LEO network architectures and analysis; optical networks performance; enabling technologies; WDM routing and wavelength assignment; WDM network protocols; multimedia systems; wireless and third generation network advances; future generation wireless data and voice systems; operations and performance of mobility management; advanced techniques for future radio systems; radio resource management; QoS issues in IP networks; modulation and coding; signal processing; wireless Internet; mobile ad hoc networks; congestion control and rate adaptation; performance evaluation; quality management and control of ATM networks; communication networks design and performance evaluation; space time systems; signal processing and coding for data storage,2000,0, 1677,Self-calibration of metrics of Java methods,"Self-calibration is a new technique for the study of internal product metrics, sometimes called ""observations"" and calibrating these against their frequency, or probability of occurring in common programming practice (CPP). Data gathering and analysis of the distribution of observations is an important prerequisite for predicting external qualities, and in particular software complexity. The main virtue of our technique is that it eliminates the use of absolute values in decision-making, and allows gauging local values in comparison with a scale computed from a standard and global database. Method profiles are introduced as a visual means to compare individual projects or categories of methods against the CPP. Although the techniques are general and could in principle be applied to traditional programming languages, the focus of the paper is on object oriented languages using Java. The techniques are employed in a suite of 17 metrics in a body of circa thirty thousand Java methods",2000,0, 1678,A maintainability model for industrial software systems using design level metrics,"Software maintenance is a time consuming and expensive phase of a software product's life-cycle. The paper investigates the use of software design metrics to statistically estimate the maintainability of large software systems, and to identify error prone modules.
A methodology for assessing, evaluating, and selecting software metrics for predicting software maintainability is presented. In addition, a linear prediction model based on a minimal set of design level software metrics is proposed. The model is evaluated by applying it to industrial software systems",2000,0, 1679,"An adaptive PMU based fault detection/location technique for transmission lines. II. PMU implementation and performance evaluation","Part I of this paper set sets forth theory and algorithms for adaptive fault detection/location technique, which is based on phasor measurement unit (PMU). This paper is Part II of this paper set. A new timing device named ""Global Synchronism Clock Generator, GSCG"", including its hardware and software design, is described in this paper. Experimental results show that the synchronized error of rising edge between the two GSCGs clock is well within 1 ps when the clock frequency is below 2.499 MHz. The measurement results between Chung-Jeng and Chang-Te 161 kV substations of Taiwan Power company by PMU equipped with GSCG is presented and the accuracy for estimating parameters of line is verified. The newly developed DFT based method (termed as smart discrete Fourier transform, SDFT) and line parameter estimation algorithm are combined with PMU configuration to form the adaptive fault detector/locator system. Simulation results have shown that SDFT method can extract exact phasors in the presence of frequency deviation and harmonics. The parameter estimation algorithm can also trace exact parameters very well. The SDFT method and parameter estimation algorithm can achieve accuracies of up to 99.999% and 99.99%, respectively. The EMTP is used to simulate a 345 kV transmission line of Taipower System. Results have shown that the proposed technique yields correct results independent of fault types and is insensitive to the variation of source impedance, fault impedance and line loading. The accuracy of fault location estimation achieved can be up to 99.9% for many simulated cases. The proposed technique will be very suitable for implementation in an integrated digital protection and control system for transmission substations",2000,0, 1680,Detecting a network failure,"Measuring the properties of a large, unstructured network can be difficult: one may not have full knowledge of the network topology, and detailed global measurements may be infeasible. A valuable approach to such problems is to take measurements from selected locations within the network and then aggregate them to infer large-scale properties. One sees this notion applied in settings that range from Internet topology discovery tools to remote software agents that estimate the download times of popular Web pages. Some of the most basic questions about this type of approach, however, are largely unresolved at an analytical level. How reliable are the results? How much does the choice of measurement locations affect the aggregate information one infers about the network? We describe algorithms that yield provable guarantees for a particular problem of this type: detecting a network failure. Suppose we want to detect events of the following form: an adversary destroys up to k nodes or edges, after which two subsets of the nodes, each at least an ε fraction of the network, are disconnected from one another. We call such an event an (ε,k) partition.
One method for detecting such events would be to place ""agents"" at a set D of nodes, and record a fault whenever two of them become separated from each other. To be a good detection set, D should become disconnected whenever there is an (ε,k)-partition; in this way, it ""witnesses"" all such events. We show that every graph has a detection set of size polynomial in k and ε^-1, and independent of the size of the graph itself. Moreover, random sampling provides an effective way to construct such a set. Our analysis establishes a connection between graph separators and the notion of VC-dimension, using techniques based on matchings and disjoint paths",2000,0, 1681,Monitoring behavior in home using a smart fall sensor and position sensors,"The authors developed a system for remotely monitoring human behavior during daily life at home, to improve the security and the quality of life. The activity was monitored through infrared position sensors and magnetic switches. For the falls detection the authors had to develop a smart sensor. The local communications were performed using RF wireless links to reduce the cabling and to allow mobility of the person. An application software performs data exploitation locally but it also performs remote data transmission through the network. This project aims at expanding the telecare solution to a larger population of elderly people who are presently forced to live in hospices",2000,0, 1682,A domain coverage metric for the validation of behavioral VHDL descriptions,"During functional validation, corner cases and boundary conditions are known to be difficult to verify. We propose a functional fault coverage metric for behavioral VHDL descriptions which evaluates the coverage of faults which cause the behavior to execute an incorrect functional domain. Although domain faults are known to be a source of design errors, they are not targeted explicitly by previous work in functional validation. We propose an efficient method to compute domain coverage in VHDL descriptions and we generate domain coverage results for several benchmark VHDL circuits for comparison to other approaches",2000,0, 1683,Computer-aided fault to defect mapping (CAFDM) for defect diagnosis,"Defect diagnosis in random logic is currently done using the stuck-at fault model, while most defects seen in manufacturing result in bridging faults. In this work we use physical design and test failure information combined with bridging and stuck-at fault models to localize defects in random logic. We term this approach computer-aided fault to defect mapping (CAFDM). We build on top of the existing mature stuck-at diagnosis infrastructure. The performance of the CAFDM software was tested by injecting bridging faults into samples of a Streaming audio controller chip and comparing the predicted defect locations and layers with the actual values. The correct defect location and layer was predicted in all 9 samples for which scan-based diagnosis could be performed. The experiment was repeated on production samples that failed scan test, with promising results",2000,0, 1684,How to predict software defect density during proposal phase,"The author has developed a method to predict defect density based on empirical data. The author has evaluated the software development practices of 45 software organizations. Of those, 17 had complete actual observed defect density to correspond to the observed development practices. The author presents the correlation between these practices and defect density in this paper.
This correlation can and is used to: (a) predict defect density as early as the proposal phase, (b) evaluate proposals from subcontractors, (c) perform tradeoffs so as to minimize software defect density. It is found that as practices improve, defect density decreases. Contrary to what many software engineers claim, the average probability of a late delivery is less on average for organizations with better practices. Furthermore, the margin of error in the event that a schedule is missed was smaller on average for organizations with better practices. It is also interesting that the average number of corrective action releases required is also smaller for the organizations with the best practices. This means less downtime for customers. It is not surprising that the average SEI CMM level is higher for the organizations with the better practices",2000,0, 1685,Empirically guided software effort guesstimation,"Project LEAP (lightweight, empirical, antimeasurement dysfunction, and portable toolkit) is investigating tools and methods to support low-cost, empirically based software developer improvement. LEAP contains tools to simplify the collection of size and effort data. The collected data serves as input to a set of estimation tools. These tools can produce over a dozen analytical estimates of the effort required for a new project given an estimate of its size. To do this, they use various estimation methods such as linear, logarithmic, or exponential regressions. During project planning, the developer can review these estimates and select one of them or substitute a guesstimate based on his or her experience. The authors' study provides evidence that guesstimates, when informed by low-cost analytical methods, might be the most accurate method",2000,0, 1686,Web development: estimating quick-to-market software,"Developers can use this new sizing metric called Web Objects and an adaptation of the Cocomo II model called WebMo to more accurately estimate Web based software development effort and duration. Based on work with over 40 projects, these estimation tools are especially useful for quick-to-market development efforts",2000,0, 1687,Constructions of behaviour observation schemes in software testing,"Software testing is a process in which a software system's dynamic behaviours are observed and analysed so that the system's properties can be inferred from the information revealed by test executions. While the existing theories of software testing might be adequate in describing the testing of sequential systems, they are not capable to describe the testing of concurrent systems that can exhibit different behaviours on the same test case due to non-determinism and concurrency. In our previous work, we proposed a theoretical framework that provides a unified foundation to study all testing methods for both sequential and concurrent systems. The main results of the framework include a formal definition of the notion and a set of the desirable properties of an observation scheme. In this paper, we provide several constructions of observation schemes that have direct implications in current software testing practice. We also study the properties of the constructions",2000,0, 1688,Determining the expected time to unsafe failure,"The number of applications requiring highly reliable and/or safety-critical computing is increasing. One emerging safety metric is the Mean Time To Unsafe Failure (MTTUF). This paper summarizes a novel technique for determining the MTTUF for a given architecture. 
The first step in determining the MTTUF for a system is to estimate system Mean Time To Failure (MTTF) and system fault coverage. Once these two parameters are known then the system MTTUF can be calculated. The presented technique allows MTTF and system coverage to be estimated from dependability models that incorporate time varying failure and/or repair rates. Existing techniques for the estimation of MTTUF require constant rate dependability models. For the sake of simplicity, this paper uses Markov models to calculate MTTUF. The presented approach greatly simplifies the calculation of system MTTUF. Finally a comparison is made between reliability expected time metrics (MTTF and MTBF) and safety expected time metrics (MTTUF and MTBUF)",2000,0, 1689,A high-assurance measurement repository system,"High-quality measurement data are very useful for assessing the efficacy of high-assurance system engineering techniques and tools. Given the rapidly evolving suite of modern tools and techniques, it is helpful to have a large repository of up-to-date measurement data that can be used to quantitatively assess the impact of state-of-the-art techniques on the quality of the resulting systems. For many types of defects, including Y2K failures, infinite loops, memory overflow, access violations, arithmetic overflow, divide-by-zero, off-by-one errors, timing errors, deadlocks, etc., it may be possible to combine data from a large number of projects and use these to make statistical inferences. This paper presents a highly secure and reliable measurement repository system for measurement data acquisition, storage and analysis. The system is being used by the QuEST Forum, which is a new industry forum consisting of over 100 leading telecommunications companies. The paper describes the decisions that were made in the design of the measurement repository system, as well as implementation strategies that were used in achieving a high-level of confidence in the security and reliability of the system",2000,0, 1690,Prediction of software faults using fuzzy nonlinear regression modeling,"Software quality models can predict the risk of faults in modules early enough for cost-effective prevention of problems. This paper introduces the fuzzy nonlinear regression (FNR) modeling technique as a method for predicting fault ranges in software modules. FNR modeling differs from classical linear regression in that the output of an FNR model is a fuzzy number. Predicting the exact number of faults in each program module is often not necessary. The FNR model can predict the interval that the number of faults of each module falls into with a certain probability. A case study of a full-scale industrial software system was used to illustrate the usefulness of FNR modeling. This case study included four historical software releases. The first release's data were used to build the FNR model, while the remaining three releases' data were used to evaluate the model. We found that FNR modeling gives useful results",2000,0, 1691,An exception handling software architecture for developing fault-tolerant software,"Fault-tolerant object-oriented software systems are inherently complex and have to cope with an increasing number of exceptional conditions in order to meet the system's dependability requirements. This work proposes a software architecture which integrates uniformly both concurrent and sequential exception handling. 
The exception handling architecture is independent of programming language or exception handling mechanism, and its use can minimize the complexity caused by the handling of abnormal behavior. Our architecture provides, during the architectural design stage, the context in which more detailed design decisions related to exception handling are made in later development stages. This work also presents a set of design patterns which describes the static and dynamic aspects of the components of our software architecture. The patterns allow a clear separation of concerns between the system's functionality and the exception handling facilities, applying the computational reflection technique",2000,0, 1692,Bayesian framework for reliability assurance of a deployed safety critical system,"The existence of software faults in safety-critical systems is not tolerable. Goals of software reliability assessment are estimating the failure probability of the program, θ, and gaining statistical confidence that θ is realistic. While in most cases reliability assessment is performed prior to the deployment of the system, there are circumstances when reliability assessment is needed in the process of (re)evaluation of the fielded (deployed) system. Post deployment reliability assessment provides reassurance that the expected dependability characteristics of the system have been achieved. It may be used as a basis of the recommendation for maintenance and further improvement, or the recommendation to discontinue the use of the system. The paper presents practical problems and challenges encountered in an effort to assess and quantify software reliability of NASA's Day-of-Launch I-Load Update (DOLILU II) system. DOLILU II system has been in operational use for several years. A Bayesian framework is chosen for reliability assessment, because it allows incorporation of (in this specific case failure free) program executions observed in the operational environment. Furthermore, we outline the development of a probabilistic framework that allows accounting of rigorous verification and validation activities performed prior to a system's deployment into the reliability assessment",2000,0, 1693,Perception-based fast rendering and antialiasing of walkthrough sequences,"We consider accelerated rendering of high quality walkthrough animation sequences along predefined paths. To improve rendering performance, we use a combination of a hybrid ray tracing and image-based rendering (IBR) technique and a novel perception-based antialiasing technique. In our rendering solution, we derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatiotemporal animation quality metric (AQM) is used to automatically guide such a hybrid rendering. The image flow (IF) obtained as a byproduct of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatiotemporal antialiasing which utilizes the IF to perform a motion-compensated filtering. The filter parameters have been tuned using the AQM predictions of animation quality as perceived by the human observer. These parameters adapt locally to the visual pattern velocity",2000,0, 1694,Computer aided design of fault-tolerant application specific programmable processors,"Application Specific Programmable Processors (ASPP) provide efficient implementation for any of m specified functionalities.
Due to their flexibility and convenient performance-cost trade-offs, ASPPs are being developed by DSP, video, multimedia, and embedded IC manufacturers. In this paper, we present two low-cost approaches to graceful degradation-based permanent fault tolerance of ASPPs. ASPP fault tolerance constraints are incorporated during scheduling, allocation, and assignment phases of behavioral synthesis: Graceful degradation is supported by implementing multiple schedules of the ASPP applications, each with a different throughput constraint. In this paper, we do not consider concurrent error detection. The first ASPP fault tolerance technique minimizes the hardware resources while guaranteeing that the ASPP remains operational in the presence of all k-unit faults. On the other hand, the second fault tolerance technique maximizes the ASPP fault tolerance subject to constraints on the hardware resources. These ASPP fault tolerance techniques impose several unique tasks, such as fault-tolerant scheduling, hardware allocation, and application-to-faulty-unit assignment. We address each of them and demonstrate the effectiveness of the overall approach, the synthesis algorithms, and software implementations on a number of industrial-strength designs.",2000,0, 1695,Constructing real-time group communication middleware using the Resource Kernel,"Group communication is a widely studied paradigm which is often used in building real-time and fault-tolerant distributed systems. RTCAST is a real-time group communication protocol which has been designed to work with commercial, non-real-time, off-the-shelf hardware and operating systems, such as Solaris, Linux and Windows NT. RTCAST makes probabilistic real-time guarantees based on assumptions about the performance of the underlying system. Unfortunately, the high variability of the access to system resources that these operating systems provide may limit the predictability of the real-time guarantees provided by RTCAST. By taking advantage of a service that provides resource scheduling and reservation in these operating systems, both the hardness and timing granularity of RTCAST's real-time services can be greatly improved. This paper describes an implementation of RTCAST which makes use of the Resource Kernel to provide highly predictable, real-time communication guarantees",2000,0, 1696,Implementation and performance evaluation of a real-time e-brokerage system,"Timeliness is an important attribute for e-brokerage both in the detection of opportunities in a narrow time window and also in facilitating differentiation of end-to-end quality of service (QoS). In this paper, we demonstrate how real-time event monitoring techniques can be applied to time-critical e-brokerage. We start with a formal timed event model which provides the semantics for specifying complex timing correlation rules in composite events. The formal event model of the system is based on RTL (Real Time Logic), which is important for disambiguating informal specifications when multiple instances of the same event type may appear in a timing correlation rule involving composite events. This leads to our design of an e-brokerage, online stock monitoring and alerting system which enables users to express their complex preferences and provides an alert service to users in a timely manner. The user's preferences are monitored by a real-time event monitor. We report the design of this system and some performance data characterizing some key timing parameters.
Our system is being used by a class of MBA students in a field test",2000,0, 1697,Practical applications of statistical process control [in software development projects],Applying quantitative methods such as statistical process control (SPC) to software development projects can provide a positive cost-benefit return. The authors used SPC on inspection and test data to assess product quality during testing and to predict post-shipment product quality for a major software release,2000,0, 1698,Eliminating annotations by automatic flow analysis of real-time programs,"There is an increasing demand for methods that calculate the worst case execution time (WCET) of real time programs. The calculations are typically based on path information for the program, such as the maximum number of iterations in loops and identification of infeasible paths. Most often, this information is given as manual annotations by the programmer. Our method calculates path information automatically for real time programs, thereby relieving the programmer from tedious and error-prone work. The method, based on abstract interpretation, generates a safe approximation of the path information. A trade-off between quality and calculation cost is made, since finding the exact information is a complex, often intractable problem for nontrivial programs. We describe the method by a simple, worked example. We show that our prototype tool is capable of analyzing a number of program examples from the WCET literature, without using any extra information or consideration of special cases needed in other approaches",2000,0, 1699,Problem solving skills,"This paper describes a three-year experiment to determine whether student performance can be improved by making their performance visible in quantitative ways and assisting them to reflect on deviations between desired and actual performance. The experiment focused on improving effort estimation for planning, improving effort allocation over the period of performance, and reducing the recurrence of previously reported defects in subsequent assignments. To accomplish this, a Web-based tool was implemented and used to capture numerous student performance parameters as well as planning rationales, postmortem reflections, and feedback to the students",2000,0, 1700,Bloodshot eyes: workload issues in computer science project courses,"Workload issues in computer science project courses are addressed. We briefly discuss why high workloads occur in project courses and the reasons they are a problem. We then describe some course changes we made to reduce the workload in a software engineering project course, without compromising course quality. The techniques include: adopting an iterative and incremental process, reducing the requirements for writing documents, and gathering accurate data on time spent on various activities. We conclude by assessing the techniques, providing good evidence for a dramatic change in the workload, and an increase in student satisfaction levels. We provide some evidence, and an argument, that learning has not been affected by the changes",2000,0, 1701,Evaluation of inspectors' defect estimation accuracy for a requirements document after individual inspection,"Project managers need timely feedback on the quality of development products to monitor and control project progress. Inspection is an effective method to identify defects and to measure product quality. 
Objective and subjective models can be used to estimate the total number of defects in a product based on defect data from inspection. This paper reports on a controlled experiment to evaluate the accuracy of individual subjective estimates of developers, who had just before inspected the document, on the number of defects in a software requirements specification. In the experiment most inspectors underestimated the total number of defects in the document. The number of defects reported and the number of (major) reference defects found were identified as factors that separated groups of inspectors who over- or underestimated on average",2000,0, 1702,Analysis of the impact of reading technique and inspector capability on individual inspection performance,"Inspection of software documents is an effective quality assurance measure to detect defects in the early stages of software development. It can provide timely feedback on product quality to both developers and managers. This paper reports on a controlled experiment that investigated the influence of reading techniques and inspector capability on individual effectiveness to find given sets of defects in a requirements specification document. Experimental results support the hypothesis that reading techniques can direct inspectors' attention towards inspection targets, i.e. on specific document parts or severity levels, which enables inspection planners to divide the inspection work among several inspectors. Further, they suggest a tradeoff between specific and general detection effectiveness regarding document coverage and inspection effort. Inspector capability plays a significant role in inspection performance, while the size of the effect varies with the reading technique employed and the inspected document part",2000,0, 1703,Predicting class libraries interface evolution: an investigation into machine learning approaches,"Managing the evolution of an OO system constitutes a complex and resource-consuming task. This is particularly true for reusable class libraries since the user interface must be preserved for version compatibility. Thus, the symptomatic detection of potential instabilities during the design phase of such libraries may help avoid later problems. This paper introduces a fuzzy logic-based approach for evaluating the stability of a reusable class library interface, using structural metrics as stability indicators. To evaluate this new approach, we conducted a preliminary study on a set of commercial C++ class libraries. The obtained results are very promising when compared to those of two classical machine learning approaches, top down induction of decision trees and Bayesian classifiers",2000,0, 1704,Advanced assessment of the power quality events,"This paper introduces a new concept of advanced power quality assessment. The concept is implemented using the software for event detection, classification and characterization. The role of the modeling and simulation in the power quality assessment is also discussed. The use of the field recorded data and simulated data in the equipment performance analysis is outlined. All the mentioned approaches are suggested for power quality assessment use when setting up and verifying power quality contracts",2000,0, 1705,Strategies for optimizing image processing by genetic and evolutionary computation,"We examine the results of major previous attempts to apply genetic and evolutionary computation (GEC) to image processing. 
In many problems, the accuracy (quality) of solutions obtained by GEC-based methods is better than that obtained by other methods such as neural networks and simulated annealing. However the computation time required is satisfactory in some problems, whereas it is unsatisfactory in other problems. We consider the current problems of GEC-based methods and present the following measures to achieve still better performance: (1) utilizing competent GEC, (2) incorporating other search algorithms such as local hill climbing algorithms, (3) hybridizing with conventional image processing algorithms; (4) modeling the given problem with as smaller parameters as possible, and (5) using parallel processors to evaluate the fitness function",2000,0, 1706,Implementation and evaluation for dependable bus control using CPLD,"Bus systems are used in computers as essential architecture, and dependability of bus systems should be accomplished reasonably for various applications. In this paper, we will present dependable bus operations with actual implementation and evaluation by CPLD. Most of the bus systems control transition of some classified phases with synchronous clock or guard time to avoid incorrect phase transition. However, these phase control methods may degrade system performance or cause incorrect operations. We design an asynchronous sequential circuit for bus phase control without clock or guard time. This circuit prevents incorrect phase transition at the time when large input delay or erroneous input occurs. We estimate probability of incorrect phase transition with single stuck-at fault on input signals. From the result of estimation, we also design checking system verifying outputs of initiator and target devices. Incorrect phase transition with single stuck-at fault occurred between both sequential circuits is inhibited completely by implementation of the system",2000,0, 1707,Winner take all experts network for sensor validation,"The validation of sensor measurements has become an integral part of the operation and control of modern industrial equipment. The sensor under a harsh environment must be shown to consistently provide the correct measurements. Analysis of the validation hardware or software should trigger an alarm when the sensor signals deviate appreciably from the correct values. Neural network based models can be used to estimate critical sensor values when neighboring sensor measurements are used as inputs. The discrepancy between the measured and predicted sensor values may then be used as an indicator for sensor health. The proposed winner take all experts (WTAE) network is based on a `divide and conquer' strategy. It employs a growing fuzzy clustering algorithm to divide a complicated problem into a series of simpler sub-problems and assigns an expert to each of them locally. After the sensor approximation, the outputs from the estimator and the real sensor value are compared both in the time domain and the frequency domain. Three fault indicators are used to provide analytical redundancy to detect the sensor failure. In the decision stage, the intersection of three fuzzy sets accomplishes a decision level fusion, which indicates the confidence level of the sensor health. Two data sets, the Spectra Quest Machinery Fault Simulator data set and the Westland vibration data set, were used in simulations to demonstrate the performance of the proposed WTAE network. 
The simulation results show the proposed WTAE is competitive with or even superior to the existing approaches",2000,0, 1708,Effort measurement in student software engineering projects,"Teaching software engineering by means of student involvement in the team development of a product is the most effective way to teach the main issues of software engineering. Some of its difficulties are those of coordinating their work, measuring the time spent by the students (both in individual work and in meetings) and making sure that meeting time will not be excessive. Starting in the academic year 1998/1999, we assessed, improved and documented the development process for the student projects and found that measurement is one of the outstanding issues to be considered. Each week, the students report the time spent on the different project activities. We present and analyze the measurement results for our 16 student teams (each one with around 6 students). It is interesting to note that the time spent in meetings is usually too long, ranging from 46% in the requirements analysis phase to 21% in coding, mainly due to problems of coordination. Results from previous years are analyzed and presented to the following year's students for feedback. In the present year (2000), we have decreased the amount of time spent by the student doing group work, and improved the effectiveness and coordination of the teams",2000,0, 1709,Identification of the harmonic currents drawn by an institutional building: application of a stochastic approach,"Considering harmonics as a time-dependent stochastic process, this paper proposes prediction tools to identify both the stochastic and the time varying behaviour of harmonics. The randomly operating conditions of several nonlinear loads justify the use of a stochastic approach based on a continuous-time model. This approach is applied to characterise the amplitude of the harmonic currents on the secondary of a distribution transformer feeding an institutional building",2000,0, 1710,Automatic measurements of the radius of curvature of the cornea in Slit Lamp,"We have developed an automatic optical system attached to the Slit Lamp in order to provide automatic measurements of the radius of curvature of the cornea at low cost. The system consists of projecting a light ring as a target at the patient's cornea and the further analysis of the deformation of the target in order to obtain the radius of curvature as well as the axis of the associated astigmatism. The reflected image of the target is displayed in a PC's monitor and a dedicated developed software performs the analysis of the image, that provides the corneal keratometry. Also, the system is able to measure irregular astigmatism and the clinician is able to understand the behavior of the cornea's curvature radius in these regions by comparison of the graphical analysis of regular astigmatism that is generated by the software. There is no system commercially available which is able to measure irregular astigmatism, which is a very important feature for keratoconeous detection. The system is easy to use and it has a friendly software interface for the user. Measurements in volunteer patients have been made and the results that were obtained in the system, for the standard target ring, are in good agreement with commercial automatic and manual systems, presenting a correlation coefficient of 0,99347 and 0,97637, respectively regarding the radius of curvature and 0,96841 and 0,9568, respectively, regarding the axis. 
The system's precision is 0,005 mm for the curvature radius and 2° for the axis component",2000,0, 1711,Australian Snowy Mountains Hydro Scheme earthing system safety assessment,"The task of determining the condition of the earthing in the Upper Tumut generation system, was undertaken as part of the Snowy Mountains Hydro Electric Authority's (SMHEA) safety risk assessment and asset condition monitoring programme. The testing programme to ascertain performance under earth fault and lightning conditions had to overcome considerable physical difficulties as well as the restrictions of 'close' proximity injection loops. The application of software, test instrumentation and testing procedures developed within Australia in collaboration between Energy Australia, Newcastle University, and SMHEA, to obtain real solutions are described in this paper. Also discussed are condition assessment processes that complement the current injection testing programme. This paper also provides a summary of the minimum requirements of an earthing system injection test to satisfactorily assess the condition of complex electrical power system installations",2000,0, 1712,EMS-vision: recognition of intersections on unmarked road networks,"The ability to recognize intersections enables an autonomous vehicle to navigate on road networks for performing complex missions. The paper gives the geometry model for intersections applied and their interaction with active viewing direction control. Quality measures indicate to performance monitoring processes the reliability of the estimation results. The perception module is integrated in the EMS-Vision system. Results from autonomous turn-off maneuvers, conducted on unmarked campus roads are discussed",2000,0, 1713,"Curve evolution, boundary-value stochastic processes, the Mumford-Shah problem, and missing data applications","We present an estimation-theoretic approach to curve evolution for the Mumford-Shah problem. By viewing an active contour as the set of discontinuities in the Mumford-Shah problem, we may use the corresponding functional to determine gradient descent evolution equations to deform the active contour. In each gradient descent step, we solve a corresponding optimal estimation problem, connecting the Mumford-Shah functional and curve evolution with the theory of boundary-value stochastic processes. In employing the Mumford-Shah functional, our active contour model inherits its attractive ability to generate, in a coupled manner, both a smooth reconstruction and a segmentation of the image. Next, by generalizing the data fidelity term of the original Mumford-Shah functional to incorporate a spatially varying penalty, we extend our method to problems in which data quality varies across the image and to images in which sets of pixel measurements are missing. This more general model leads us to a novel PDE-based approach for simultaneous image magnification, segmentation, and smoothing, thereby extending the traditional applications of the Mumford-Shah functional which only considers simultaneous segmentation and smoothing",2000,0, 1714,Well-defined intended uses: an explicit requirement for accreditation of modeling and simulation applications,A modeling and simulation (M&S) application is built for a specific purpose and its acceptability assessment is carried out with respect to that purpose. The accreditation decision for an M&S application is also made with respect to that purpose. The purpose is commonly expressed in terms of intended uses.
The quality of expressing the intended uses significantly affects the quality of the acceptability assessment as well as the quality of making the accreditation decision. The purpose of this paper is to provide guidance in proper definition of the intended uses. It uses an M&S application for simulating the U.S. National Missile Defense (NMD) system design as an example to illustrate the definition of the intended uses,2000,0, 1715,A new portable electronic device for single exposure half-value layer measurement,A new device to be used for half-value layer (HVL) measurements in quality assurance procedures is presented. This device uses a silicon X-ray sensor coupled to an electronic circuit that is connected to a portable computer (notebook) for data storage and analysis. HVL measurements can be performed for a single X-ray exposure and real-time results are presented on the notebook screen,2000,0, 1716,Technology assessment of commercially available critical area extraction tools,"In the 1990s, the semiconductor industry witnessed a philosophical change in the subject of modeling wafer final test yields. The calculation of average faults per chip has been historically calculated as the product of chip area and fault density, and often incorrectly referred to as defect density. For more than 20 years, several pioneering researchers, representing both academia and industry, have advocated a more accurate estimation of average faults per chip by using critical area, a better metric of chip sensitivity to defect mechanisms. The implementation of this concept requires sophisticated software tools that interrogate the physical design data. At the request of the member companies, International SEMATECH launched a study in 1999 of four commercially available critical area extraction (CAE) tools, with a primary objective of providing an independent technical assessment of capability, performance, accuracy, ease of use, features, and other distinguishing characteristics. Each of the tools were run on several product designs from Agilent Technologies and IBM Microelectronics. The CAE tools were all installed and evaluated at a common member company location, and each of the tool suppliers were visited, providing detailed visibility into the supplier's environment. This paper will review the evaluation methodology, and summarize the findings and results of this International SEMATECH sponsored study",2000,0, 1717,Mobile agents for personalized information retrieval: when are they a good idea?,Mobile agent technology has been proposed as an alternative to traditional client-server computing for personalized information retrieval by mobile and wireless users from fixed wired servers. We develop a very simplified analytical model that examines the claimed performance benefits of mobile agents over client-server computing for a mobile information retrieval scenario. Our evaluation of this simple model shows that mobile agents are not necessarily better than client-server calls in terms of average response times; they are only beneficial if the space overhead of the mobile agent code is not too large or if the wireless link connecting the mobile user to the fixed servers of the virtual enterprise is error-prone. 
We quantify the tradeoffs involved for a variety of scenarios and point out issues for further research",2000,0, 1718,"LACE frameworks and technique-identifying the legacy status of a business information system from the perspectives of its causes and effects","This paper first presents a definition of the concept `legacy status' with a three-dimensional model. It then discusses LACE frameworks and techniques, which can be used to assess legacy status from the cause and effects perspectives. A method of applying the LACE frameworks is shown and a technique with a mathematical model and metric so that the legacy status of a system can be calculated. This paper describes a novel and practical way to identify legacy status of a system, and has pointed out a new direction for research in this area",2000,0, 1719,Effective testing and debugging methods and its supporting system with program deltas,"In the maintenance phase of software development, it is necessary to check that all features still perform correctly after some changes have been applied to existing software. However, it is not easy to debug the software when a defect is found in those features which have not changed during the modification, even using a regression test. Existing approaches employ program deltas to specify defects; they have the limitation of it being hard to enact them, and they don't support any actual debugging activities. Moreover, such a system is hard to introduce into an actual environment. In this paper, we propose DMET (Debugging METhod) to solve such problems. DMET supports debugging activities when a defect is found by regression tests through detection, indication and reflection procedures. We also implement DSUS (Debugging SUpporting System) based on DMET. DSUS executes DMET procedures automatically, and it is easy to configure for an existing environment. Through experimentation with DMET and DSUS, we have confirmed that DMET/DSUS reduce the debugging time of software significantly. As a result, DMET/DSUS help in evolving the software for the software maintenance aspects",2000,0, 1720,Investigations on hydrogen in silicon by means of lifetime measurements,"Various techniques (SIMS, thermal effusion, FTIR) have been suggested for the determination of the diffusion of hydrogen in multicrystalline silicon. However these methods are either laborious or of a minor accuracy. Our work concentrates on the determination of hydrogen passivation depths in mc-Si by determining the minority carrier lifetime as a function of hydrogen passivation time. For the investigations EMC silicon and reference FZ silicon wafers have been used. From experimental data the passivation depth is obtained numerically using the simulation software PC1D and analytically using a simplified equation. For EMC, passivation depths in regions of good and poor quality have been obtained indicating no significant influence on the passivation depth. Further experiments by polishing the wafer prior to lifetime measurements with a small angle have been performed for determination of the SRV",2000,0, 1721,Efficient data broadcast scheme on wireless link errors,"As portable wireless computers become popular, mechanisms to transmit data to such users are of significant interest. Data broadcast is effective in dissemination-based applications to transfer the data to a large number of users in the asymmetric environment where the downstream communication capacity is relatively much greater than the upstream communication capacity.
Index based organization of data transmitted over wireless channels is very important to reduce power consumption. We consider an efficient (1:m) indexing scheme for data broadcast on unreliable wireless networks. We model the data broadcast mechanism on the error prone wireless networks, using the Markov model. We analyze the average access time to obtain the desired data item and find that the optimal index redundancy (m) is SQRT[Data/{Index*(1-p)^(-K)}], where p is the failure rate of the wireless link, Data is the size of the data in a broadcast cycle, Index is the size of index, and K is the index level. We also measure the performance of data broadcast schemes by parametric analysis",2000,0, 1722,A Gibbs-sampler approach to estimate the number of faults in a system using capture-recapture sampling [software reliability],"A new recapture debugging model is suggested to estimate the number of faults in a system, ν, and the failure intensity of each fault, φ. The Gibbs sampler and the Metropolis algorithm are used in this inference procedure. A numerical illustration suggests a notable improvement on the estimation of ν and φ compared with that of a removal debugging model",2000,0, 1723,A study on fault-proneness detection of object-oriented systems,"Fault-proneness detection in object-oriented systems is an interesting area for software companies and researchers. Several hundred metrics have been defined with the aim of measuring the different aspects of object-oriented systems. Only a few of them have been validated for fault detection, and several interesting works with this view have been considered. This paper reports a research study starting from the analysis of more than 200 different object-oriented metrics extracted from the literature with the aim of identifying suitable models for the detection of the fault-proneness of classes. Such a large number of metrics allows the extraction of a subset of them in order to obtain models that can be adopted for fault-proneness detection. To this end, the whole set of metrics has been classified on the basis of the measured aspect in order to reduce them to a manageable number; then, statistical techniques were employed to produce a hybrid model comprised of 12 metrics. The work has focused on identifying models that can detect as many faulty classes as possible and, at the same time, that are based on a manageably small set of metrics. A compromise between these aspects and the classification correctness of faulty and non-faulty classes was the main challenge of the research. As a result, two models for fault-proneness class detection have been obtained and validated",2001,1, 1724,The confounding effect of class size on the validity of object-oriented metrics,"Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. 
Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness",2001,1, 1725,Fuzzy logic traffic control in broadband communication networks,"Traffic predictions have been demonstrated with the capability to improve network efficiency and QoS in broadband ATM networks. Recent research shows that fuzzy logic prediction outperforms conventional autoregression predictions. The application of fuzzy logic also has a potential to control traffic more effectively. In this paper, we propose the use of the fuzzy logic prediction on connection admission control (CAC) and congestion control on high speed networks. We first modeled traffic characteristics using an on-line fuzzy logic predictor on CAC. Simulation results show that fuzzy logic prediction improves the efficiency of both conventional and measurement-based CAC. In addition, the measurement-based approach incorporating fuzzy logic inference and using fuzzy logic prediction is shown to achieve higher network utilization while maintaining QoS. We then applied the fuzzy logic predictor to congestion control in which the ABR queue is estimated one round-trip in advance. Simulation results show that the fuzzy logic control scheme significantly reduces convergence time and overall buffer requirements as compared with conventional schemes",2001,0, 1726,Neural network detection and identification of actuator faults in a pneumatic process control valve,"This paper establishes a scheme for detection and identification of actuator faults in a pneumatic process control valve using neural networks. First, experimental performance parameters related to the valve step responses, including dead time, rise time, overshoot, and the steady state error are obtained directly from a commercially available software package for a variety of faulty operating conditions. Acquiring training data in this way has eliminated the need for additional instrumentation of the valve. Next, the experimentally determined performance parameters are used to train a multilayer perceptron network to detect and identify incorrect supply pressure, actuator vent blockage and diaphragm leakage faults. The scheme presented here is novel in that it demonstrates that a pattern recognition approach to fault detection and identification, for pneumatic process control valves, using features of the valve step response alone, is possible.",2001,0, 1727,Broadband vector channel sounder for MIMO channel measurement,"3G and 4G mobile communication systems consider multiple antennas at both receiver and transmitter for reaching enhanced channel capacity at well defined quality of service. Perfect design of such systems requires profound knowledge of the radio channel, especially its time-variant characteristics in various radio environments. A precise characterization of the radio channel is given by the vector channel impulse response (VCIR). In multi input multiple output (MIMO) systems the VCIR is measured for each transmitting antenna to a configured RX antenna array. 
The time variance of the radio channel requires high speed recording of VCIR with respect to the maximum Doppler frequency. The presented vector channel sounder covers all MIMO aspects. Up to 256 MIMO channels can be configured by software. The broadband technique grants high time resolution. Superresolution techniques can be used for joint estimation of time delay, direction of arrival and direction of departure with respect to multipath propagation. The measurement data can be used directly (as stored radio channel) as well as indirectly (as derived channel model) for realistic link- and system-level simulation.",2001,0, 1728,Simulation and Evaluation of Optimization Problem Solutions in Distributed Energy Management Systems,"Deregulation in electricity markets requires fast and robust optimization tools for a secure and efficient operation of the electric power system. In addition, there is the need for integrating and coordinating operational decisions taken by different utilities acting in the same market. Distributed energy management systems (DEMS) may help to fulfill these requirements. The design of DEMS requires detailed simulation results for the evaluation of its performance. To simulate the operation of DEMS from the optimization standpoint, a general purpose distributed optimization software tool, DistOpt, is used, and its capabilities are extended to handle power system problems. The application to the optimal power flow problem is presented.",2001,0, 1729,A Novel Software Implementation Concept for Power Quality Study,"A novel concept for power quality study is proposed. The concept integrates the power system modeling, classifying, and characterizing of power quality events, studying equipment sensitivity to the event disturbance and locating the point of event occurrence into one unified frame. Both Fourier and wavelet analyses are applied for extracting distinct features of various types of events as well as for characterizing the events. A new fuzzy expert system for classifying power quality events based on such features is presented with improved performance over previous neural network-based methods. A novel simulation method is outlined for evaluating the operating characteristics of the equipment during specific events. A software prototype implementing the concept has been developed in MATLAB. The voltage sag event is taken as an example for illustrating the analysis methods and software implementation issues. It is concluded that the proposed approach is feasible and promising for real world applications.",2001,0, 1730,A Short-Circuit Current Study for the Power Supply System of Taiwan Railway,"The western Taiwan railway transportation system consists mainly of a mountain route and ocean route. Taiwan Railway Administration (TRA) has conducted a series of experiments on the ocean route in recent years to identify the possible causes of unknown events that cause the trolley contact wires to melt down frequently. The conducted tests include the short-circuit fault test within the power supply zone of the Ho Long Substation (Zhu Nan to Tong Xiao) that had the highest probability for the melt down events. Those test results, based on the actual measured maximum short-circuit current, provide a valuable reference for TRA when comparing against the said events. The Le Blanc transformer is the main transformer of the Taiwan railway electrification system. 
The Le Blanc transformer mainly transforms the Taiwan Power Company (TPC) generated three-phase alternating power supply system (69kV, 60Hz) into two single-phase alternating power distribution systems (M phase and T phase) (26kV, 60Hz) needed for the trolley traction. As a unique winding connection transformer, the conventional software for fault analysis will not be able to simulate its internal current and phase difference between each phase current. Therefore, besides extracts of the short-circuit test results, this work presents an EMTP model based on the Taiwan Railway Substation equivalent circuit model with a Le Blanc transformer. The proposed circuit model can simulate the same short-circuit test to verify the actual fault current and accuracy of the equivalent circuit model.",2001,0, 1731,Does code decay? Assessing the evidence from change management data,"A central feature of the evolution of large software systems is that change-which is necessary to add new functionality, accommodate new hardware, and repair faults-becomes increasingly difficult over time. We approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, that serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed",2001,0, 1732,System reliability analysis: the advantages of using analytical methods to analyze non-repairable systems,"Most of the system analysis software available on the market today employs the use of simulation methods for estimating the reliability of nonrepairable systems. Even though simulation methods are easy to apply and offer great versatility in modeling and analyzing complex systems, there are some limitations to their effectiveness. For example, if the number of simulations performed is not large enough, these methods can be error prone. In addition, performing a large number of simulations can be extremely time-consuming and simulation offers a small range of calculation results when compared to analytical methods. Analytical methods have been avoided due to their complexity in favor of the simplicity of using simulation. A software tool has been developed that calculates the exact analytical solution for the reliability of a system. Given the reliability equation for the system, further analyses on the system can be performed, such as computing exact values of the reliability, failure rate, at specific points in time, as well as computing the system MTTF (mean time to failure), and reliability importance measures for the components of the system. In addition, optimization and reliability allocation techniques can be utilized to aid engineers in their design improvement efforts. Finally, the time-consuming calculations and the non-repeatability issue of the simulation methodology are eliminated",2001,0, 1733,A methodology for operational profile refinement,"Numerous reliability models and testing practices are based on operational profiles. 
Typically, a single operational profile is used to represent the usage of a system with the assumption that a homogeneous customer base executes the system. However, if the customer base is heterogeneous, estimates computed on a single operational profile may be inaccurate. A single operational profile does not reflect the diverse customer patterns and it only ""averages"" the usage of the system, obscuring the real information about the operations probabilities. Decisions made on these estimates are likely to be biased and of limited usefulness. This paper presents a refinement methodology for the generation of more accurate operational profiles that truly represent the diverse customer usage patterns. Clustering analysis supports the refinement methodology for identifying groups of customers with similar characteristics. Empirical stopping rules and validation procedures complete the refining methodology. A complete example of the methodology is presented on a large application. The example evidences the different perspective and accuracy that can be obtained through this refining methodology",2001,0, 1734,Improvements in reliability-growth modeling,"The Duane model has been used for a number of years for graphically portraying reliability growth. There are, however, a number of inherent limitations which make it unsatisfactory for the regular monitoring of reliability growth progress. These limitations are explored and a new model is presented based on variance stabilizing transformation theory. This model retains the ease of use while avoiding the disadvantages of the Duane model. Computer simulations show that the new model provides a better fit to the data over the range of Duane slopes normally observed during a reliability growth program. Computer simulations also show that the new model provides higher values of instantaneous mean time between failures (MTBF) than the Duane model. Twelve published software reliability datasets are used to illustrate the use and application of the new model. The performance of both the Duane and the new model is compared for these software reliability datasets. The results of the performance comparison are consistent with the computer simulations",2001,0, 1735,Product performance-evaluation using Monte Carlo simulation: a case study,"Product performance evaluation in the room and in the extreme environmental conditions is one of the most important design engineering activities. Is the product designed to operate within the specified tolerances over the specified environment and are the product performance parameters optimized so that the effects of the environmental stresses are minimized? These are the questions that a design engineer has to answer before any new product is released to the market. Therefore, environmental testing and monitoring of the product outputs, intended design performance and other product design parameters need to be performed. Such tests may include a significant number of samples, may be time consuming and very costly. Consequently, environmental tests should be designed and planned very carefully and the data analysis methodologies should require the minimum resources for data collection. The proposed methodology is based on the idea of the product performance modeling and monitoring the performance degradation over the range of the environmental stresses. 
It is very flexible and is applicable in all situations when a product's performance can be modeled with linear or nonlinear functions, whose parameters may be random, correlated and stress dependent. It provides the tools for performance stress evaluation and reliability estimation. The methodology is presented through the following case study",2001,0, 1736,"Code coverage, what does it mean in terms of quality?","Unit code test coverage has long been known to be an important metric for testing software, and many development groups require 85% coverage to achieve quality targets. Assume we have a test, T1 which has 100% code coverage and it detects a set of defects, D1. The question, which is answered here, is ""What percentage of the defects in D1 will be detected if a random subset of the tests in T1 are applied to the code, which has code coverage of X% of the code?"" The purpose of this paper is to show the relation between code quality and code coverage. The relationship is derived via a model of code defect levels. A sampling technique is employed and modeled with the hypergeometric distribution while assuming uniform probability and a random distribution of defects in the code, which invokes the binomial distribution. The result of this analysis is a simple relation between defect level and quality of the code delivered after the unit code is tested. This model results in the rethinking of the use of unit code test metrics and the use of support tools",2001,0, 1737,Probabilistic communication optimizations and parallelization for distributed-memory systems,"In high-performance systems execution time is of crucial importance justifying advanced optimization techniques. Traditionally, optimization is based on static program analysis. The quality of program optimizations, however, can be substantially improved by utilizing runtime information. Probabilistic data-flow frameworks compute the probability with which data-flow facts may hold at some program point based on representative profile runs. Advanced optimizations can use this information in order to produce highly efficient code. In this paper we introduce a novel optimization technique in the context of High Performance Fortran (HPF) that is based on probabilistic data-flow information. We consider statically undefined attributes which play an important role for parallelization and compute for those attributes the probabilities to hold some specific value during runtime. For the most probable attribute values highly-optimized, specialized code is generated. In this way significantly better performance results can be achieved. The implementation of our optimization is done in the context of VFC, a source-to-source parallelizing compiler for HPF/F90",2001,0, 1738,A weighted distribution of residual failure data using Weibull model,"In analyzing the distribution of residual failure data using the Weibull distribution from the first detection of a failure to the completion of repairing failures, we have proposed the method of applying a weight to each failure. To put it concretely, we have suggested the method of applying a weight according to the importance of each failure within the analytic area in order to manage and measure efficiently the failure data found in the course of confirmation, verification, debugging, adding, and repairing functions collectively. 
The procedure is carried out after constituting a model switching system and installing a synthesis software package on the HANbit ACE ATM switching system under the development phase, which is the main node in building the B-ISDN for multimedia services",2001,0, 1739,Framework of end-to-end performance measurement and analysis system for Internet applications,"The Internet provides only the best-effort service for users. There is no guaranteed QoS service for end users. In general, an end-to-end path of the Internet is changed by the bandwidth, delay and packet loss by time. Therefore, the measurement of the end-to-end performance is a difficult task for users who need to establish the guaranteed path for critical applications. This paper proposes a framework for measurement, analysis and application program interface to estimate the end-to-end performance for critical applications. The proposed framework is based on the combination of the historical data for previous measurements and the real-time data for current measurement. The framework is effective for applications which need to know the end-to-end performance. The preliminary measurement in this paper has been done to verify the method",2001,0, 1740,A real-time hardware fault detector using an artificial neural network for distance protection,"A real-time fault detector for the distance protection application, based on artificial neural networks, is described. Previous researchers in this field report use of complex filters and artificial neural networks with large structure or long training times. An optimum neural network structure with a short training time is presented. Hardware implementation of the neural network is addressed with a view to improve the performance in terms of speed of operation. By having a smaller network structure the hardware complexity of implementation reduces considerably. Two preprocessors are described for the distance protection application which enhance the training performance of the artificial neural network many fold. The preprocessors also enable real-time functioning of the artificial neural network for the distance protection application. Design of an object oriented software simulator, which was developed to identify the hardware complexity of implementation, and the results of the analysis are discussed. The hardware implementation aspects of the preprocessors and of the neural network are briefly discussed",2001,0, 1741,Design of multi-invariant data structures for robust shared accesses in multiprocessor systems,"Multiprocessor systems are widely used in many application programs to enhance system reliability and performance. However, reliability does not come naturally with multiple processors. We develop a multi-invariant data structure approach to ensure efficient and robust access to shared data structures in multiprocessor systems. Essentially, the data structure is designed to satisfy two invariants, a strong invariant, and a weak invariant. The system operates at its peak performance when the strong invariant is true. The system will operate correctly even when only the weak invariant is true, though perhaps at a lower performance level. The design ensures that the weak invariant will always be true in spite of fail-stop processor failures during the execution. By allowing the system to converge to a state satisfying only the weak invariant, the overhead for incorporating fault tolerance can be reduced. We present the basic idea of multi-invariant data structures. 
We also develop design rules that systematically convert fault-intolerant data abstractions into corresponding fault-tolerant versions. In this transformation, we augment the data structure and access algorithms to ensure that the system always converges to the weak invariant, even in the presence of fail-stop processor failures. We also design methods for the detection of integrity violations and for restoring the strong invariant. Two data structures, namely binary search tree and double-linked list, are used to illustrate the concept of multi-invariant data structures",2001,0, 1742,Empirical studies of a prediction model for regression test selection,"Regression testing is an important activity that can account for a large proportion of the cost of software maintenance. One approach to reducing the cost of regression testing is to employ a selective regression testing technique that: chooses a subset of a test suite that was used to test the software before the modifications; then uses this subset to test the modified software. Selective regression testing techniques reduce the cost of regression testing if the cost of selecting the subset from the test suite together with the cost of running the selected subset of test cases is less than the cost of rerunning the entire test suite. Rosenblum and Weyuker (1997) proposed coverage-based predictors for use in predicting the effectiveness of regression test selection strategies. Using the regression testing cost model of Leung and White (1989; 1990), Rosenblum and Weyuker demonstrated the applicability of these predictors by performing a case study involving 31 versions of the KornShell. To further investigate the applicability of the Rosenblum-Weyuker (RW) predictor, additional empirical studies have been performed. The RW predictor was applied to a number of subjects, using two different selective regression testing tools, Deja vu and TestTube. These studies support two conclusions. First, they show that there is some variability in the success with which the predictors work and second, they suggest that these results can be improved by incorporating information about the distribution of modifications. It is shown how the RW prediction model can be improved to provide such an accounting",2001,0, 1743,An empirical study using task assignment patterns to improve the accuracy of software effort estimation,"In most software development organizations, there is seldom a one-to-one mapping between software developers and development tasks. It is frequently necessary to concurrently assign individuals to multiple tasks and to assign more than one individual to work cooperatively on a single task. A principal goal in making such assignments should be to minimize the effort required to complete each task. But what impact does the manner in which developers are assigned to tasks have on the effort requirements? This paper identifies four task assignment factors: team size, concurrency, intensity, and fragmentation. These four factors are shown to improve the predictive ability of the well-known intermediate COCOMO cost estimation model. A parsimonious effort estimation model is also derived that utilizes a subset of the task assignment factors and unadjusted function points. 
For the data examined, this parsimonious model is shown to have goodness of fit and quality of estimation superior to that of the COCOMO model, while utilizing fewer cost factors",2001,0, 1744,Experimental application of extended Kalman filtering for sensor validation,"A sensor failure detection and identification scheme for a closed loop nonlinear system is described. Detection and identification tasks are performed by estimating parameters directly related to potential failures. An extended Kalman filter is used to estimate the fault-related parameters, while a decision algorithm based on threshold logic processes the parameter estimates to detect possible failures. For a realistic evaluation of its performance, the detection scheme has been implemented on an inverted pendulum controlled by real-time control software. The failure detection and identification scheme is tested by applying different types of failures on the sensors of the inverted pendulum. Experimental results are presented to validate the effectiveness of the approach",2001,0, 1745,Evaluating the effect of inheritance on the modifiability of object-oriented business domain models,"The paper describes an experiment to assess the impact of inheritance on the modifiability of object oriented business domain models. This experiment is part of a research project on the quality determinants of early systems development artefacts, with a special focus on the maintainability of business domain models. Currently there is little empirical information about the relationship between the size, structural and behavioural properties of business domain models and their maintainability. The situation is different in object oriented software engineering where a number of experimental investigations into the maintainability of object oriented software have been conducted. The results of our experiment indicate that extensive use of inheritance leads to models that are more difficult to modify. These findings are in line with the conclusions drawn from three similar controlled experiments on inheritance and modifiability of object oriented software",2001,0, 1746,Prediction models for software fault correction effort,"We have developed a model to explain and predict the effort associated with changes made to software to correct faults while it is undergoing development. Since the effort data available for this study is ordinal in nature, ordinal response models are used to explain the effort in terms of measures of fault locality and the characteristics of the software components being changed. The calibrated ordinal response model is then applied to two projects not used in the calibration to examine predictive validity",2001,0, 1747,Assessing optimal software architecture maintainability,"Over the last decade, several authors have studied the maintainability of software architectures. In particular, the assessment of maintainability has received attention. However, even when one has a quantitative assessment of the maintainability of a software architecture, one still does not have any indication of the optimality of the software architecture with respect to this quality attribute. Typically, the software architect is supposed to judge the assessment result based on his or her personal experience. In this paper, we propose a technique for analysing the optimal maintainability of a software architecture based on a specified scenario profile. 
This technique allows software architects to analyse the maintainability of their software architecture with respect to the optimal maintainability. The technique is illustrated and evaluated using industrial cases",2001,0, 1748,A universal communication model for an automotive system integration platform,"In this paper, we present a virtual integration platform based design methodology for distributed automotive systems. The platform, built within the 'Virtual Component Co-Design' tool (VCC), provides the ability of distributing a given system functionality over an architecture so as to validate different solutions in terms of cost, safety requirements, and real-time constraints. The virtual platform constitutes the foundation for design decisions early in the development phase, therefore enabling decisive and competitive advantages in the development process. This paper focuses on one of the key-enablers of the methodology, the universal communication model (UCM). The UCM is defined at a level of abstraction that allows accurate estimates of the performance including the latencies over the bus network, and good simulation performance. In addition, due to the high level of reusability and parameterization of its components, it can be used as a framework for modeling the different communication protocols common in the automotive domain",2001,0, 1749,Full chip false timing path identification: applications to the PowerPCTM microprocessors,"Static timing analysis sets the industry standard in the design methodology of high speed/performance microprocessors to determine whether timing requirements have been met. Unfortunately, not all the paths identified using such analysis can be sensitized. This leads to a pessimistic estimation of the processor speed. Also, no amount of engineering effort spent on optimizing such paths can improve the timing performance of the chip. In the past we demonstrated initial results of how ATPG techniques can be used to identify false paths efficiently. Due to the gap between the physical design on which the static timing analysis of the chip is based and the test view on which the ATPG techniques are applied to identify false paths, in many cases only sections of some of the paths in the full-chip were analyzed in our initial results. In this paper, we will fully analyze all the timing paths using the ATPG techniques, thus overcoming the gap between the testing and timing analysis techniques. This enables us to do false path identification at the full-chip level of the circuit. Results of applying our technique to the second generation G4 PowerPCTM will be presented",2001,0, 1750,Assessment of true worst case circuit performance under interconnect parameter variations,"The complicated manufacturing processes dictate that process variations are unavoidable in today's VLSI products. Unlike device variations, which can be captured by worst/best case corner points, the effects of interconnect variations are context-dependent, which makes it difficult to capture the true worst-case timing performance. This paper discusses an efficient method to explore the extreme values of performance metrics and the specific parameters that will create these extreme performances. 
The described approach is based on an iterative search technique which facilitates its proper search direction by calculating an explicit analytical approximation model",2001,0, 1751,Memory hierarchy optimization of multimedia applications on programmable embedded cores,"Data memory hierarchy optimization and partitioning for a widely used multimedia application kernel known as the hierarchical motion estimation algorithm is undertaken, with the use of global loop and data-reuse transformations for three different embedded processor architecture models. Exhaustive exploration of the obtained results clarifies the effect of the transformations on power, area, and performance and also indicates a relation between the complexity of the application and the power savings obtained by this strategy. Furthermore, the significant contribution of the instruction memory even after the application of performance optimizations to the total power budget becomes evident and a methodology is introduced in order to reduce this component",2001,0, 1752,A controlled experiment to assess the effectiveness of inspection meetings,"Software inspection is one of the best practices for detecting and removing defects early in the software development process. In a software inspection, review is first performed individually and then by meeting as a team. In the last years, some empirical studies have shown that inspection meetings do not improve the effectiveness of the inspection process with respect to the number of true discovered defects. While group synergy allows inspectors to find some new defects, these meeting gains are offset by meeting losses, that is defects found by individuals but not reported as a team. We present a controlled experiment with more than one hundred undergraduate students who inspected software requirements documents as part of a university course. We compare the performance of nominal and real teams, and also investigate the reasons for meeting losses. Results show that nominal teams outperformed real teams, there were more meeting losses than meeting gains, and that most of the losses were defects found by only one individual in the inspection team",2001,0, 1753,Investigating the impact of reading techniques on the accuracy of different defect content estimation techniques,"Software inspections have established an impressive track record for early defect detection and correction. To increase their benefits, recent research efforts have focused on two different areas: systematic reading techniques and defect content estimation techniques. While reading techniques are to provide guidance for inspection participants on how to scrutinize a software artifact in a systematic manner, defect content estimation techniques aim at controlling and evaluating the inspection process by providing an estimate of the total number of defects in an inspected document. Although several empirical studies have been conducted to evaluate the accuracy of defect content estimation techniques, only few consider the reading approach as an influential factor. The authors examine the impact of two specific reading techniques: a scenario based reading technique and checklist based reading, on the accuracy of different defect content estimation techniques. The examination is based on data that were collected in a large experiment with students of the Vienna University of Technology. The results suggest that the choice of the reading technique has little impact on the accuracy of defect content estimation techniques. 
Although more empirical work is necessary to corroborate this finding, it implies that practitioners can use defect content estimation techniques without any consideration of their current reading technique",2001,0, 1754,Influence of team size and defect detection technique on inspection effectiveness,"Inspection team size and the set of defect detection techniques used by the team are major characteristics of the inspection design, which influences inspection effectiveness, benefit and cost. The authors focus on the inspection performance of a nominal, that is non-communicating team, similar to the situation of an inspection team after independent individual preparation. We propose a statistical model based on empirical data to calculate the expected values for the inspection effectiveness and effort of synthetic nominal teams. Further, we introduce an economic model to compute the inspection benefits, net gain, and return on investment. With these models we determine (a) the best mix of reading techniques (RTs) to maximize the average inspection performance for a given team size, (b) the optimal team size and RT mix for a given inspection time budget, and (c) the benefit of an additional inspector for a given team size. Main results of the investigation with data from a controlled experiment are: (a) benefits of an additional inspector for a given RT diminished quickly with growing team size, thus, above a given team size a mix of different RTs is more effective and has a higher net gain than using only one RT; (b) the cost-benefit model limits team size, since the diminishing gain of an additional inspector at some point is more than offset by his additional cost",2001,0, 1755,Understanding and measuring the sources of variation in the prioritization of regression test suites,"Test case prioritization techniques let testers order their test cases so that those with higher priority, according to some criterion, are executed earlier than those with lower priority. In previous work (1999, 2000), we examined a variety of prioritization techniques to determine their ability to improve the rate of fault detection of test suites. Our studies showed that the rate of fault detection of test suites could be significantly improved by using more powerful prioritization techniques. In addition, they indicated that rate of fault detection was closely associated with the target program. We also observed a large quantity of unexplained variance, indicating that other factors must be affecting prioritization effectiveness. These observations motivate the following research questions. (1) Are there factors other than the target program and the prioritization technique that consistently affect the rate of fault detection of test suites? (2) What metrics are most representative of each factor? (3) Can the consideration of additional factors lead to more efficient prioritization techniques? To address these questions, we performed a series of experiments exploring three factors: program structure, test suite composition and change characteristics. This paper reports the results and implications of those experiments",2001,0, 1756,A primer on object-oriented measurement,"Many object-oriented metrics have been proposed over the last decade. A few of these metrics have undergone some form of empirical validation, and some are actually being used by a number of organizations as part of an effort to manage software quality. 
There is evidence that using object-oriented metrics can be beneficial to conducting business and to the bottom line. This is encouraging, as estimates and prediction models using design metrics can be used to allocate maintenance resources and for obtaining assurances about software quality. The purpose of this paper is to present an overview of object-oriented metrics, their use and their (empirical) validation. The focus is on static class-level metrics, since these have been studied the most",2001,0, 1757,Controlling overfitting in software quality models: experiments with regression trees and classification,"In these days of ""faster, cheaper, better"" release cycles, software developers must focus enhancement efforts on those modules that need improvement the most. Predictions of which modules are likely to have faults during operations is an important tool to guide such improvement efforts during maintenance. Tree-based models are attractive because they readily model nonmonotonic relationships between a response variable and its predictors. However, tree-based models are vulnerable to overfitting, where the model reflects the structure of the training data set too closely. Even though a model appears to be accurate on training data, if overfitted it may be much less accurate when applied to a current data set. To account for the severe consequences of misclassifying fault-prone modules, our measure of overfitting is based on the expected costs of misclassification, rather than the total number of misclassifications. In this paper, we apply a regression-tree algorithm in the S-Plus system to the classification of software modules by the application of our classification rule that accounts for the preferred balance between misclassification rates. We conducted a case study of a very large legacy telecommunications system, and investigated two parameters of the regression-tree algorithm. We found that minimum deviance was strongly related to overfitting and can be used to control it, but the effect of minimum node size on overfitting is ambiguous",2001,0, 1758,Evaluating software degradation through entropy,"Software systems are affected by degradation as an effect of continuous change. Since late interventions are too onerous, software degradation should be detected early in the software lifetime. Software degradation is currently detected by using many different complexity metrics, but their use to monitor maintenance activities is costly. These metrics are difficult to interpret, because each emphasizes a particular aspect of degradation and the aspects shown by different metrics are not orthogonal. The purpose of our research is to measure the entropy of a software system to assess its degradation. In this paper, we partially validate the entropy class of metrics by a case study, replicated on successive releases of a set of software systems. The validity is shown through direct measures of software quality, such as the number of detected defects, the maintenance effort and the number of slipped defects",2001,0, 1759,Robustness and diagnosability of OO systems designed by contracts,"While there is a growing interest in component-based systems for industrial applications, little effort has so far been devoted to quality evaluation of these systems. This paper presents the definition of measures for two quality factors, namely robustness and ""diagnosability"" for the special case of object-oriented (OO) systems, for which the approach known as design-by-contract has been used. 
The main steps in constructing these measures are given, from informal definitions of the factors to be measured to the mathematical model of the measures. To fix the parameters, experimental studies have been conducted, essentially based on applying mutation analysis in the OO context. Several measures are presented that reveal and estimate the contribution of the contracts' quality and density to the overall quality of a system in terms of robustness and diagnosability",2001,0, 1760,Better validation in a world-wide development environment,"Increasingly, software projects are handled in a global and distributed project setup. Global software development, however, also challenges traditional techniques of software engineering, such as peer reviews or teamwork. In particular, validation activities during development, such as inspections, need to be adjusted to achieve results which are both efficient and effective. Effective teamwork and the coaching of engineers contribute highly to successful projects. In this article, we evaluate experiences with validation activities in a global setting within Alcatel's switching and routing business. We investigate three hypotheses related, respectively, to the effects of collocated inspections, intensive coaching and feature-oriented development teams on globally distributed projects. As all these activities mean initial investment compared to a standard process with scattered activities, the major validation criterion for the three hypotheses is cost reduction due to earlier defect detection and less defects introduced. The data is taken from a sample of over 60 international projects of various sizes, from which we have collected all types of software product and process metrics in the past four years",2001,0, 1761,Investigation of logistic regression as a discriminant of software quality,"Investigates the possibility that logistic regression functions (LRFs), when used in combination with Boolean discriminant functions (BDFs), which we had previously developed, would improve the quality classification ability of BDFs when used alone; this was found to be the case. When the union of a BDF and LRF was used to classify quality, the predictive accuracy of quality and inspection cost was improved over that of using either function alone for the Space Shuttle. Also, the LRFs proved useful for ranking the quality of modules in a build. The significance of these results is that very high-quality classification accuracy (1.25% error) can be obtained while reducing the inspection cost incurred in achieving high quality. This is particularly important for safety-critical systems. Because the methods are general and not particular to the Shuttle, they could be applied to other domains. A key part of the LRF development was a method for identifying the critical value (i.e. threshold) that could discriminate between high and low quality, and at the same time constrain the cost of inspection to a reasonable value",2001,0, 1762,"Measurement, prediction and risk analysis for Web applications","Accurate estimates of development effort play an important role in the successful management of larger Web development projects. By applying measurement principles to measure qualities of the applications and their development processes, feedback can be obtained to help understand, control and improve products and processes. 
The objective of this paper is to present a Web design and authoring prediction model based on a set of metrics which were collected using a case study evaluation (CSE). The paper is organised into three parts. Part I describes the CSE in which the metrics used in the prediction model were collected. These metrics were organised into five categories: effort metrics, structure metrics, complexity metrics, reuse metrics and size metrics. Part II presents the prediction model proposed, which was generated using a generalised linear model (GLM), and assesses its predictive power. Finally, part III investigates the use of the GLM as a framework for risk management",2001,0, 1763,A vector-based approach to software size measurement and effort estimation,"Software size is a fundamental product measure that can be used for assessment, prediction and improvement purposes. However, existing software size measures, such as function points, do not address the underlying problem complexity of software systems adequately. This can result in disproportional measures of software size for different types of systems. We propose a vector size measure (VSM) that incorporates both functionality and problem complexity in a balanced and orthogonal manner. The VSM is used as the input to a vector prediction model (VPM) which can be used to estimate development effort early in the software life-cycle. We theoretically validate the approach against a formal framework. We also empirically validate the approach with a pilot study. The results indicate that the approach provides a mechanism to measure the size of software systems, classify software systems, and estimate development effort early in the software life-cycle to within ±20% across a range of application types",2001,0, 1764,Aspect-oriented programming takes aim at software complexity,"As global digitalization and the size of applications expand at an exponential rate, software engineering's complexities are also growing. One feature of this complexity is the repetition of functionality throughout an application. An example of the problems this complexity causes occurs when programmers must change an oft-repeated feature for an updated or new version of an application. It is often difficult for programmers to find every instance of such a feature in millions of lines of code. Failing to do so, however, can introduce bugs. To address this issue, software researchers are developing methodologies based on a new programming element: the aspect. An aspect is a piece of code that describes a recurring property of a program. Applications can, of course, have multiple aspects. Aspects provide cross-cutting modularity. In other words, programmers can use aspects to create software modules for issues that cut across various parts of an application. Aspects have the potential to make programmers' work easier, less time-consuming and less error-prone. Proponents say aspects could also lead to less expensive applications, shorter upgrade cycles and software that is flexible and more customizable. A number of companies and universities are working on aspects or aspect-like concepts",2001,0, 1765,The application of neural networks to fuel processors for fuel-cell vehicles,"Passenger vehicles fueled by hydrocarbons or alcohols and powered by proton exchange membrane (PEM) fuel cells address world air quality and fuel supply concerns while avoiding hydrogen infrastructure and on-board storage problems. 
Reduction of the carbon monoxide concentration in the on-board fuel processor's hydrogen-rich gas by the preferential oxidizer (PrOx) under dynamic conditions is crucial to avoid poisoning of the PEM fuel cell's anode catalyst and thus malfunction of the fuel-cell vehicle. A dynamic control scheme is proposed for a single-stage tubular cooled PrOx that performs better than, but retains the reliability and ease of use of, conventional industrial controllers. The proposed hybrid control system contains a cerebellar model articulation controller artificial neural network in parallel with a conventional proportional-integral-derivative (PID) controller. A computer simulation of the preferential oxidation reactor was used to assess the abilities of the proposed controller and compare its performance to the performance of conventional controllers. Realistic input patterns were generated for the PrOx by using models of vehicle power demand and upstream fuel-processor components to convert the speed sequences in the Federal Urban Driving Schedule to PrOx inlet temperatures, concentrations, and flow rates. The proposed hybrid controller generalizes well to novel driving sequences after being trained on other driving sequences with similar or slower transients. Although it is similar to the PID in terms of software requirements and design effort, the hybrid controller performs significantly better than the PID in terms of hydrogen conversion setpoint regulation and PrOx outlet carbon monoxide reduction",2001,0, 1766,A dynamic group management framework for large-scale distributed event monitoring,"Distributed event monitoring is an important service for fault, performance and security management. Next generation event monitoring services are highly distributed and involve a large number of monitoring agents. In order to support scalable event monitoring, the monitoring agents use IP multicasting for disseminating events and control information. However, due to the dynamic nature of event detection and correlation in distributed monitoring, devising an efficient group management for agents organization and coordination becomes a challenging issue. This paper presents an adaptive group management framework that dynamically re-configures the group structures and membership assignments at run-time according to the event correlation requirements and allows for optimal delivery of multicast messages between the management entities. This framework provides techniques for solving agents' state synchronization, collision-free group allocation and agents bootstrap problems in distributed event monitoring. The presented framework has been implemented within a HiFi system which is a distributed hierarchical monitoring system",2001,0, 1767,Semi-active replication of SNMP objects in agent groups applied for fault management,"It is often useful to examine management information base (MIB) objects of a faulty agent in order to determine why it is faulty. This paper presents a new framework for semi-active replication of SNMP management objects in local area networks. The framework is based on groups of agents that communicate with each other using reliable multicast. A group of agents provides fault-tolerant object functionality. An SNMP service is proposed that allows replicated MIB objects of a faulty agent of a given group to be accessed through fault-free agents of that group. The presented framework allows the dynamic definition of agent groups, and management objects to be replicated in each group. 
A practical fault-tolerant tool for local area network fault management was implemented and is presented. The system employs SNMP agents that interact with a group communication tool. As an example, we show how the examination of TCP-related objects of faulty agents has been used in the fault diagnosis process. The impact of replication on network performance is evaluated",2001,0, 1768,Self-aware services: using Bayesian networks for detecting anomalies in Internet-based services,"We propose a general architecture and implementation for the autonomous assessment of the health of arbitrary service elements, as a necessary prerequisite to self-control. We describe a health engine, the central component of our proposed 'self-awareness and control' architecture. The health engine combines domain independent statistical analysis and probabilistic reasoning technology (Bayesian networks) with domain dependent measurement collection and evaluation methods. The resultant probabilistic assessment enables open, non-hierarchical communications about service element health. We demonstrate the validity of our approach using HP's corporate email service and detecting email anomalies: mail loops and a virus attack",2001,0, 1769,State estimator condition number analysis,This paper develops formulas for the condition number of the state estimation problem as a function of the different types and number of measurements. We present empirical results using the IEEE RTS-96 and IEEE 118 bus systems that validate the formulas,2001,0, 1770,Diagnosis and prognosis of bearings using data mining and numerical visualization techniques,"Traditionally, condition-based monitoring techniques have been used to diagnose failure in rotary machinery by application of low-level signal processing and trend analysis techniques. Such techniques consider small windows of data from large data sets to give preliminary information of developing fault(s) or failure precursor(s). However, these techniques only provide information of a minute portion of a large data set, which limits the accuracy of predicting the remaining useful life of the system. Diagnosis and prognosis (DAP) techniques should be able to identify the origin of the fault(s), estimate the rate of its progression and determine the remaining useful life of the system. This research demonstrates the use of data mining and numerical visualization techniques for diagnosis and prognosis of bearing vibration data. By using these techniques a comprehensive understanding of large vibration data sets can be attained. This approach uses intelligent agents to isolate particular bearing vibration characteristics using statistical analysis and signal processing for data compression. The results of the compressed data can be visualized in 3-D plots and used to track the origination and evolution of failure in the bearing vibration data. The Bearing Test Bed is used for applying measurable static and dynamic stresses on the bearing and collecting vibration signatures from the stressed bearings",2001,0, 1771,An arrhythmia detector and heart rate estimator for overnight polysomnography studies,"We present an algorithm for automatic on-line analysis of the ECG channel acquired during overnight polysomnography (PSG) studies. The system is independent of ECG morphology, requires no manual initialization, and operates automatically throughout the night. It highlights likely occurrences of arrhythmias and intervals of bad signal quality while outputting a continual estimate of heart rate. 
Algorithm performance is validated against standard ECG databases and PSG data. Results demonstrate a minimal false negative rate and a low false positive rate for arrhythmia detection, and robustness over a wide range of noise contamination.",2001,0, 1772,Client-transparent fault-tolerant Web service,"Most of the existing fault tolerance schemes for Web servers detect server failure and route future client requests to backup servers. These techniques typically do not provide transparent handling of requests whose processing was in progress when the failure occurred. Thus, the system may fail to provide the user with confirmation for a requested transaction or clear indication that the transaction was not performed. We describe a client-transparent fault tolerance scheme for Web servers that ensures correct handling of requests in progress at the time of server failure. The scheme is based on a standby backup server and simple proxies. The error handling mechanisms of TCP are used to multicast requests to the primary and backup as well as to reliably deliver replies from a server that may fail while sending the reply. Our scheme does not involve OS kernel changes or use of user-level TCP implementations and requires minimal changes to the Web server software",2001,0, 1773,Dynamic load sharing with unknown memory demands in clusters,"A compute farm is a pool of clustered workstations to provide high performance computing services for CPU-intensive, memory-intensive, and I/O active jobs in a batch mode. Existing load sharing schemes with memory considerations assume jobs' memory demand sizes are known in advance or predictable based on users' hints. This assumption can greatly simplify the designs and implementations of load sharing schemes, but is not desirable in practice. In order to address this concern, we present three new results and contributions in this study. Conducting Linux kernel instrumentation, we have collected different types of workload execution traces to quantitatively characterize job interactions, and modeled page fault behavior as a function of the overloaded memory sizes and the amount of jobs' I/O activities. Based on experimental results and collected dynamic system information, we have built a simulation model which accurately emulates the memory system operations and job migrations with virtual memory considerations. We have proposed a memory-centric load sharing scheme and its variations to effectively process dynamic memory allocation demands, aiming at minimizing execution time of each individual job by dynamically migrating and remotely submitting jobs to eliminate or reduce page faults and to reduce the queuing time for CPU services. Conducting trace-driven simulations, we have examined these load sharing policies to show their effectiveness",2001,0, 1774,Enforcing perfect failure detection,Perfect failure detectors can correctly decide whether a computer is crashed. However, it is impossible to implement a perfect failure detector in purely asynchronous systems. We show how to enforce perfect failure detection in timed distributed systems with hardware watchdogs. The two main system model assumptions are: each computer can measure time intervals with a known maximum error; and each computer has a watchdog that crashes the computer unless the watchdog is periodically updated.
We have implemented a system that satisfies both assumptions using a combination of off-the-shelf software and hardware,2001,0, 1775,Design and implementation of a composable reflective middleware framework,"With the evolution of the global information infrastructure, service providers will need to provide effective and adaptive resource management mechanisms that can serve more concurrent clients and deal with applications that exhibit quality-of-service (QoS) requirements. Flexible, scalable and customizable middleware can be used as an enabling technology for next-generation systems that adhere to the QoS requirements of applications that execute in highly dynamic distributed environments. To enable application-aware resource management, we are developing a customizable and composable middleware framework called CompOSE|Q (Composable Open Software Environment with QoS), based on a reflective meta-model. In this paper, we describe the architecture and runtime environment for CompOSE|Q and briefly assess the performance overhead of the additional flexibility. We also illustrate how flexible communication mechanisms can be supported efficiently in the CompOSE|Q framework",2001,0, 1776,Investigating the cost-effectiveness of reinspections in software development,"Software inspection is one of the most effective methods to detect defects. Reinspection repeats the inspection process for software products that are suspected to contain a significant number of undetected defects after an initial inspection. As a reinspection is often believed to be less efficient than an inspection, an important question is whether a reinspection justifies its cost. In this paper we propose a cost-benefit model for inspection and reinspection. We discuss the impact of cost and benefit parameters on the net gain of a reinspection with empirical data from an experiment in which 31 student teams inspected and reinspected a requirements document. Main findings of the experiment are: a) For reinspection, benefits and net gain were significantly lower than for the initial inspection. Yet, the reinspection yielded a positive net gain for most teams with conservative cost-benefit assumptions. b) Both the estimated benefits and number of major defects are key factors for reinspection net gain, which emphasizes the need for appropriate estimation techniques.",2001,0, 1777,Quantifying the costs and benefits of architectural decisions,"The benefits of a software system are assessable only relative to the business goals the system has been developed to serve. In turn, these benefits result from interactions between the system's functionality and its quality attributes (such as performance, reliability and security). Its quality attributes are, in most cases, dictated by its architectural design decisions. Therefore, we argue that the software architecture is the crucial artifact to study in making design tradeoffs and in performing cost-benefit analyses. A substantial part of such an analysis is in determining the level of uncertainty with which we estimate both costs and benefits. We offer an architecture-centric approach to the economic modeling of software design decision making called CBAM (Cost Benefit Analysis Method), in which costs and benefits are traded off with system quality attributes.
We present the CBAM, the early results from applying this method in a large-scale case study, and discuss the application of more sophisticated economic models to software decision making.",2001,0, 1778,Incorporating varying test costs and fault severities into test case prioritization,"Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work (S. Elbaum et al., 2000; G. Rothermel et al., 1999), we provided a metric, APFD, for measuring rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severity are uniform. We present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering these questions.",2001,0, 1779,Theory of software reliability based on components,"We present a foundational theory of software system reliability based on components. The theory describes how component developers can design and test their components to produce measurements that are later used by system designers to calculate composite system reliability, without implementation and test of the system being designed. The theory describes how to make component measurements that are independent of operational profiles, and how to incorporate the overall system-level operational profile into the system reliability calculations. In principle, the theory resolves the central problem of assessing a component, which is: a component developer cannot know how the component will be used and so cannot certify it for an arbitrary use; but if the component buyer must certify each component before using it, component based development loses much of its appeal. This dilemma is resolved if the component developer does the certification and provides the results in such a way that the component buyer can factor in the usage information later without repeating the certification. Our theory addresses the basic technical problems inherent in certifying components to be released for later use in an arbitrary system. Most component research has been directed at functional specification of software components; our theory addresses the other equally important side of the coin: component quality.",2001,0, 1780,Improving validation activities in a global software development,"Global software development challenges traditional techniques of software engineering, such as peer reviews or teamwork. Effective teamwork and coaching of engineers contribute highly towards successful projects. In this case study, we evaluate experiences with validation activities in a global setting within Alcatel's switching and routing business. We investigate three hypotheses related to the effects of (a) collocated inspections, (b) intensive coaching and (c) feature-oriented development teams on globally distributed projects. 
As all these activities mean an initial investment compared to a standard process with scattered activities, the major validation criteria for the three hypotheses are: (1) cost reduction due to earlier defect detection and (2) fewer defects introduced. The data is taken from a sample of over 60 international projects of various sizes, from which we have collected all types of product and process metrics over the past four years.",2001,0, 1781,Reengineering analysis of object-oriented systems via duplication analysis,"All software systems, no matter how they are designed, are subject to continuous evolution and maintenance activities to eliminate defects and extend their functionalities. This is particularly true for object-oriented systems, where we may develop different software systems using the same internal library or framework. These systems may evolve in quite different directions in order to cover different functionalities. Typically, there is the need to analyze their evolution in order to redefine the library or framework boundaries. This is a typical problem of software reengineering analysis. In this paper, we describe metrics, based on duplication analysis, that contribute to the process of reengineering analysis of object-oriented systems. These metrics are the basic elements of a reengineering analysis method and tool. Duplication analyses at the file, class and method levels have been performed. A structural analysis using metrics that capture similarities in class structure has also been exploited. In order to identify the best approach for the reengineering analysis of object-oriented systems, a comparison between the two approaches is described. In this paper, a case study based on real cases is presented, in which the results obtained by using a reengineering process with and without the analysis tool are described. The purpose of this study is to discover which method is the most powerful and how much time reduction can be obtained by its use.",2001,0, 1782,Adaptive postfiltering of transform coefficients for the reduction of blocking artifacts,"This paper proposes a novel postprocessing technique for reducing blocking artifacts in low-bit-rate transform-coded images. The proposed approach works in the transform domain to alleviate the accuracy loss of transform coefficients, which is introduced by the quantization process. The masking effect in the human visual system (HVS) is considered, and an adaptive weighting mechanism is then integrated into the postfiltering. In low-activity areas, since blocking artifacts appear to be perceptually more detectable, a large window is used to efficiently smooth out the artifacts. In order to preserve image details, a small mask, as well as a large central weight, is employed for processing those high-activity blocks, where blocking artifacts are less noticeable due to the masking ability of local background. The quantization constraint is finally applied to the postfiltered coefficients. Experimental results show that the proposed technique provides satisfactory performance as compared to other postfilters in both objective and subjective image quality",2001,0, 1783,Effective values: an approach for characterizing dependability parameters,"The authors discuss a novel direction for evaluating system dependability. The research efforts outlined are part of a special focus concerning the dependability of complex distributed systems at the Department of Computer Engineering of the Federal Armed Forces University in Munich, Germany.
Our approach emphasizes the behavior of components after they have been embedded in a system. We use the notion of effective values for the analysis and estimation of key dependability parameters. The concept is targeted to unify performance characterization with the dependability. Besides analysis, we report on some simulation results, which were in favor of our assumptions.",2001,0, 1784,Fault-adaptive control: a CBS application,"Complex control applications require a capability for accommodating faults in the controlled plant. Fault accommodation involves the detection and isolation of faults, and taking an appropriate control action that mitigates the effect of the faults and maintains control. This requires the integration of fault diagnostics with control, in a feedback loop. The paper discusses how a generic framework for building fault-adaptive control systems can be created using a model based approach. Instances of the framework are examples of complex CBSs (computer based systems) that have novel capabilities.",2001,0, 1785,An internally replicated quasi-experimental comparison of checklist and perspective based reading of code documents,"The basic premise of software inspections is that they detect and remove defects before they propagate to subsequent development phases where their detection and correction cost escalates. To exploit their full potential, software inspections must call for a close and strict examination of the inspected artifact. For this, reading techniques for defect detection may be helpful since these techniques tell inspection participants what to look for and, more importantly, how to scrutinize a software artifact in a systematic manner. Recent research efforts investigated the benefits of scenario-based reading techniques. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as, ad-hoc or checklist-based reading (CBR). We experimentally compare one scenario-based reading technique, namely, perspective-based reading (PBR), for defect detection in code documents with the more traditional CBR approach. The comparison was performed in a series of three studies, as a quasi experiment and two internal replications, with a total of 60 professional software developers at Bosch Telecom GmbH. Meta-analytic techniques were applied to analyze the data",2001,0, 1786,An experiment measuring the effects of personal software process (PSP) training,"The personal software process is a process improvement methodology aimed at individual software engineers. It claims to improve software quality (in particular defect content), effort estimation capability, and process adaptation and improvement capabilities. We have tested some of these claims in an experiment comparing the performance of participants who had just previously received a PSP course to a different group of participants who had received other technical training instead. Each participant of both groups performed the same task. We found the following positive effects: the PSP group estimated their productivity (though not their effort) more accurately, made fewer trivial mistakes, and their programs performed more careful error-checking; further, the performance variability was smaller in the PSP group in various respects. However, the improvements are smaller than the PSP proponents usually assume, possibly due to the low actual usage of PSP techniques in the PSP group. 
We conjecture that PSP training alone does not automatically realize the PSP's potential benefits (as seen in some industrial PSP success stories) when programmers are left alone with motivating themselves to actually use the PSP techniques",2001,0, 1787,Designing a service of failure detection in asynchronous distributed systems,"Even though introduced for solving the consensus problem in asynchronous distributed systems, the notion of unreliable failure detector can be used as a powerful tool for any distributed protocol in order to get better performance by allowing the usage of aggressive time-outs to detect failures of entities executing the protocol. We present the design of a Failure Detection Service (FDS) based on the notion of unreliable failure detectors introduced by T. Chandra and S. Toueg (1996). FDS is able to detect crashed objects and entities that permanently omit to send messages without imposing changes to the source code of the underlying protocols that use this service. Also, FDS provides an object oriented interface to its subscribers and, more important, it does not add network overhead if no entity subscribes to the service. The paper can be also seen as a first step towards a distributed implementation of a heartbeat-based failure management system as defined in fault-tolerant CORBA specification",2001,0, 1788,On the use of mobile code technology for monitoring Grid system,"Grid systems are becoming the underlying infrastructure for many high-performance distributed scientific applications. Resources in these systems have heterogeneous features, are connected by potentially unreliable networks, and are often under different administrative management domains. These remarks show how a new generation of techniques, mechanisms and tools need to be considered that can cope with the complexity of such systems and provide the performance suitable for scientific computing applications. In this paper an architecture for monitoring services in Grid environments based on mobile agent technology is presented. One of the most interesting aspects of the proposed approach is the possibility of customizing the parameters to be monitored. After describing the general framework where agents will execute, an application for improving the capabilities of traditional network monitoring paradigms is presented",2001,0, 1789,A probabilistic priority scheduling discipline for high speed networks,"In high speed networks, the strict priority (SP) scheduling discipline is perhaps the most common and simplest method to schedule packets from different classes of applications, each with diverse performance requirements. With this discipline, however, packets at higher priority levels can starve packets at lower priority levels. To resolve this starvation problem, we propose to assign a parameter to each priority queue in the SP discipline. The assigned parameter determines the probability with which its corresponding queue is served when the queue is polled by the server. We thus form a new packet scheduling discipline, referred to as the probabilistic priority (PP) discipline. By properly setting the assigned parameters, service differentiation as well as fairness among traffic classes can be achieved in PP. 
In addition, the PP discipline can be easily reduced to the ordinary SP discipline or to the reverse SP discipline",2001,0, 1790,Comparing fail-silence provided by process duplication versus internal error detection for DHCP server,"This paper uses fault injection to compare the ability of two fault-tolerant software architectures to protect an application from faults. These two architectures are Voltan, which uses process duplication, and Chameleon ARMORs, which use self-checking. The target application is a Dynamic Host Configuration Protocol (DHCP) server, a widely used application for managing IP addresses. NFTAPE, a software-based fault injection environment, is used to inject three classes of faults, namely random memory bit-flip, control-flow and high-level target specific faults, into each software architecture and into baseline Solaris and Linux versions",2001,0, 1791,Trading off execution time for reliability in scheduling precedence-constrained tasks in heterogeneous computing,"This paper investigates the problem of matching and scheduling of an application, which is composed of tasks with precedence constraints, to minimize both execution time and probability of failure of the application in a heterogeneous computing system. In general, however, it is impossible to satisfy both objectives at the same time because of conflicting requirements. The best one can do is to trade off execution time for reliability or vice versa, according to users' needs. Furthermore, there is a need for an algorithm which can assign tasks of an application to satisfy both of the objectives to some degree. Motivated from these facts, two different algorithms, which are capable of trading off execution time for reliability, are developed. To enable the proposed algorithms to account for the reliability of resources in the system, an expression which gives the reliability of the application under a given task assignment is derived. The simulation results are provided to validate the performance of the proposed algorithms",2001,0, 1792,Performance analysis of image compression using wavelets,"The aim of this paper is to examine a set of wavelet functions (wavelets) for implementation in a still image compression system and to highlight the benefit of this transform relating to today's methods. The paper discusses important features of wavelet transform in compression of still images, including the extent to which the quality of image is degraded by the process of wavelet compression and decompression. Image quality is measured objectively, using peak signal-to-noise ratio or picture quality scale, and subjectively, using perceived image quality. The effects of different wavelet functions, image contents and compression ratios are assessed. A comparison with a discrete-cosine-transform-based compression system is given. Our results provide a good reference for application developers to choose a good wavelet compression system for their application",2001,0, 1793,In-situ plasma etch process endpoint control in integrated circuit manufacturing,"In-situ noninvasive etch process endpoint control with repeatable accuracy is essential for producing the kinds of yields, product quality, and throughput necessary for IC manufacturers to remain competitive. A unique in-situ endpoint detection system with the capability to accept different, multiple sensors based on various optical analytical techniques is described here. 
When combined in the same touchscreen instrument platform, these different techniques increase instrument flexibility in handling a larger variety of applications, directly enhancing the ability to increase process yields. Weak optical emission spectroscopy (OES) wavelength intensities resulting from either weak transitions or low emitting species concentrations from low exposed area and/or low vapor pressure species require a highly sensitive endpoint detection system containing highly sensitive and efficient sensors, transducers, signal conditioning components and software, and flexible algorithms to call process endpoint. These same requirements are necessary in other optical analytical techniques such as optical reflectometry or optical interferometry using single or multiple wavelength static or scanning transmission and detection. The paper describes how the optical sensor signal is subsequently processed into an electronic signal upon which proprietary window triggering software endpoint algorithms are applied to call an end to the process condition. This endpoint system is more than a process control tool, as it also offers data analysis capability, making it an in-situ diagnostic instrument for process troubleshooting and process development",2001,0, 1794,Review sample shaping through the simultaneous use of multiple classification technologies in IMPACT ADC [IC yield analysis],"Among the various options available for yield improvement tools is automatic defect classification (ADC). ADC reduces the amount of manual inspection required as well as increasing the amount of classified data available. It can serve to entirely remove operator-based manual classification. The classified data can then be used to identify the cause of the yield-limiting problem in the line. To allow this, ADC can sample a limited yet representative number of defects from the total count, but does so under fewer constraints than those used by an operator who will, depending on experience and knowledge, adjust their chosen sample for each case. As such, the sampling methods employed can have a critical effect on the overall results. This paper looks at how the sampling criteria should be established in order to provide the best representation of existing problems in the inspected wafers, and whether this is actually possible using ADC",2001,0, 1795,A Bayesian predictive software reliability model with pseudo-failures,"In our previous paper (2000), a Bayesian software reliability model with stochastically decreasing hazard rate was presented. Within any given failure time interval, the hazard rate is a function of both total testing time as well as number of encountered failures. In this paper, to improve the predictive performance of our previously proposed model, a pseudo-failure is inserted whenever a period of failure-free execution equal to the (1-α)th percentile of the predictive distribution for time until the next failure has passed. We apply the enhanced model with pseudo-failures inserted to actual software failure data and show it gives better results under the sum of square errors criteria compared to previous Bayesian models and other existing times between failures models",2001,0, 1796,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis on rotating machines was designed and realized.
Vibration signals are acquired on-line and processed to obtain a continuous monitoring of the machine status. In case of fault, the system is capable of isolating the fault with a high reliability. The paper describes in detail the approach followed to build up fault and unfault models together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlight high promptness in detecting faults, low false alarm rate, and very good diagnostic performance",2001,0, 1797,Design and development of a digital multifunction relay for generator protection,This paper presents the design and development of a rotor earth fault protection function as part of a multifunction generator protection relay. The relay design is based on a low frequency square wave injection method in detecting rotor earth faults. The accuracy of rotor earth fault resistance measurement is improved by applying piecewise quadratic approximation to the nonlinear gain characteristic of the measurement circuit. The paper also presents the hardware and software architecture of the relay,2001,0, 1798,A frame-level measurement apparatus for performance testing of ATM equipment,"Performance testing of ATM equipment is here dealt with. In particular, the attention is paid to frame-level metrics, recently proposed by the ATM forum because of their suitability to reflect user-perceived performance better than traditional cell-level metrics. Following the suggestions of the ATM forum, more and more network engineers and production managers are nowadays interested in these metrics, thus increasing the need of instruments and measurement solutions appropriate to their estimation. Trying to satisfy this exigency, a new VXI-based measurement apparatus is proposed in the paper. The apparatus features suitable software, developed by the authors, which allows the evaluation of the aforementioned metrics by simply making use of common ATM analyzers; only two VXI line interfaces, capable of managing both the physical and ATM layer, are, in fact, adopted. At first, some details about the hierarchical structure of the ATM technology as well as the main differences between frames, peculiar to the ATM adaptation layer, and cells characterizing the lower ATM layer are given. Then, both the hardware and software solutions of the measurement apparatus are described in detail with a particular attention to the measurement procedures implemented. At the end, the performance of a new ATM device, developed by Ericsson, is assessed in terms of frame-level metrics by means of the proposed apparatus",2001,0, 1799,"Low-cost, software-based self-test methodologies for performance faults in processor control subsystems","A software-based testing methodology for processor control subsystems, targeting hard-to-test performance faults in high-end embedded and general-purpose processors, is presented. An algorithm for directly controlling, using the instruction-set architecture only, the branch-prediction logic, a representative example of the class of processor control subsystems particularly prone to such performance faults, is outlined. Experimental results confirm the viability of the proposed methodology as a low-cost and effective answer to the problem of hard-to-test performance faults in processor architectures",2001,0, 1800,Run-time upgradable software in a large real-time telecommunication system,"For large real-time systems it is often important to achieve non-stop availability.
Software updates must therefore be made during program operation. We consider a number of techniques for run-time software updates, and concentrate on one particular technique called Dynamic C++ Classes that allows run-time updates of C++ programs. The Dynamic C++ Classes approach is evaluated on a large real-time telecommunication system, the Ericsson Billing Gateway. The evaluation showed that there were no significant performance problems generated by the Dynamic C++ Classes approach. We have identified and discussed a number of problems with the approach and we have also implemented some improvements of the approach",2001,0, 1801,A new uncertainty measure for belief networks with applications to optimal evidential inferencing,"We are concerned with the problem of measuring the uncertainty in a broad class of belief networks, as encountered in evidential reasoning applications. In our discussion, we give an explicit account of the networks concerned, and call them the Dempster-Shafer (D-S) belief networks. We examine the essence and the requirement of such an uncertainty measure based on well-defined discrete event dynamical systems concepts. Furthermore, we extend the notion of entropy for the D-S belief networks in order to obtain an improved optimal dynamical observer. The significance and generality of the proposed dynamical observer of measuring uncertainty for the D-S belief networks lie in that it can serve as a performance estimator as well as a feedback for improving both the efficiency and the quality of the D-S belief network-based evidential inferencing. We demonstrate, with Monte Carlo simulation, the implementation and the effectiveness of the proposed dynamical observer in solving the problem of evidential inferencing with optimal evidence node selection",2001,0, 1802,The effect of faults on plasma particle detector data reduction,"Missions in NASA's Solar-Terrestrial Probe Line feature challenges such as multiple spacecraft and high data production rates. An important class of scientific instruments that have for years strained against limits on communications are the particle detectors used to measure space plasma density, temperature, and flow. The Plasma Moment Application (PMA) software is being developed for the NASA Remote Exploration and Experimentation (REE) Program's series of Flight Processor Testbeds. REE seeks to enable instrument science teams to move data analyses such as PMA on board the spacecraft thereby reducing communication downlink requirements. Here we describe the PMA for the first time and examine its behavior under single bit faults in its static state. We find that ~90% of the faults lead to tolerable behavior, while the remainder cause either program failure or nonsensical results. These results help guide the development of fault tolerant, non-hardened flight/science processors",2001,0, 1803,Advanced test cell diagnostics for gas turbine engines,"Improved test cell diagnostics capable of detecting and classifying engine mechanical and performance faults as well as instrumentation problems is critical to reducing engine operating and maintenance costs while optimizing test cell effectiveness. Proven anomaly detection and fault classification techniques utilizing engine Gas Path Analysis (GPA) and statistical/empirical models of structural and performance related engine areas can now be implemented for real-time and post-test diagnostic assessments. 
Integration and implementation of these proven technologies into existing USAF engine test cells presents a great opportunity to significantly improve existing engine test cell capabilities to better meet today's challenges. A suite of advanced diagnostic and troubleshooting tools has been developed and implemented for gas turbine engine test cells as part of the Automated Jet Engine Test Strategy (AJETS) program. AJETS is an innovative USAF program for improving existing engine test cells by providing more efficient and advanced monitoring, diagnostic and troubleshooting capabilities. This paper describes the basic design features of the AJETS system, including the associated data network, sensor validation and anomaly detection/diagnostic software that was implemented in both a real-time and post-test analysis mode. These advanced design features of AJETS are currently being evaluated and advanced utilizing data from TF39 test cell installations at Travis AFB and Dover AFB",2001,0, 1804,A general prognostic tracking algorithm for predictive maintenance,"Prognostic health management (PHM) is a technology that uses objective measurements of condition and failure hazard to adaptively optimize a combination of availability, reliability, and total cost of ownership of a particular asset. Prognostic utility for the signature features is determined by transitional failure experiments. Such experiments provide evidence for the failure alert threshold and of the likely advance warning one can expect by tracking the feature(s) continuously. Kalman filters are used to track changes in features like vibration levels, mode frequencies, or other waveform signature features. This information is then functionally associated with load conditions using fuzzy logic and expert human knowledge of the physics and the underlying mechanical systems. Herein is the greatest challenge to engineering. However, it is straightforward to track the progress of relevant features over time using techniques such as Kalman filtering. Using the predicted states, one can then estimate the future failure hazard, probability of survival, and remaining useful life in an automated and objective methodology",2001,0, 1805,An integrated diagnostics virtual test bench for life cycle support,"Qualtech Systems, Inc. (QSI) has developed an architecture that utilizes the existing TEAMS (Testability Engineering and Maintenance Systems) integrated tool set as the foundation to a computing environment for modeling and rigorous design analysis. This architecture is called a Virtual Test Bench (VTB) for Integrated Diagnostics. The VTB approach addresses design for testability, safety, and risk reduction because it provides an engineering environment to develop/provide: 1. Accurate, comprehensive, and graphical model based failure mode, effects and diagnostic analysis to understand failure modes, their propagation, effects, and ability of diagnostics to address these failure modes. 2. Optimization of diagnostic methods and test sequencing supporting the development of an effective mix of diagnostic methods. 3. Seamless integration from analysis, to run-time implementation, to maintenance process and life cycle support. 4. A collaborative, widely distributed engineering environment to ""ring-out"" the design before it is built and flown.
The VTB architecture offers an innovative solution in a COTS package for system/component modeling, design for safety, failure mode/effect analysis, testability engineering, and rigorous integration/testing of the IVHM (Integrated Vehicle Health Management) function with the rest of the vehicle. The VTB approach described in this paper will use the TEAMS software tool to generate detailed, accurate ""failure"" models of the design, assess the propagation of the failure mode effects, and determine the impact on safety, mission and support costs. It will generate FMECA, mission reliability assessments, incorporate the diagnostic and prognostic test designs, and perform testability analysis. Diagnostic functions of the VTB include fault detection and isolation metrics, undetected fault lists, ambiguity group lists, and optimized diagnostic trees",2001,0, 1806,Formalizing COSMIC-FFP using ROOM,"We propose a formalization of the COSMIC Full Function Point (COSMIC-FFP) measure for the Real-time Object Oriented Modeling (ROOM) language. COSMIC-FFP is a measure of the functional size of software. It has been proposed by the COSMIC group as an adaptation of the function point measure for real-time systems. The definition of COSMIC-FFP is general and can be applied to any specification language. The benefits of our formalization are twofold. First, it eliminates measurement variance, because the COSMIC informal definition is subject to interpretation by COSMIC-FFP raters, which may lead to different counts for the same specification, depending on the interpretation made by each rater. Second, it allows the automation of COSMIC-FFP measurement for ROOM specifications, which reduces measurement costs. Finally, the formal definition of COSMIC-FFP can provide a clear and unambiguous characterization of COSMIC-FFP concepts which is helpful for measuring COSMIC-FFP for other object-oriented notations like UML",2001,0, 1807,"Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification","We first address the problem of simultaneous image segmentation and smoothing by approaching the Mumford-Shah (1989) paradigm from a curve evolution perspective. In particular, we let a set of deformable contours define the boundaries between regions in an image where we model the data via piecewise smooth functions and employ a gradient flow to evolve these contours. Each gradient step involves solving an optimal estimation problem for the data within each region, connecting curve evolution and the Mumford-Shah functional with the theory of boundary-value stochastic processes. The resulting active contour model offers a tractable implementation of the original Mumford-Shah model (i.e., without resorting to elliptic approximations which have traditionally been favored for greater ease in implementation) to simultaneously segment and smoothly reconstruct the data within a given image in a coupled manner. Various implementations of this algorithm are introduced to increase its speed of convergence. We also outline a hierarchical implementation of this algorithm to handle important image features such as triple points and other multiple junctions. Next, by generalizing the data fidelity term of the original Mumford-Shah functional to incorporate a spatially varying penalty, we extend our method to problems in which data quality varies across the image and to images in which sets of pixel measurements are missing.
This more general model leads us to a novel PDE-based approach for simultaneous image magnification, segmentation, and smoothing, thereby extending the traditional applications of the Mumford-Shah functional which only considers simultaneous segmentation and smoothing",2001,0, 1808,ICC 2001. IEEE International Conference on Communications. Conference Record (Cat. No.01CH37240),"The following topics were dealt with: multiuser detection; turbo codes; iterative decoding; network elements functionality; QoS in IP networks; transmission systems; wireless LAN; ad hoc networks; modulation; coding; wireless networks performance; synchronization; channel estimation; equalization; wireless multimedia; high speed networking; CDMA; multiple antennas and diversity; high speed network architecture; QoS in switching, routing and integration; optical networks; QoS for multimedia applications, services and networks; signal processing for communications; multimedia technologies; traffic and communication architectures; advanced signal processing for multimedia; fading channels; space-time codes; personal communications; medium access; next generation applications and services; mobile data networks; radio resource management; CDMA receiver algorithms; multicarrier and spread spectrum communications; advanced network surveillance and traffic management techniques; MPLS; video-on-demand systems; information infrastructure; OFDM; multicasting; broadband networks; network management; QoS for next generation Internet; wideband wireless local networks; interference mitigation and equalization; scheduling and queuing; service management of evolving telecommunication networks; enterprise networking; access protocols; turbo coding for magnetic recording; smart antennas; computer communications; Diffserv solutions for the next generation Internet; next generation network operations and management; communications theory; wireless IP networking; multicast routing and flow classification; satellite communications; signal processing for data storage; packet switching and packet scheduling; ATM network switches and routing; transport control protocol; communications software",2001,0, 1809,Application of vibration sensing in monitoring and control of machine health,"In this paper, an application for monitoring and control of machine health using vibration sensing is developed. This vibration analyzer is able to continuously monitor and compare the actual vibration pattern against a vibration signature, based on a fuzzy fusion technique. More importantly, this intelligent knowledge-based real-time analyzer is able to detect excessive vibration conditions much sooner than a resulting fault could be detected by an operator. Subsequently, appropriate actions can be taken, say to provide a warning or automatic corrective action. This approach may be implemented independently of the control system and as such can be applied to existing equipment without modification of the normal mode of operation. Simulation and experimental results are provided to illustrate the advantages of the approach taken in this application",2001,0, 1810,Application QoS management for distributed computing systems,"As a large number of distributed multimedia systems are deployed on computer networks, quality of service (QoS) for users becomes more important. This paper defines it as application QoS, and proposes the application QoS management system (QMS). 
It controls the application QoS according to the system environment by using simple measurement-based control methods. QMS consists of three types of modules. These are a notificator module for detecting QoS deterioration, a manager module for deciding the control method according to the application management policies, and a controller module for executing the control. The QMS manages the application QoS by communicating between these modules distributed on the network. Moreover, this paper especially focuses on the function setting QoS management policies to the QMS and proposes the setting method. By a simulation experiment, we confirmed that the system made it possible to negotiate the QoS among many applications and it was able to manage the whole applications according to the policies",2001,0, 1811,A fast IP classification algorithm applying to multiple fields,"With the network applications development, routers must support those functions such as firewalls, provision of QoS and traffic billing etc. All these functions need classification of IP packets, according to which it is determined how different packets are processed subsequently. A novel IP classification algorithm is proposed based on the grid of tries algorithm. The new algorithm not only eliminates original limitations in the case of multiple fields but also shows better performance in regard to both time and space. It has better overall performance than many other algorithms",2001,0, 1812,A predictive measurement-based fuzzy logic connection admission control,"This paper presents a novel measurement-based connection admission control (CAC) which uses fuzzy set and fuzzy logic theory. Unlike conventional CAC, the proposed CAC does not use complicated analytical models or a priori traffic descriptors. Instead, traffic parameters are predicted by an on-line fuzzy logic predictor (Qiu et al. 1999). QoS requirements are targeted indirectly by an adaptive weight factor. This weight factor is generated by a fuzzy logic inference system which is based on arrival traffic, queue occupancy and link load. Admission decisions are then based on real-time measurement of aggregate traffic statistics with the fuzzy logic adaptive weight factor as well as the predicted traffic parameters. Both homogeneous and heterogeneous traffic were used in the simulation. Fuzzy logic prediction improves the efficiency of both conventional and measurement-based CAC. In addition, the measurement-based approach incorporating fuzzy logic inference and using fuzzy logic prediction is shown to achieve higher network utilization while maintaining QoS",2001,0, 1813,Optimization of H.263 video encoding using a single processor computer: performance tradeoffs and benchmarking,"We present the optimization and performance evaluation of a software-based H.263 video encoder. The objective is to maximize the encoding rate without losing the picture quality on an ordinary single processor computer such as a PC or a workstation. This requires optimization at all design and implementation phases, including algorithmic enhancements, efficient implementations of all encoding modules, and taking advantage of certain architectural features of the machine. We design efficient algorithms for DCT and fast motion estimation, and exploit various techniques to speed up the processing, including a number of compiler optimizations and removal of redundant operations.
For exploiting the architectural features of the machine, we make use of low-level machine primitives such as Sun UltraSPARC's visual instruction set and Intel's multimedia extension, which accelerate the computation in a single instruction stream multiple data stream fashion. Extensive benchmarking is carried out on three platforms: a 167-MHz Sun UltraSPARC-1 workstation, a 233-MHz Pentium II PC, and a 600-MHz Pentium III PC. We examine the effect of each type of optimization for every coding mode of H.263, highlighting the tradeoffs between quality and complexity. The results also allow us to make an interesting comparison between the workstation and the PCs. The encoder yields 45.68 frames per second (frames/s) on the Pentium III PC, 18.13 frames/s on the Pentium II PC, and 12.17 frames/s on the workstation for QCIF resolution video with high perceptual quality at reasonable bit rates, which are sufficient for most of the general switched telephone networks based video telephony applications. The paper concludes by suggesting optimum coding options",2001,0, 1814,A simple method for extracting models from protocol code,"The use of model checking for validation requires that models of the underlying system be created. Creating such models is both difficult and error prone and as a result, verification is rarely used despite its advantages. In this paper we present a method for automatically extracting models from low level software implementations. Our method is based on the use of an extensible compiler system, xg++, to perform the extraction. The extracted model is combined with a model of the hardware, a description of correctness, and an initial state. The whole model is then checked with the Murφ model checker. As a case study, we apply our method to the cache coherence protocols of the Stanford FLASH multiprocessor. Our system has a number of advantages. First, it reduces the cost of creating models, which allows model checking to be used more frequently. Second, it increases the effectiveness of model checking since the automatically extracted models are more accurate and faithful to the underlying implementation. We found a total of 8 errors using our system. Two errors were global resource errors, which would be difficult to find through any other means. We feel the approach is applicable to other low level systems",2001,0, 1815,Learning visual models of social engagement,"We introduce a face detector for wearable computers that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives ""social engagement,"" i.e., when the wearer begins to interact with other individuals. Our experimental system proved >90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data was separated into independent training and test sets. A metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface is discussed. Recognizing social engagement with a user's wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software.
For example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable",2001,0, 1816,"High performance Chinese OCR based on Gabor features, discriminative feature extraction and model training","We have developed a Chinese OCR engine for machine printed documents. Currently, our OCR engine can support a vocabulary of 6921 characters which include 6707 simplified Chinese characters in GB2312-80, 12 frequently used GBK Chinese characters, 62 alphanumeric characters, 140 punctuation marks and symbols. The supported font styles include Song, Fang Song, Kai, Hei, Yuan, LiShu, WeiBei, XingKai, etc. The averaged character recognition accuracy is above 99% for newspaper quality documents with a recognition speed of about 250 characters per second on a Pentium III-450 MHz PC yet only consuming less than 2 MB memory. We describe the key technologies we used to construct the above recognizer. Among them, we highlight three key techniques contributing to the high recognition accuracy, namely the use of Gabor features, the use of discriminative feature extraction, and the use of minimum classification error as a criterion for model training",2001,0, 1817,An approach for analysing the propagation of data errors in software,"We present a novel approach for analysing the propagation of data errors in software. The concept of error permeability is introduced as a basic measure upon which we define a set of related measures. These measures guide us in the process of analysing the vulnerability of software to find the modules that are most likely exposed to propagating errors. Based on the analysis performed with error permeability and its related measures, we describe how to select suitable locations for error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). A method for experimental estimation of error permeability, based on fault injection, is described and the software of a real embedded control system analysed to show the type of results obtainable by the analysis framework. The results show that the developed framework is very useful for analysing error propagation and software vulnerability and for deciding where to place EDMs and ERMs.",2001,0, 1818,FATOMAS-a fault-tolerant mobile agent system based on the agent-dependent approach,"Fault tolerance is fundamental to the further development of mobile agent applications. In the context of mobile agents, fault-tolerance prevents a partial or complete loss of the agent, i.e., it ensures that the agent arrives at its destination. We present FATOMAS, a Java-based fault-tolerant mobile agent system based on an algorithm presented in an earlier paper (2000). Contrary to the standard ""place-dependent"" architectural approach, FATOMAS uses the novel ""agent-dependent"" approach. In this approach, the protocol that provides fault tolerance travels with the agent. This has the important advantage of allowing fault-tolerant mobile agent execution without the need to modify the underlying mobile agent platform (in our case ObjectSpace's Voyager). In our performance evaluation, we show the costs of our approach relative to the single, non-replicated agent execution.
Pipelined mode and optimized agent forwarding are two optimizations that reduce the overhead of a fault-tolerant mobile agent execution.",2001,0, 1819,Experimental evaluation of the fail-silent behavior of a distributed real-time run-time support built from COTS components,"Mainly for economic and maintainability reasons, more and more dependable real-time systems are being built from commercial off-the-shelf (COTS) components. To build these systems, a commonly-used assumption is that computers are fail-silent. The goal of our work is to determine the coverage of the fail-silence assumption for computers executing a real-time run-time support system built exclusively from COTS components, in the presence of physical faults. The evaluation of fail-silence has been performed on the HADES (Highly Available Distributed Embedded System) run-time support system, aimed at executing distributed hard real-time dependable applications. The main result of the evaluation is a fail-silence coverage of 99.1%. Moreover, we evaluate the error detection mechanisms embedded in HADES according to a rich set of metrics which provides guidance for choosing the set of error detection mechanisms that is best suited to the system needs (e.g. find the best trade-off between fail-silence coverage and overhead caused by error detection).",2001,0, 1820,Performance validation of fault-tolerance software: a compositional approach,"Discusses the lessons learned in the modeling of a software fault tolerance solution built by a consortium of universities and industrial companies for an Esprit project called TIRAN (TaIlorable fault-toleRANce framework for embedded applications). The requirements of high flexibility and modularity for the software have led to a modeling approach that is strongly based on compositionality. Since the interest was in assessing both the correctness and the performance of the proposed solution, we have cared for these two aspects at the same time, and, by means of an example, we show how this was a central aspect of our analysis.",2001,0, 1821,The use of a multi-agent paradigm in electrical plant condition monitoring,"Electrical utilities need to operate their equipment closer to their design limits and need to extend their operating life through automatic condition monitoring systems. This paper introduces a multi agent paradigm for data interpretation in electrical plant monitoring. Data interpretation is of significant importance to infer the state of the equipment by converting the condition monitoring data into appropriate information. The vast amount of data and the complex processes behind on-line fault detection indicate the need for an automated solution. The classification of partial discharge signatures from gas insulated substations, as a result of applying different artificial intelligence techniques within a multi agent system, is described in this paper. A multi agent system that views the problem as an interaction of simple independent software entities, for effective use of the available data, is presented. The overall solution is derived from the combination of solutions provided by the components of the multi-agent system.
This multi-agent system can employ various intelligent system techniques and has been implemented using the ZEUS Agent Building Toolkit",2001,0, 1822,"Criteria for developing clinical decision support systems","The use of archived information and knowledge derived from data-driven system, both at the point of care and retrospectively, is critical to improving the balance between healthcare expenditure and healthcare quality. Data-driven clinical decision support, augmented by performance feedback and education, is a logical addition to consensus- and evidence-based approaches on the path to widespread use of intelligent search agents, expert recognition and warning systems. We believe that these initial applications should (a) capture and archive, with identifiable end-points, complete episode-of-care information for high-complexity, high-cost illnesses, and (b) utilize large numbers of these cases to drive risk-adjusted ""individualized"" probabilities for patients requiring care at the time of intervention",2001,0, 1823,"Model-aided diagnosis: an inexpensive combination of model-based and case-based condition assessment","Online condition monitoring and diagnosis are being utilized more and more for increasing the reliability and availability of technical systems and to reduce their maintenance costs. Today's model-based diagnosis (MBD) tools are able to detect and identify incipient and sudden faults very reliably. For application to cost-sensitive equipment, such as high-voltage circuit breakers (HVCBs), however, the presently available MBD systems are not feasible for economic reasons. In this paper, a novel combination of the model-based with the case-based approach to condition diagnosis is presented, which can be implemented on a low-cost computer and which offers satisfactory performance. The technique is divided into two parts: (1) preparation and (2) diagnosis. The diagnosis part can be executed on an inexpensive low-performance computer. Successful tests on real HVCBs confirm the usefulness of this new approach to condition diagnosis",2001,0, 1824,Power quality assessment from a wave-power station,This paper describes the development and testing of a software based flickermeter used in order to assess the supply quality from the LIMPET wave-power station on Islay. It describes the phenomenon of voltage flicker and the effect that a wave-power station has on this quantity. The paper also explains techniques developed in order to improve flickermeter performance when used with pre-recorded data. It also shows that the standard flickermeter sample frequency may be reduced for wave-station applications. Finally the paper presents flicker results from preliminary data collected from the LIMPET station and shows that the device is operating well within acceptable limits,2001,0, 1825,Discrimination of software quality in a biomedical data analysis system,"Object-oriented visualization-based software systems for biomedical data analysis must deal with complex and voluminous datasets within a flexible yet intuitive graphical user interface. In a research environment, the development of such systems are difficult to manage due to rapidly changing requirements, incorporation of newly developed algorithms, and the needs imposed by a diverse user base. One issue that research supervisors must contend with is an assessment of the quality of the system's software objects with respect to their extensibility, reusability, clarity, and efficiency.
Objects from a biomedical data analysis system were independently analyzed by two software architects and ranked according to their quality. Quantitative software features were also compiled at varying levels of granularity. The discriminatory power of these software metrics is discussed and their effectiveness in assessing and predicting software object quality is described",2001,0, 1826,Grid information services for distributed resource sharing,"Grid technologies enable large-scale sharing of resources within formal or informal consortia of individuals and/or institutions: what are sometimes called virtual organizations. In these settings, the discovery, characterization, and monitoring of resources, services, and computations are challenging problems due to the considerable diversity; large numbers, dynamic behavior, and geographical distribution of the entities in which a user might be interested. Consequently, information services are a vital part of any Grid software infrastructure, providing fundamental mechanisms for discovery and monitoring, and hence for planning and adapting application behavior. We present an information services architecture that addresses performance, security, scalability, and robustness requirements. Our architecture defines simple low-level enquiry and registration protocols that make it easy to incorporate individual entities into various information structures, such as aggregate directories that support a variety of different query languages and discovery strategies. These protocols can also be combined with other Grid protocols to construct additional higher-level services and capabilities such as brokering, monitoring, fault detection, and troubleshooting. Our architecture has been implemented as MDS-2, which forms part of the Globus Grid toolkit and has been widely deployed and applied",2001,0, 1827,Benchmarking of advanced technologies for process control: an industrial perspective,"Global competition is forcing industrial plants to continuously improve product quality and reduce costs in order to be more profitable. This scenario doesn't allow producing in less than excellent performance. Combining higher, more consistent product quality with a larger production volume and an increased flexibility however places special stresses on plant assets and equipment jeopardizing safety and environmental compliance. Consequently, process industries are called to operate on a very narrow, constrained path that needs to be continuously monitored, assessed and adjusted. The talk aims at reviewing the main points of the problem and at discussing how benchmarking practices should include and take advantage of advanced automation technologies. It will briefly consider basics for project justification and what an automation vendor may (really should) do in order to help process industries customer to select the best solutions for their plants. A particular emphasis will be placed on sometimes neglected aspects, such as hardware-software integration issues; life-cycle cost-benefit analysis; and operator acceptance and living with the APC tools and strategies. Finally some basic suggestions taken out of field experience will be given",2001,0, 1828,An FDI approach for sampled-data systems,"In this paper, problems related to fault detection and isolation (FDI) in sampled-data (SD) systems are studied. A tool to analyze SD systems from the viewpoint of FDI and based on it a direct design approach of FDI system are developed. 
Key of these studies is the introduction of an operator which is used to describe the sampling effect. With the aid of the developed tool, we also study the perfect decoupling problem, the influence of sampling on the performance of FDI system and the relationship between the direct and indirect design approaches. The application of the approach proposed is finally illustrated and compared with the indirect design approaches through examples",2001,0, 1829,The Quadrics network (QsNet): high-performance clustering technology,"The Quadrics interconnection network (QsNet) contributes two novel innovations to the field of high-performance interconnects: (1) integration of the virtual-address spaces of individual nodes into a single, global, virtual-address space and (2) network fault tolerance via link-level and end-to-end protocols that can detect faults and automatically re-transmit packets. QsNet achieves these feats by extending the native operating system in the nodes with a network operating system and specialized hardware support in the network interface. As these and other important features of QsNet can be found in the InfiniBand specification, QsNet can be viewed as a precursor to InfiniBand. In this paper, we present an initial performance evaluation of QsNet. We first describe the main hardware and software features of QsNet, followed by the results of benchmarks that we ran on our experimental, Intel-based, Linux cluster built around QsNet. Our initial analysis indicates that QsNet performs remarkably well, e.g., user-level latency under 2 μs and bandwidth over 300 MB/s",2001,0, 1830,Consistency management of product line requirements,"Contemporary software engineering utilizes product lines for reducing time to market and development cost of a single product variant, for improving quality of the products, and for creating better estimations of the development process. Most product line development processes rely on performing a domain analysis to find out commonalities among proposed family members and to estimate how they will vary. On the other hand, most requirements engineering methods focus on the specification of a single system. Despite active research efforts to close this gap, there is still no effective method that allows product specifications in arbitrary levels of detail for a hierarchical product family. In particular, it is not possible to combine different specification mechanisms to produce a complete family specification. The authors approach these problems by presenting a method that allows system specifications both in the product line variant as well as the product family level. This exposes many problems in managing consistency between different methods to specify families of systems. To achieve this, our method offers derivation of consistency management support between different specification levels and among family variants",2001,0, 1831,Re-engineering fault tolerance requirements: a case study in specifying fault tolerant flight control systems,We present a formal specification of fault tolerance requirements for an analytical redundancy based fault tolerant flight control system. The development of the specification is driven by the performance and fault tolerance requirements contained in the US Air Force military specification MIL-F-9490D. The design constraints imposed to the system from adopting the analytical redundancy approach are captured within the specification.
We draw some preliminary conclusions from our study,2001,0, 1832,Plain end-to-end measurement for local area network voice transmission feasibility,"It is well known that company Intranets are growing into ubiquitous communications media for everything. As a consequence, network traffic is notoriously dynamic, and unpredictable. Unfortunately, local area networks were designed for scalability and robustness, not for sophisticated traffic monitoring. This paper introduces a performance measurement method based on widely used IP protocol elements, which allows measurement of network performance criteria to predict the voice transmission feasibility of a given local area network. The measurement does neither depend on special VoIP equipment, nor does it need network monitoring hardware. Rather it uses special payload samples to detect unloaded network conditions to receive reference values. These samples are followed by typical VoIP application payload to obtain real-world measurement conditions. The validation of our method was done within a local area network and showed convincing results",2001,0, 1833,Tools and procedures for successful TPS management,"The enormous costs associated with Test Program Sets are well known throughout the industry. Therefore, the successful management of a TPS development program is of crucial importance. This paper will attempt to encompass the various management issues involved in the TPS development process. To begin with, a careful review of the TPS must be effected during the bid process. Task evaluation tools are a distinct aid in management's ability to scope and estimate a TPS development task. Old fashioned engineering experience is used to check the results provided by these automatic tools. Once a job is won, rigid management controls must be kept in place to track the percentage completion against the formal plan (e.g. how often have we seen 90% of the available funds expended halfway through the technical effort). Software evaluation tools are a significant aid in keeping a development program on track. Management must try to balance its TPS staff between experienced engineers, middle range people and newer hires. The transfer of knowledge with such a mix is indeed beneficial. Management must look to create a broad experience base on different level UUT (i.e. systems, subsystems, modules, boards, etc.) spanning the overall circuitry spectrum (i.e. digital, analog, RF, etc.). Providing their designers with truly modern ATE software is a must. The operating system and test language must contribute features including incremental compilation, advanced edit/debug aids, links to major simulators and ATPGs, advanced fault isolation techniques, automated instrument programming, extensive use of graphics and more. Newer aids including analog simulators must also be made available by the TPS management team. Today another major factor to be addressed by management is the need to deal with TPS conversion from legacy ATE systems. The large investment in TPSs must be converted to modern ATE systems as the older ATE machines become totally unsupportable. These and other aspects of successful TPS management are the focus of this paper",2001,0, 1834,"Integrated reliability analysis, diagnostics and prognostics for critical power systems","Critical power systems, such as data centers and communication switching facilities, have very high availability requirements (5 min./year downtime).
A data center that consumes electricity at a rate of 3 MW can have a downtime cost of $300,000 an hour. Even a momentary interruption of two seconds may cause a loss of two hours of data processing. Consequently, power quality has emerged as an issue of significant importance in the operation of these systems. In this paper, we address three issues of power quality: real-time detection and diagnosis of power quality problems, reliability and availability evaluation, and capacity margin analysis. The objective of real-time detection and diagnosis is to provide a seamless on-line monitoring and off-line maintenance process. The techniques are being applied to monitor the power quality of a few facilities at the University of Connecticut. Reliability analysis, based on a computationally efficient sum of disjoint products, enables analysts to decide on the optimum levels of redundancy, aids operators in prioritizing the maintenance options within a given budget, and in monitoring the system for capacity margin. Capacity margin analysis helps operators to plan for additional loads and to schedule repair/replacement activities. The resulting analytical and software tool is demonstrated on a sample data center",2001,0, 1835,Fault prognosis using dynamic wavelet neural networks,"Prognostic algorithms for condition based maintenance of critical machine components are presenting major challenges to software designers and control engineers. Predicting time-to-failure accurately and reliably is absolutely essential if such maintenance practices are to find their way into the industrial floor. Moreover, means are required to assess the performance and effectiveness of these algorithms. This paper introduces a prognostic framework based upon concepts from dynamic wavelet neural networks and virtual sensors and demonstrates its feasibility via a bearing failure example. Statistical methods to assess the performance of prognostic routines are suggested that are intended to assist the user in comparing candidate algorithms. The prognostic and assessment methodology proposed here may be combined with diagnostic and maintenance scheduling methods and implemented on a conventional computing platform to serve the needs of industrial and other critical processes",2001,0, 1836,Reliability modeling incorporating error processes for Internet-distributed software,"The paper proposes several improvements to conventional software reliability growth models (SRGMs) to describe actual software development processes by eliminating an unrealistic assumption that detected errors are immediately corrected. A key part of the proposed models is the ""delay-effect factor"", which measures the expected time lag in correcting the detected faults during software development. To establish the proposed model, we first determine the delay-effect factor to be included In the actual correction process. For the conventional SRGMs, the delay-effect factor is basically non-decreasing. This means that the delayed effect becomes more significant as time moves forward. Since this phenomenon may not be reasonable for some applications, we adopt a bell-shaped curve to reflect the human learning process in our proposed model. 
Experiments on a real data set for Internet-distributed software have been performed, and the results show that the proposed new model gives better performance in estimating the number of initial faults than previous approaches",2001,0, 1837,A multi-sensor based temperature measuring system with self-diagnosis,"A new multi-sensor based temperature measuring system with self-diagnosis is developed to replace a conventional system that uses only a single sensor. Controlled by a 16-bit microprocessor, each sensor output from the sensor array is compared with a randomly selected quantised reference voltage at a voltage comparator and the result is a binary ""one"" or ""zero"". The number of ""ones"" and ""zeroes"" is counted and the temperature can be estimated using statistical estimation and successive approximation. A software diagnostic algorithm was developed to detect and isolate the faulty sensors that may be present in the sensor array and to recalibrate the system. Experimental results show that temperature measurements obtained are accurate with acceptable variances. With the self-diagnostic algorithm, the accuracy of the system in the presence of faulty sensors is significantly improved and a more robust measuring system is produced",2001,0, 1838,Evaluating capture-recapture models with two inspectors,"Capture-recapture (CR) models have been proposed as an objective method for controlling software inspections. CR models were originally developed to estimate the size of animal populations. In software, they have been used to estimate the number of defects in an inspected artifact. This estimate can be another source of information for deciding whether the artifact requires a reinspection to ensure that a minimal inspection effectiveness level has been attained. Little evaluative research has been performed thus far on the utility of CR models for inspections with two inspectors. We report on an extensive Monte Carlo simulation that evaluated capture-recapture models suitable for two inspectors assuming a code inspections context. We evaluate the relative error of the CR estimates as well as the accuracy of the reinspection decision made using the CR model. Our results indicate that the most appropriate capture-recapture model for two inspectors is an estimator that allows for inspectors with different capabilities. This model always produces an estimate (i.e., does not fail), has a predictable behavior (i.e., works well when its assumptions are met), will have a relatively high decision accuracy, and will perform better than the default decision of no reinspections. Furthermore, we identify the conditions under which this estimator will perform best",2001,0, 1839,Management indicators model to evaluate performance of IT organizations,"There is no arguing nowadays about the importance of IT for the growth and competitive edge of organizations. But if technology is to be a true asset for a company, it must be aligned with the business strategic goals by means of a formalized system of strategic planning, maturity of development process, technology management and corporative quality vision. The accrued benefits can be manifold: the development of training and learning environments for an effective improvement of procedures and product quality, efficient use of assets and resources, opportunity for innovation and technologic advancement, an approach to problem solving in areas critical to the organization among others.
Many companies make use of these practices, but find it hard to evaluate how effective they are and what is the final quality of the achieved results at diverse customer levels both in project vision and the continuity of service. One cause of these drawbacks is failure to apply measurement models which provide objective pointers to assess how effective the IT strategies used have actually been considering the strategic business goals. To incorporate models of measures is no easy task because it entails working on several aspects: technical, processes, products and the peculiar culture of each organization. This paper presents a model of indicators to evaluate IT performance using three well known methods: balanced scorecard, GQM and PSM",2001,0, 1840,Systematic defects in software cost-estimation models harm management,"As software development becomes an increasingly important enterprise, managerial requirements for cost estimation increase, yet developers continue a rather long history of failing to cost software systems development adequately. Here, it is contended that poor results are due, in part, to some traditionally recognized problems and, in part, to a defect in the models themselves. Identifying the defect of software cost models is the purpose of this paper",2001,0, 1841,Documentation as a software process capability indicator,"Summary form only given. In a small software organization, a close and intense relationship with its customers is often a substitute for documentation along the software processes. Nevertheless, according to the quality standards, the inadequacy of the required documentation will retain the assessed capability of software processes on the lowest level. This article describes the interconnections between software process documentation and the maturity of the organization. The data is collected from the SPICE assessment results of small and medium sized software organizations in Finland. The aim of the article is to visualise the necessity of documentation throughout the software engineering processes in order to achieve a higher capability level. In addition we point out that processes with insufficient documentation decrease the chance to improve the quality of the processes, as it is impossible to track and analyse them",2001,0, 1842,Object oriented metrics useful in the prediction of class testing complexity,"Adequate metrics of object-oriented software enable one to determine the complexity of a system and estimate the effort needed for testing already in the early stage of system development. The metrics values enable to locate parts of the design that could be error prone. Changes in these parts could significantly improve the quality of the final product and decrease testing complexity. Unfortunately only few of the existing Computer Aided Software Engineering tools (CASE) calculate object metrics. In this paper methods allowing proper calculation of class metrics for some commercial CASE tool have been developed. New metric, calculable on the basis of information kept in CASE repository and useful in the estimation of testing effort have also been proposed. The evaluation of all discussed metrics does not depend on object design method and on the implementation language",2001,0, 1843,Using reading techniques to focus inspection performance,"Software inspection is a quality assurance method to detect defects early during the software development process.
For inspection planning there are defect detection techniques, so-called reading techniques, which let the inspection planner focus the effectiveness of individual inspectors on specific sets of defects. For realistic planning it is important to use empirically evaluated defect detection techniques. We report on the replication of a large-scale experiment in an academic environment. The experiment evaluated the effectiveness of defect detection for inspectors who use a checklist or focused scenarios on individual and team level. A main finding of the experiments is that the teams were effective to find defects: In both experiments the inspection teams found on average more than 70% of the defects in the product. The checklist consistently was overall somewhat more effective on individual level, while the scenarios traded overall defect detection effectiveness for much better effectiveness regarding their target focus, in our case specific parts of the documents. Another main result of the study is that scenario-based reading techniques can be used in inspection planning to focus individual performance without significant loss of effectiveness on team level",2001,0, 1844,"A Petri net based method for resource estimation: an approach considering data-dependency, causal and temporal precedences",This work presents a combined reachability-structural methodology for computing the number of functional units in hardware/software co-design context considering timing constraints. The proposed method extends some previous works in the sense that data-dependency has been captured and considered in the functional unit estimation methodology. The proposed hardware/software co-design framework uses the Petri net as common formalism for performing quantitative and qualitative analysis,2001,0, 1845,Mobile database procedures in MDBAS,"MDBAS is a prototype of a multidatabase management system based on mobile agents. The system integrates a set of autonomous databases distributed over a network, enables users to create a global database scheme, and manages transparent distributed execution of user requests and procedures including distributed transactions. The paper highlights the issues related to mobile database procedures, especially the MDBAS execution strategy. In order to adequately assess MDBAS's qualities and bottlenecks, we have carried out complex performance evaluation with real databases distributed in a real Internet. The evaluation included a comparison to a commercial database with distributed database capabilities. The most interesting results are presented and commented",2001,0, 1846,Avoiding faulty privileges in fast stabilizing rings,"Most conventional studies on self-stabilization have been indifferent to the safety under convergence. This paper investigates how mutual exclusion property can be achieved in self-stabilizing rings even for illegitimate configurations. We present a new method which uses a state with a large state space to detect faults. If some faults are detected, every process is reset and not given a privilege. Even if the reset values are different between processes, our protocol mimics the behavior of Dijkstra's K-state protocol (1974). Then we have a fast and safe mutual exclusion protocol. 
A simulation study also shows its performance",2001,0, 1847,Probabilistic model for segmentation based word recognition with lexicon,"We describe the construction of a model for off-line word recognizers based on over-segmentation of the input image and recognition of segment combinations as characters in a given lexicon word. One such recognizer, the Word Model Recognizer (WMR), is used extensively. Based on the proposed model it was possible to improve the performance of WMR",2001,0, 1848,Coordinating the simultaneous upgrade of multiple CORBA application objects,"The Eternal system provides CORBA applications with robust fault tolerance and the ability to be upgraded without a service interruption. The Eternal Evolution Manager handles both interface-preserving upgrades and interface-changing upgrades. The syntax and semantics of the application objects' IDL interfaces are unchanged by an interface-preserving upgrade. Interface-changing upgrades allow modification to one or more application object's IDL interfaces. Because modification to an object's IDL interface can affect clients of that object, an interface-changing upgrade requires coordinating the upgrade of multiple application objects at the same time",2001,0, 1849,Role of 3-D graphics in NDT data processing,"Visualisation in 3-D can significantly improve the interpretation and understanding of imaged NDT data sets, The advantages of using 3-D graphics to assist in the interpretation of ultrasonic data are discussed in the context of an advanced software environment for the reconstruction, visualisation and analysis of 3-D images within a component CAD model. The software combines the analysis features of established 2-D packages with facilities for real-time data rotation, interactive orthogonal/oblique slicing and `true' image reconstruction, where scanning-surface shape and reflection from component boundaries are accounted for through interaction with the full 3-D model of the component. A number of novel facilities exploit the graphics capability of the system. These include the overlay of 3-D images with individual control of image transparency; a floating tooltip window for interrogation of data point coordinates and amplitude; image annotation tools, including 3-D distance measurement; and automated defect sizing based on `6 dB drop' and `maximum amplitude' methods. A graphical user interface has also been designed for a well established flaw response model, which allows the user to easily specify the flaw size, shape, orientation and location; probe parameters and scan pattern on the component. The output is presented as a simulated ultrasound image",2001,0, 1850,Continual on-line training of neural networks with applications to electric machine fault diagnostics,"An online training algorithm is proposed for neural network (NN) based electric machine fault detection schemes. The algorithm obviates the need for large data memory and long training time, a limitation of most AI-based diagnostic methods for commercial applications, and in addition, does not require training prior to commissioning. 
Experimental results are provided for an induction machine stator winding turn-fault detection scheme that uses a feedforward NN to compensate for machine and instrumentation nonidealities, to illustrate the feasibility of the new training algorithm for real-time implementation",2001,0, 1851,Rational interpolation examples in performance analysis,"The rational interpolation approach has been applied to performance analysis of computer systems previously. In this paper, we demonstrate the effectiveness of the rational interpolation technique in the analysis of randomized algorithms and the fault probability calculation for some real-time systems",2001,0, 1852,On interference cancellation and iterative techniques,"Recent research activities in the area of mobile radio communications have moved to third generation (3G) cellular systems to achieve higher quality with variable transmission rate of multimedia information. In this paper, an overview is presented of various interference cancellation and iterative detection techniques that are believed to be suitable for 3G wireless communications systems. Key concepts are space-time processing and space-division multiple access (or SDMA) techniques. SDMA techniques are possible with software antennas. Furthermore, to reduce receiver implementation complexity, iterative detection techniques are considered. A particularly attractive method uses tentative hard decisions, made on the received positions with the highest reliability, according to some criterion, and can potentially yield an important reduction in the computational requirements of an iterative receiver, with minimum penalty in error performance. A study of the tradeoffs between complexity and performance loss of iterative multiuser detection techniques is a good research topic",2001,0, 1853,A short circuit current study for the power supply system of Taiwan railway,"The western Taiwan railway transportation system consists mainly of a mountain route and ocean route. The Taiwan Railway Administration (TRA) has conducted a series of experiments on the ocean route in recent years to identify the possible causes of unknown events which cause the trolley contact wires to melt down frequently. The conducted tests include the short circuit fault test within the power supply zone of the Ho Long substation (Zhu Nan to Tong Xiao) that had the highest probability for the melt down events. Those test results based on the actual measured maximum short circuit current provide a valuable reference for TRA when comparing against the said events. The Le Blanc transformer is the main transformer of the Taiwan railway electrification system. The Le Blanc transformer mainly transforms the Taiwan Power Company (TPC) generated three phase alternating power supply system (69 kV, 60 Hz) into a two single-phase alternating power distribution system (M phase and T phase) (26 kV, 60 Hz) needed for the trolley traction. As a unique winding connection transformer, the conventional software for fault analysis will not be able to simulate its internal current and phase difference between each phase currents. Therefore, besides extracts of the short circuit test results, this work presents a EMTP model based on a Taiwan Railway substation equivalent circuit model with the Le Blanc transformer. The proposed circuit model can simulate the same short circuit test to verify the actual fault current and accuracy of the equivalent circuit model. 
Moreover, the maximum short circuit current is further evaluated with reference to the proposed equivalent circuit. Preliminary inspection of trolley contact wire reveals the possible causes of melt down events based on the simulation results",2001,0,1730 1854,Benchmark the software based MPEG-4 video codec,"Software-based implementations of H.263 and MPEG-2 video standards are well documented, recently reporting faster than or close to real-time performance. Since the complexity of MPEG-4 is higher than its predecessor standards, real time video encoding and decoding can exhaust computational resource without achieving real-time speed. In this paper, we report a software-based real-time MPEG-4 video codec (encoder and decoder) on a single-processor PC, with no frame-skip, or profile simplifying tricks, or quality loss compromise. The proposed codec is an embodiment of a number of novel algorithms. Specifically, we have designed a fast binary shape coding algorithm, a fast motion estimation algorithm, and a technique for detection of all-zero quantized blocks. To enhance the computation speed, we harness Intel's SIMD (Single Instruction Stream, Multiple Data Stream) instructions to implement these algorithms. On the 800 MHz Intel Pentium III, our decoder can play real-time CIF video with less than 20% system resource consumption; and our encoder realizes up to 70 frames per second for CIF resolution video, with the similar picture quality as the reference software",2001,0, 1855,N-dimensional zonal algorithms. The future of block based motion estimation?,"The popularity of zonal based algorithms for block based motion estimation has been increasing due to their superior performance in both terms of reduced complexity and superior quality versus other preexisting algorithms. In our previous work we mainly focused on generalizing the different parameters used in these algorithms and finding the possibly most efficient implementation. In this paper a further generalization of these algorithms is presented, where instead we mainly consider the way zones can be designed and what should be the ultimate goal of such an algorithm. As a result we present a framework of algorithms which can have applications not only in video coding, but also in other video signal processing areas, such as computer vision, video analysis, salient stills etc. We do so by initially considering the dimensionality of video data and how it can be most efficiently analyzed and exploited in the context of zonal algorithms. A formulization of these algorithms is then presented, according to which different implementations for different applications can be selected. Simulation results, for the simple 3-D case using the predictive diamond search (PDS) algorithm, a low complexity zonal algorithm, demonstrate the efficacy of the proposed techniques while still having low complexity. Higher order implementations using more dimensions and more efficient zonal algorithms can also be considered",2001,0, 1856,A memory-based reasoning approach for assessing software quality,"Several methods have been explored for assuring the reliability of mission critical systems (MCS), but no single method has proved to be completely effective. This paper presents an approach for quantifying the confidence in the probability that a program is free of specific classes of defects. The method uses memory-based reasoning techniques to admit a variety of data from a variety of projects for the purpose of assessing new systems. 
Once a sufficient amount of information has been collected, the statistical results can be applied to programs that are not in the analysis set to predict their reliabilities and guide the testing process. The approach is applied to the analysis of Y2K defects based on defect data generated using fault-injection simulation",2001,0, 1857,Information theoretic metrics for software architectures,"Because it codifies best practices, and because it supports various forms of software reuse, the discipline of software architecture is emerging as an important branch of software engineering research and practice. Because architectural-level decisions are prone to have a profound impact on finished software products, it is important to apprehend their quality attributes and to quantify them (as much as possible). In this paper, we discuss an information-theoretic approach to the definition and validation of architectural metrics, and illustrate our approach on a sample example",2001,0, 1858,"The Exu approach to safe, transparent and lightweight interoperability","Exu is a new approach to automated support for safe, transparent and lightweight interoperability in multilanguage software systems. The approach is safe because it enforces appropriate type compatibility across language boundaries. It is transparent since it shields software developers from the details inherent in low-level language-based interoperability mechanisms. It is lightweight for developers because it eliminates tedious and error-prone coding (e.g., JNI) and lightweight at run-time since it does not unnecessarily incur the performance overhead of distributed, IDL-based approaches. The Exu approach exploits and extends the object-oriented concept of meta-object, encapsulating interoperability implementation in meta-classes so that developers can produce interoperating code by simply using meta-inheritance. An example application of Exu to the development of Java/C++ (i.e., multilanguage) programs illustrates the safety and transparency advantages of the approach. Comparing the performance of the Java/C++ programs produced by Exu to the same set of programs developed using IDL-based approaches provides preliminary evidence of the performance advantages of Exu",2001,0, 1859,Scenario-based functional regression testing,"Regression testing has been a popular quality-assurance technique. Most regression testing techniques are based on code or software design. This paper proposes a scenario-based functional regression testing, which is based on end-to-end (E2E) integration test scenarios. The test scenarios are first represented in a template model that embodies both test dependency and traceability. By using test dependency information, one can obtain a test slicing algorithm to detect the scenarios that are affected and thus they are candidates for regression testing. By using traceability information, one can find affected components and their associated test scenarios and test cases for regression testing. With the same dependency and traceability information one can use the ripple effect analysis to identify all affected, including directly or indirectly, scenarios and thus the set of test cases can be selected for regression testing. This paper also provides several alternative test-case selection approaches and a hybrid approach to meet various requirements. 
A web-based tool has been developed to support these regression testing tasks",2001,0, 1860,Complexity measurements of the inter-domain management system design,"Current use of software metrics in the industry focuses on the cost and effort estimation, while some research was carried out in the direction of their use as fault indicators. Empirical studies in software measurement are scarce, especially in the realm of object-oriented metrics, while there is no record of management system assessment using these metrics. We discuss an approach to using established object-oriented software metrics as complexity/coupling and thus risk indicators early in the system development lifecycle. Further, we subject a medium-scale inter-domain network and service management system, developed in UML, to the metric assessment, and present an analysis of these measurements. This system was developed in a European Commission-sponsored ACTS research project - TRUMPET. Results indicate that the highest level of complexity, and thus also risk, is exhibited at major interconnection points between autonomous management domains. Moreover, the results imply a strong ordinal correlation between the metrics.",2001,0, 1861,The effect of timeout prediction and selection on wide area collective operations,"Failure identification is a fundamental operation concerning exceptional conditions that network programs must be able to perform. In this paper, we explore the use of timeouts to perform failure identification at the application level. We evaluate the use of static timeouts and of dynamic timeouts based on forecasts using the Network Weather Service. For this evaluation, we perform experiments on a wide-area collection of 31 machines distributed in eight institutions. Though the conclusions are limited to the collection of machines used, we observe that a single static timeout is not reasonable, even for a collection of similar machines over time. Dynamic timeouts perform roughly as well as the best static timeouts and, more importantly, they provide a single methodology for timeout determination that should be effective for wide-area applications",2001,0, 1862,Software cost estimation with incomplete data,"The construction of software cost estimation models remains an active topic of research. The basic premise of cost modeling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases and may be detrimental to the accuracy of cost estimation models. We describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modeling. Three techniques are evaluated: listwise deletion, mean imputation, and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, this will not necessarily provide the best performance. 
Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardization",2001,0, 1863,Prioritizing test cases for regression testing,"Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites",2001,0, 1864,"On comparisons of random, partition, and proportional partition testing","Early studies of random versus partition testing used the probability of detecting at least one failure as a measure of test effectiveness and indicated that partition testing is not significantly more effective than random testing. More recent studies have focused on proportional partition testing because a proportional allocation of the test cases (according to the probabilities of the subdomains) can guarantee that partition testing will perform at least as well as random testing. We show that this goal for partition testing is not a worthwhile one. Guaranteeing that partition testing has at least as high a probability of detecting a failure comes at the expense of decreasing its relative advantage over random testing. We then discuss other problems with previous studies and show that failure to include important factors (cost, relative effectiveness) can lead to misleading results",2001,0, 1865,The development of security system and visual service support software for on-line diagnostics,"Hitachi's CD-SEM achieves the highest tool availability in the industry. However, efforts to further our performance are continuously underway. The proposed on-line diagnostics system can allow senior technical staff to monitor and investigate tool status by connecting the equipment supplier and the device manufacturer sites through the Internet. The advanced security system ensures confidentiality by firewalls, digital certification, and advanced encryption algorithms to protect device manufacturer data from unauthorized access. Service support software, called DDS (defective part diagnosis support system), will analyze the status of mechanical, evacuation, and optical systems. 
Its advanced overlay function on a timing chart identifies failed components in the tool and allows on-site or remote personnel to predict potential failures prior to their occurrence. Examples of application shows that the proposed system is expected to reduce repair time, improve availability and lower cost of ownership",2001,0, 1866,Effect of code coverage on software reliability measurement,"Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted from the models. This paper presents a technique intended to solve this problem, using both time and code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used only to consider the effective portion of the test data. Execution time between test cases, which neither increases code coverage nor causes a failure, is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tenths of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied to both the raw data and to the data adjusted using this technique. Results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has potential, not only to achieve more accurate applications of software reliability models, but to reveal effective ways of conducting software testing",2001,0, 1867,A stochastic model of fault introduction and removal during software development,"Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz, as early as possible in the software development life-cycle. The model parameters were derived based on data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults and to calculate the consequent failure intensity value of a small-software developed using a waterfall software development",2001,0, 1868,Future architecture for flight control systems,"The development of fault tolerant embedded control systems, such as flight control systems, FCS, is currently highly specialized and time consuming. 
We introduce a conceptual architecture for the next decade control system where all control and logic is distributed to a number of computer nodes locally linked to actuators and connected via a communication network. In this way we substantially decrease the lifecycle cost of such embedded systems and acquire scalable fault tolerance. Fault tolerance is based on redundancy and in our concept permanent faults are covered by hardware replication and transient faults, fault detection and processing by software techniques. With intelligent nodes and the use of inherent redundancy a robust and simple fault tolerant system is introduced with a minimum of both hardware and bandwidth requirements. The study is based on an FCS for JAS 39 Gripen, a multirole combat aircraft that is statically unstable at subsonic speed",2001,0, 1869,Research of steep-front wave impulse voltage test effectiveness in finding internal fault of composite insulators,"In order to verify the effectiveness of steep-front impulse voltage testing in finding the internal faults of composite insulators, some insulators with faults are modeled which include conductive channel, semi-conductive airy channel and partial little air bubbles that occur separately at different places. A steep-front wave impulse voltage test (steepness of wave front is 1000-4000 kV/μs) is respectively made for these faulty and normal insulators. At the same-time the internal electric field intensity and its distribution in the insulator is calculated by making use of infinite element analysis software in order to test whether breakdown has occurred. The result is consistent with the experimental one. The final result shows that steep-front wave impulse voltage testing plays a very effective part in finding severe faults of the insulator, however, tiny faults are not easy found using this method. The result of this research gives a reference to revise the steep-front wave impulse voltage test standard",2001,0, 1870,PSP-EAT-enhancing a personal software process course,"The main objective of teaching the personal software process (PSP) is to develop in students a professional attitude towards producing software. PSP improves performance in size and effort estimation accuracy, software reusability, product quality while maintaining or increasing overall productivity. This work presents the experience of the first author in teaching PSP at graduate level for three years, and a tool, PSP-EAT, the authors have built to reduce both student and instructor clerical work in learning and teaching PSP. The tool helps also in increasing the students' data collection accuracy and their receptability to the PSP principles",2001,0, 1871,Automated video chain optimization,"Video processing algorithms found in complex video appliances such as television sets and set top boxes exhibit an interdependency that makes it is difficult to predict the picture quality of an end product before it is actually built. This quality is likely to improve when algorithm interaction is explicitly considered. Moreover, video algorithms tend to have many programmable parameters, which are traditionally tuned in manual fashion. Tuning these parameters automatically rather than manually is likely to speed up product development. 
We present a methodology that addresses these issues by means of a genetic algorithm that, driven by a novel objective image quality metric, finds high-quality configurations of the video processing chain of complex video products",2001,0,1872 1872,Automated video chain optimization,"Video processing algorithms found in complex video appliances such as television sets and set top boxes exhibit an interdependency that makes it is difficult to predict the picture quality of an end product before it is actually built. This quality is likely to improve when algorithm interaction is explicitly considered. Moreover, video algorithms tend to have many programmable parameters, which are traditionally tuned in manual fashion. Tuning these parameters automatically rather than manually is likely to speed up product development. We present a methodology that addresses these issues by means of a genetic algorithm that, driven by a novel objective image quality metric, finds high-quality configurations of the video processing chain of complex video products.",2001,0, 1873,Extreme programming from a CMM perspective,"Extreme programming has been advocated recently as an appropriate programming method for the high-speed, volatile world of Internet and Web software development. The author reviews XP from the perspective of the capability maturity model for software, gives overviews of both approaches, and critiques XP from a SW-CMM perspective. He concludes that lightweight methodologies such as XP advocate many good engineering practices and that both perspectives have something to offer the other",2001,0, 1874,Accelerating learning from experience: avoiding defects faster,"All programmers learn from experience. A few are rather fast at it and learn to avoid repeating mistakes after once or twice. Others are slower and repeat mistakes hundreds of times. Most programmers' behavior falls somewhere in between: They reliably learn from their mistakes, but the process is slow and tedious. The probability of making a structurally similar mistake again decreases slightly during each of some dozen repetitions. Because of this a programmer often takes years to learn a certain rule-positive or negative-about his or her behavior. As a result, programmers might turn to the personal software process (PSP) to help decrease mistakes. We show how to accelerate this process of learning from mistakes for an individual programmer, no matter whether learning is currently fast, slow, or very slow, through defect logging and defect data analysis (DLDA) techniques",2001,0, 1875,Fast IP packet classification with configurable processor,"The next generation IP routers/switches need to provide quality of service (QoS) guarantees and differentiated services. These capabilities require a packet to be classified according to multiple fields in order to determine which flow an incoming packet belongs to. We present a way of achieving fast IP packet classification with a configurable processor as a more flexible and future proof alternative to using a hard-wired ASIC. Configurable processors can be tuned by the system designer with new instructions and hardware. To accelerate table lookups and bitmap manipulation, we develop several customized instructions that are specially optimized to yield large performance improvements. It is shown that one packet needs 16 cycles in the case of cache hit and 30 cycles in the worst case of cache miss for multiple field classification (compared to hundreds of cycles for a standard RISC processor). 
Thus, it is demonstrated that by using two 200 MHz processors with a proper configuration, OC48 wire-speed packet classification can be achieved while matching multiple fields with sophisticated rules of ranges and/or prefixes",2001,0, 1876,Deadline based channel scheduling,"The use of deadline based channel scheduling in support of real time delivery of application data units (ADU's) is investigated. Of interest is priority scheduling where a packet with a smaller ratio of delivery deadline over number of hops to destination is given a higher priority. It has been shown that a variant of this scheduling algorithm, based on head-of-the-line priority, is efficient and effective in supporting real time delivery of ADU's. In this variant, packets with a ratio smaller than or equal to a given threshold are sent to the higher priority queue. We first present a technique to select this threshold dynamically. The effectiveness of our technique is evaluated by simulation. We then study the performance of deadline based channel scheduling for large networks, with multiple autonomous systems. For this case, accurate information on number of hops to destination may not be available. A technique to estimate this distance metric is presented. The effectiveness of our algorithm with this estimated distance metric is evaluated. In addition, we study the performance of a multi-service scenario where only a fraction of the routers deploy deadline based channel scheduling",2001,0, 1877,Measuring voice readiness of local area networks,"It is well known that company intranets are growing into ubiquitous communications media for everything. As a consequence, network traffic is notoriously dynamic, and unpredictable. In most scenarios, the data network requires tuning to achieve acceptable quality for voice integration. This paper introduces a performance measurement method based on widely used IP protocol elements, which allows measurement of network performance criteria to predict the voice transmission feasibility of a given local area network. The measurement does neither depend on special VoIP (Voice over IP) equipment, nor does it need network monitoring hardware. Rather it uses special payload samples to detect unloaded network conditions to receive reference values. These samples are followed by a typical VoIP application payload to obtain real-world measurement conditions. We successfully validate our method within a local area network and present all captured values that describe important aspects of voice quality",2001,0, 1878,A validation fault model for timing-induced functional errors,"The violation of timing constraints on signals within a complex system can create timing-induced functional errors which alter the value of output signals. These errors are not detected by traditional functional validation approaches because functional validation does not consider signal timing. Timing-induced functional errors are also not detected by traditional timing analysis approaches because the errors may affect output data values without affecting output signal timing. A timing fault model, the Mis-Timed Event (MTE) fault model, is proposed to model timing-induced functional errors. The MTE fault model formulates timing errors in terms of their effects on the lifespans of the signal values associated with the fault. We use several examples to evaluate the MTE fault model. 
MTE fault coverage results show that it efficiently captures an important class of errors which are not targeted by other metrics",2001,0, 1879,Fault-based side-channel cryptanalysis tolerant Rijndael symmetric block cipher architecture,"Fault-based side channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for the Rijndael symmetric encryption algorithm. These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency. The proposed techniques have been validated on FPGA implementations",2001,0, 1880,Genetic programming model for software quality classification,"We apply genetic programming techniques to build a software quality classification model based on the metrics of software modules. The model we built attempts to distinguish the fault-prone modules from non-fault-prone modules using genetic programming (GP). These GP experiments were conducted with a random subset selection for GP in order to avoid overfitting. We then use the whole fit data set as the validation data set to select the best model. We demonstrate through two case studies that the GP technique can achieve good results. Also, we compared GP modeling with logistic regression modeling to verify the usefulness of GP",2001,0, 1881,The SASHA architecture for network-clustered web servers,"We present the Scalable, Application-Space, Highly-Available (SASHA) architecture for network-clustered web servers that demonstrates high performance and fault tolerance using application-space software and Commercial-Off-The-Shelf (COTS) hardware and operating systems. Our SASHA architecture consists of an application-space dispatcher, which performs OSI layer 4 switching using layer 2 or layer 3 address translation; application-space agents that execute on server nodes to provide the capability for any server node to operate as the dispatcher; a distributed state-reconstruction algorithm; and a token-based communications protocol that supports self-configuring, detecting and adapting to the addition or removal of servers. The SASHA architecture of clustering offers a flexible and cost-effective alternative to kernel-space or hardware-based network-clustered servers with performance comparable to kernel-space implementations",2001,0, 1882,Detecting heap smashing attacks through fault containment wrappers,"Buffer overflow attacks are a major cause of security breaches in modern operating systems. Not only are overflows of buffers on the stack a security threat, overflows of buffers kept on the heap can be too. A malicious user might be able to hijack the control flow of a root-privileged program if the user can initiate an overflow of a buffer on the heap when this overflow overwrites a function pointer stored on the heap. The paper presents a fault-containment wrapper which provides effective and efficient protection against heap buffer overflows caused by C library functions.
The wrapper intercepts every function call to the C library that can write to the heap and performs careful boundary checks before it calls the original function. This method is transparent to existing programs and does not require source code modification or recompilation. Experimental results on Linux machines indicate that the performance overhead is small",2001,0, 1883,Assessing inter-modular error propagation in distributed software,"With the functionality of most embedded systems based on software (SW), interactions amongst SW modules arise, resulting in error propagation across them. During SW development, it would be helpful to have a framework that clearly demonstrates the error propagation and containment capabilities of the different SW components. In this paper, we assess the impact of inter-modular error propagation. Adopting a white-box SW approach, we make the following contributions: (a) we study and characterize the error propagation process and derive a set of metrics that quantitatively represents the inter-modular SW interactions, (b) we use a real embedded target system used in an aircraft arrestment system to perform fault-injection experiments to obtain experimental values for the metrics proposed, (c) we show how the set of metrics can be used to obtain the required analytical framework for error propagation analysis. We find that the derived analytical framework establishes a very close correlation between the analytical and experimental values obtained. The intent is to use this framework to be able to systematically develop SW such that inter-modular error propagation is reduced by design",2001,0, 1884,Why is it so hard to predict software system trustworthiness from software component trustworthiness?,"When software is built from components, nonfunctional properties such as security, reliability, fault-tolerance, performance, availability, safety, etc. are not necessarily composed. The problem stems from our inability to know a priori, for example, that the security of a system composed of two components can be determined from knowledge about the security of each. This is because the security of the composite is based on more than just the security of the individual components. There are numerous reasons for this. The article considers only the factors of component performance and calendar time. It is concluded that no properties are easy to compose and some are much harder than others",2001,0, 1885,A neural network based fault detection and identification scheme for pneumatic process control valves,"This paper outlines a method for detection and identification of actuator faults in a pneumatic process control valve using a neural network. First, the valve signature and dynamic error band tests, used by specialists to determine valve performance parameters, are carried out for a number of faulty operating conditions. A commercially available software package is used to carry out the diagnostic tests, thus eliminating the need for additional instrumentation of the valve. Next, the experimentally determined valve performance parameters are used to train a multilayer feedforward network to successfully detect and identify incorrect supply pressure, actuator vent blockage, and diaphragm leakage faults",2001,0, 1886,State estimation methods applied to transformer monitoring,"State estimation methods are valuable for monitoring complex systems with not directly measurable states. One case in point is the monitoring of power transformers. 
The thermal condition of the transformer is crucial in determining the life expectancy of the transformer, yet the hot spot temperature of the transformer cannot be directly measured. Other events such as inter-coil discharges cannot be directly measured unless they develop into a power fault. This paper describes the application of state estimation methods in a transformer diagnostic system. This system consists of hardware and software that continuously monitor the transformer electro-thermal states. The transformer states are obtained from redundant measurements of transformer terminal voltages, currents, tank temperature, oil temperature and ambient temperature. These data are fitted into the electro-thermal model of the transformer via state estimation methods that provide the full range of the transformer electro-thermal states. The paper describes the electrothermal model, the unique features of the data acquisition system and focuses on the application of the state estimation methods to derive important transformer quantities such as: (a) loss of life; and (b) transformer coil integrity. The system has been implemented on a trial basis on four Entergy System transformers. In the present implementation, the minimum input data requirements are: (1) phase currents and voltages at both ends of the transformer (a total of twelve); (2) top of oil and bottom of oil temperatures; and (3) load tap changer position. Actual data and the performance of the state estimation methods are presented.",2001,0, 1887,"Automated analysis of voltage sags, their causes and impacts","This paper focuses on voltage sags. It introduces an automated approach to the analysis of voltage sags, their causes and impacts. The sag analysis is performed using software tools developed for this purpose. First, the software performs detection, classification and characterization of voltage sags. Next, if the voltage sag is caused by a fault, the software finds the fault location. Finally, the software allows replaying of the sag waveforms for the purpose of evaluating the impacts on equipment operation.",2001,0, 1888,Combined use of intelligent partial discharge analysis in evaluating high voltage dielectric condition,"This paper describes the results of synthesised high voltage impulse tests, conducted on surrogate dielectric samples. The tests, conducted under laboratory conditions, were performed using contoured electrodes submersed under technical grade insulating oil. An escalating level of artificial degradation within surrogate samples was assessed and correlated against the magnitude and frequency of events. Withstand of partial discharge activity up to a point of insulation breakdown was observed using a conventional elliptical display partial discharge detector. Measurements of PD activity were simultaneously captured by a virtual scope relaying data array captures to a desktop computer. The captured data arrays were duly processed by an artificial neural network program, the net result of which indicated harmony between human-guided opinion and the software aptitude. This paper describes work currently being undertaken for the identification and diagnosis of faults in high voltage dielectrics in furthering development of AI techniques",2001,0, 1889,"Design of integrated software for reconfiguration, reliability, and protection system analysis","Interdependencies among software components for distribution network reconfiguration, reliability and protection system analysis are considered.
Software interface specifications are presented. Required functionalities of reconfiguration for restoration are detailed. Two algorithms for reconfiguration for restoration are reviewed and compared. Use of outage analysis data to locate circuit sections in need of reliability improvements and to track predicted improvements in reliability is discussed",2001,0, 1890,Empirical validation of class diagram complexity metrics,"One of the principal objectives of software engineering is to improve the quality of software products. It is widely recognised that the quality assurance of software products must be guaranteed from the early phases of development. As a key artifact produced in the early development of object-oriented (OO) information systems (OOISs), class diagram quality has a great impact on the quality of the software product which is finally delivered. Hence, class diagram quality is a crucial issue that must be evaluated (and improved if necessary) in order to obtain quality OOISs, which is the main concern of present-day software development organisations. After a thorough review of the existing OO measures that are applicable to class diagrams at a high-level design stage, M. Genero et al. (2000) presented a set of metrics for the structural complexity of class diagrams built using the Unified Modelling Language (UML). We focus on class diagram structural complexity, an internal quality attribute which we believe could be closely correlated with one of the most critical external quality attributes, such as class diagram maintainability. Since the main goal of this paper is the empirical validation of those metrics, we present two controlled experiments carried out to corroborate whether those metrics are closely related to class diagram maintainability and thus could be used as early maintainability indicators. Based on data collected in the experiments, we build a prediction model for class diagram maintainability using a method for the induction of fuzzy rules",2001,0, 1891,Empiric validation of the person to role allocation process,"A Capacities-Centered Integral Software Process Model (CCISPM) has been designed, covering and modeling all the relevant elements of the software process. In this paper an empirical validation of one of the processes is presented: the Person to Role Allocation Process. In this process the allocation of persons to fulfill roles is made according to the capacities or behavioral competencies that the persons possess and those required by the roles in the CCISPM. A set of experiments are carried out dealing with the development of the Initiation, Planning and Estimation Process, Domain Study Process, Requirements Analysis Process and Design Process of eight projects. It was proved that the estimated time deviation, as well as the errors found in the technical reviews of requirements specification, were less when the persons fulfilling the roles of planning engineer, domain analyst, requirements specificator and designer were allocated, according to the proposed model, considering the set of critical capacities",2001,0, 1892,A source-to-source compiler for generating dependable software,"Over the last few years, an increasing number of safety-critical tasks have been demanded of computer systems. In particular, safety-critical computer-based applications are hitting market areas where cost is a major issue, and thus solutions are required which conjugate fault tolerance with low costs.
A source-to-source compiler supporting a software-implemented hardware fault tolerance approach is proposed, based on a set of source code transformation rules. The proposed approach hardens a program against transient memory errors by introducing software redundancy: every computation is performed twice and results are compared, and control flow invariants are checked explicitly. By exploiting the tool's capabilities, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the hardened applications. In addition, we analyzed the proposed approach in terms of space and time overheads",2001,0, 1893,Flow analysis to detect blocked statements,"In the context of software quality assessment, the paper proposes two new kinds of data which can be extracted from source code. The first, definitely blocked statements, can never be executed because preceding code prevents the execution of the program. The other data, called possibly blocked statements, may be blocked by blocking code. The paper presents original flow equations to compute definitely and possibly blocked statements in source code. The experimental context is described and results are shown and discussed. Suggestions for further research are also presented",2001,0, 1894,Investigation of the risk to software reliability and maintainability of requirements changes,"In order to continue to make progress in software measurement, as it pertains to reliability and maintainability, we must shift the emphasis from design and code metrics to metrics that characterize the risk of making requirements changes. Although these software attributes can be difficult to deal with due to the fuzzy requirements from which they are derived, the advantage of having early indicators of future software problems outweighs this inconvenience. We developed an approach for identifying requirements change risk factors as predictors of reliability and maintainability problems. Our case example consists of twenty-four Space Shuttle change requests, nineteen risk factors, and the associated failures and software metrics. The approach can be generalized to other domains with numerical results that would vary according to application",2001,0, 1895,Impact analysis of maintenance tasks for a distributed object-oriented system,"The work described in this paper is part of an ongoing project to improve the maintenance process in a Vienna Software-House. A repository has been constructed on the basis of a relational database and populated with metadata on a wide variety of software artifacts at each semantic level of development-concept, code and test. Now the repository is being used to perform impact analysis and cost estimation of change requests prior to implementing them. For this, hypertext techniques and ripple effects are used to identify all interdependencies. A tool has been constructed to navigate through the repository, select the impacted entities and pick up their size, complexity and quality metrics for effort estimation",2001,0, 1896,Using code metrics to predict maintenance of legacy programs: a case study,"The paper presents an empirical study on the correlation of simple code metrics and maintenance necessities. 
The goal of the work is to provide a method for the estimation of maintenance in the initial stages of outsourcing maintenance projects, when the maintenance contract is being prepared and there is very little available information on the software to be maintained. The paper shows several positive results related to the mentioned goal",2001,0, 1897,Modeling clones evolution through time series,"The actual effort to evolve and maintain a software system is likely to vary depending on the amount of clones (i.e., duplicated or slightly different code fragments) present in the system. This paper presents a method for monitoring and predicting clones evolution across subsequent versions of a software system. Clones are first identified using a metric-based approach, and then modeled in terms of time series in order to identify a predictive model. The proposed method has been validated with an experimental activity performed on 27 subsequent versions of mSQL, a medium-size software system written in C. The time span period of the analyzed mSQL releases covers four years, from May 1995 (mSQL 1.0.6) to May 1999 (mSQL 2.0.10). For any given software release, the identified model was able to predict the clone percentage of the subsequent release with an average error below 4%. A higher prediction error was observed only in correspondence with a major system redesign",2001,0, 1898,Dynamic and static views of software evolution,"In addition to managing day-to-day maintenance, information system managers need to be able to predict and plan the longer-term evolution of software systems on an objective, quantified basis. Currently this is a difficult task, since the dynamics of software evolution and the characteristics of evolvable software are not clearly understood. In this paper we present an approach to understanding software evolution. The approach looks at software evolution from two different points of view. The dynamic viewpoint investigates how to model software evolution trends and the static viewpoint studies the characteristics of software artefacts to see what makes software systems more evolvable. The former will help engineers to foresee the actions to be taken in the evolution process, while the latter provides an objective, quantified basis to evaluate the software with respect to its ability to evolve and will help to produce more evolvable software systems",2001,0, 1899,"Managing the maintenance of ported, outsourced, and legacy software via orthogonal defect classification","From the perspective of maintenance, software systems that include COTS software, legacy, ported or outsourced code pose a major challenge. The dynamics of enhancing or adapting a product to address evolving customer usage and the inadequate documentation of these changes over a period of time (and several generations) are just two of the factors which may have a debilitating effect on the maintenance effort. While many approaches and solutions have been offered to address the underlying problems, few offer methods which directly affect a team's ability to quickly identify and prioritize actions targeting the product which is already in front of them. The paper describes a method to analyze the information contained in the form of defect data and arrive at technical actions to address explicit product and process weaknesses which can be feasibly addressed in the current effort.
The defects are classified using Orthogonal Defect Classification (ODC) and actual case studies are used to illustrate the key points",2001,0, 1900,Real-time suppression of stochastic pulse shaped noise for on-site PD measurements,Partial Discharge (PD) diagnosis and monitoring systems are state of the art for performing quality assurance or fault identification of high voltage components. However their on-site application is a rather difficult task because PD information is widely superimposed with electromagnetic disturbances. These disturbances affect very negatively all known PD diagnosis systems. Especially the detection and suppression of stochastic pulse shaped noise is a major problem. To determine such noise a fast machine intelligent signal recognition system (NeuroTEK II) was developed. This system is able to distinguish between PD pulses and noise up to a signal to noise ratio of 1:10. Therefore a measuring system acquires the input data in the VHF range (20..100MHz). The discrimination of the pulses is performed in real time in time domain using fast neural network hardware. With that signal recognition system a noise suppressed Phase Resolved Pulse Sequence (PRPS) data set in the CIGRE data format is generated that can be the input source of the most modern PD evaluation software,2001,0, 1901,A fault diagnosis system for heat pumps,"During the operation of heat pumps, faults like heat exchanger fouling, component failure, or refrigerant leakage reduce the system performance. In order to recognize these faults early, a fault diagnosis system has been developed and verified on a test bench. The parameters of a heat pump model are identified sequentially and classified during operation. For this classification, several `hard' and `soft' clustering methods have been investigated, while fuzzy inference systems or neural networks are created automatically by newly developed software. Choosing a simple black-box model structure, the number of sensors can be minimized, whereas a more advanced grey-box model yields better classification results",2001,0, 1902,Dependability analysis of fault-tolerant multiprocessor systems by probabilistic simulation,"The objective of this research is to develop a new approach for evaluating the dependability of fault-tolerant computer systems. Dependability has traditionally been evaluated through combinatorial and Markov modelling. These analytical techniques have several limitations, which can restrict their applicability. Simulation avoids many of the limitations, allowing for more precise representation of system attributes than feasible with analytical modelling. However, the computational demands of simulating a system in detail, at a low abstraction level, currently prohibit evaluation of high-level dependability metrics such as reliability and availability. The new approach abstracts a system at the architectural level, and employs life testing through simulated fault-injection to accurately and efficiently measure dependability. The simulation models needed to implement this approach are derived, in part, from the published results of computer performance studies and low-level fault-injection experiments. The developed probabilistic models of processor, memory and fault-tolerant mechanisms take such properties of real systems, as error propagation, different modes of failures, event dependency and concurrency. They have been integrated with a workload model and statistical analysis module into a generalised software tool. 
The effectiveness of such an approach was demonstrated through the analysis of several multiprocessor architectures",2001,0, 1903,"The application of remote sensing, geographic information systems, and Global Positioning System technology to improve water quality in northern Alabama","Recently, the water quality status in northern Alabama has been declining due to urban and agricultural growth. Throughout the years, the application of remote sensing and geographic information system technology has undergone numerous modifications and revisions to enhance their ability to control, reduce, and estimate the origin of non-point source pollution. Yet, there is still a considerable amount of uncertainty surrounding the use of this technology as well as its modifications. This research demonstrates how the application of remote sensing, geographic information system, and global positioning system technologies can be used to assess water quality in the Wheeler Lake watershed. In an effort to construct a GIS based water quality database of the study area for future use, a land use cover of the study area will be derived from LANDSAT Thematic Mapper (TM) imagery using ERDAS IMAGINE image processing software. A Digital Elevation Model of the Wheeler Lake watershed was also obtained from an Environmental Protection Agency Basins database. Physical and chemical properties of water samples, including pH, Total Suspended Solids (TSS), Total Fecal Coliform (TC), Total Nitrogen (TN), Total Phosphorus (TP), Biological Oxygen Demand (BOD), Dissolved Oxygen (DO), and selected metal concentrations were measured",2001,0, 1904,A block algorithm for the Sylvester-observer equation arising in state-estimation,"We propose an algorithm for solving the Sylvester-observer equation arising in the construction of the Luenberger observer. The algorithm is a block-generalization of P. Van Dooren's (1981) scalar algorithm. It is more efficient than Van Dooren's algorithm and the recent block algorithm by B.N. Datta and D. Sarkissian (2000). Furthermore, the algorithm is well-suited for implementation on some of today's powerful high performance computers using the high-quality software package LAPACK",2001,0, 1905,Reliability of fault tolerant control systems: Part I,"The reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed",2001,0, 1906,Combinatorial designs in multiple faults localization for battlefield networks,We present an application of combinatorial designs and variance analysis to correlating events in the midst of multiple network faults. The network fault model is based on the probabilistic dependency graph that accounts for the uncertainty about the state of network elements. Orthogonal arrays help reduce the exponential number of failure configurations to a small subset on which further analysis is performed.
The preliminary results show that statistical analysis can pinpoint the probable causes of the observed symptoms with high accuracy and a significant level of confidence. An example demonstrates how multiple soft link failures are localized in MIL-STD 188-220's datalink layer to explain the end-to-end connectivity problems in the network layer. This technique can be utilized for networks operating in an unreliable environment such as wireless and/or military networks.,2001,0, 1907,"RHIC insertion region, shunt power supply current errors",The Relativistic Heavy Ion Collider (RHIC) was commissioned in 1999 and 2000. RHIC requires power supplies to supply currents to highly inductive superconducting magnets. The RHIC Insertion Region contains many shunt power supplies to trim the current of different magnet elements in a large superconducting magnet circuit. Power Supply current error measurements were performed during the commissioning of RHIC. Models of these power supply systems were produced to predict and improve these power supply current errors using the circuit analysis program MicroCap V by Spectrum Software (TM). Results of the power supply current errors are presented from the models and from the measurements performed during the commissioning of RHIC,2001,0, 1908,Development of a dynamic power system load model,"The paper addresses the issue of measurement based power system load model development. The majority of power system loads respond dynamically to voltage disturbances and as such contribute to overall system dynamics. Induction motors represent a major portion of system loads that exhibit dynamic behaviour following the disturbance. In this paper, the dynamic behaviours of an induction motor and a combination of induction motor and static load were investigated under different disturbances and operating conditions in the laboratory. A first-order generic dynamic load model is developed based on the test results. The model proposed is in a transfer function form and it is suitable for direct inclusion in the existing power system stability software. The robustness of the proposed model is also assessed.",2001,0, 1909,Modelling of grounding systems for better protection of communication installations against effects from electric power system and lightning,"This paper addresses three important issues in modelling of grounding systems: modelling of high frequency and transient behavior, of complex and extended grounding systems, and use of software methods. First, modelling of frequency dependent and transient behavior of grounding systems is considered. When simple grounding arrangements are considered, simple steps for improved design for better transient performance may be followed. In cases when they are subjected to fast fronted current impulses, that is, with high frequency content, generation of large peaks of the transient voltages between excitation point and neutral ground during the rise of the current impulse is possible. Described design procedures are aimed at reducing such excessive transient voltages. More complex grounding arrangements may be optimised using suitable computer software. Next, modelling of very complex grounding systems is described. As an example, protection against dangerous voltages between telephone subscriber lines and local ground near high voltage substations due to ground potential rise in the case of ground faults is analysed.
A computer model of the substation grounding system and connected and nearby buried metallic structures in an urban environment is used for estimation of the ground potential rise zone of influence on the subscriber wire-line installations. Finally, computer software methods for low and high frequency and transient analysis of grounding systems of arbitrary geometry are described.",2001,0, 1910,Reliability estimation for a software system with sequential independent reviews,"Suppose that several sequential test and correction cycles have been completed for the purpose of improving the reliability of a given software system. One way to quantify the success of these efforts is to estimate the probability that all faults are found by the end of the last cycle. We describe how to evaluate this probability both prior to and after observing the numbers of faults detected in each cycle and we show when these two evaluations would be the same",2001,0, 1911,A fault model for fault injection analysis of dynamic UML specifications,"Verification and validation (V&V) tasks, as applied to software specifications, enable early detection of analysis and design flaws prior to implementation. Several fault injection techniques for software V&V are proposed at the code level. In this paper, we address V&V analysis methods based on fault injection at the software specification level. We present a fault model and a fault injection process for UML dynamic specifications. We use a case study based on a cardiac pacemaker for illustrating the developed approach.",2001,0, 1912,Analysis of hypergeometric distribution software reliability model,"The article gives detailed mathematical results on the hypergeometric distribution software reliability model (HGDSRM) proposed by Y. Tohma et al. (1989; 1991). In the above papers, Tohma et al. developed the HGDSRM as a discrete-time stochastic model and derived a recursive formula for the mean cumulative number of software faults detected up to the i-th (>0) test instance in the testing phase. Since their model is based on only the mean value of the cumulative number of faults, it is impossible to estimate not only the software reliability but also the other probabilistic dependability measures. We introduce the concept of cumulative trial processes, and describe the dynamic behavior of the HGDSRM exactly. In particular, we derive the probability mass function of the number of software faults newly detected at the i-th test instance and its mean as well as the software reliability defined as the probability that no faults are detected up to an arbitrary time. In numerical examples with real software failure data, we compare several HGDSRMs with different model parameters in terms of the least squared sum and show that the mathematical results obtained here are very useful to assess the software reliability with the HGDSRM.",2001,0, 1913,Feedback control of the software test process through measurements of software reliability,"A closed-loop feedback control model of the software test process (STP) is described. The model is grounded in the well-established theory of automatic control. It offers a formal and novel procedure for using product reliability or failure intensity as a basis for closed loop control of the STP. The reliability or the failure intensity of the product is compared against the desired reliability at each checkpoint and the difference fed back to a controller.
The controller uses this difference to compute the changes necessary in the process parameters to meet the reliability or failure intensity objective at the terminal checkpoint (the deadline). The STP continues beyond a checkpoint with a revised set of parameters. This procedure is repeated at each checkpoint until the termination of the STP. The procedure accounts for the possibility of changes (during testing) in the reliability or failure intensity objective, the checkpoints, and the parameters that characterize the STP. The effectiveness of this procedure was studied using commercial data available in the public domain and also data generated through simulation. In all cases, the use of feedback control produces adequate results allowing the achievement of the objectives.",2001,0, 1914,An empirical evaluation of statistical testing designed from UML state diagrams: the flight guidance system case study,"This paper presents an empirical study of the effectiveness of test cases generated from UML state diagrams using transition coverage as the testing criterion. The test case production is mainly based on an adaptation of a probabilistic method, called statistical testing based on testing criteria. This technique was automated with the aid of the Rational Software Corporation's Rose RealTime tool. The test strategy investigated combines statistical test cases with (few) deterministic test cases focused on domain boundary values. Its feasibility is exemplified on a research version of an avionics system implemented in Java: the Flight Guidance System case study (14 concurrent state diagrams). Then, the results of an empirical evaluation of the effectiveness of the created test cases are presented. The evaluation was performed using mutation analysis to assess the error detection power of the test cases on more than 1500 faults seeded one by one in the Java source code (115 classes, 6500 LOC). A detailed analysis of the test results allows us to draw first conclusions on the expected strengths and weaknesses of the proposed test strategy.",2001,0, 1915,Evaluating the software test strategy for the 2000 Sydney Olympics,"The 2000 summer Olympic Games event was a major information technology challenge. With a fixed deadline for completion, its inevitable dependency on software systems and immense scope, the testing and verification effort was critical to its success. One way in which success was assured was the use of innovative techniques using ODC based analysis to evaluate planned and executed test activities. These techniques were used to verify that the plan was comprehensive, yet efficient, and ensured that progress could be accurately measured. This paper describes some of these techniques and provides examples of the benefits derived. We also discuss the applicability of the techniques to other software projects.",2001,0, 1916,Accounting for realities when estimating the field failure rate of software,"A realistic estimate of the field failure rate of software is essential in order to decide when to release the software while maintaining an appropriate balance between reliability, time-to-market and development cost. Typically, software reliability models are applied to system test data with the hope of obtaining an estimate of the software failure rate that will be observed in the field. Unfortunately, test environments are usually quite different from field environments.
In this paper, we use a calibration factor to characterize the mismatch between the system test environment and the field environment, and then incorporate the factor into a widely used software reliability model. For projects that have both system test data and field data for one or more previous releases, the calibration factor can be empirically evaluated and used to estimate the field failure rate of a new release based on its system test data. For new projects, the calibration factor can be estimated by matching the software to related projects that have both system test data and field data. In practice, isolating and removing a software fault is a complicated process. As a result, a fault may be encountered more than once before it is ultimately removed. Most software reliability growth models assume instantaneous fault removal. We relax this assumption by relating non-zero fault removal times to imperfect debugging. Finally, we distinguish between two types of faults based on whether their observed occurrence would precipitate a fix in the current or future release, respectively. Type-F faults, which are fixed in the current release, contribute to a growth component of the overall failure rate. Type-D faults, whose fix is deferred to a subsequent release, contribute to a constant component of the overall software failure rate. The aggregate software failure rate is thus the sum of a decreasing failure rate and a constant failure rate.",2001,0, 1917,An analysis of software correctness prediction methods,"Reliability is one of the most important aspects of software systems of any kind. Software development is a complex and complicated process in which software faults are inserted into the code by mistake during the development process or maintenance. It has been shown that the pattern of the fault insertion phenomenon is related to measurable attributes of the software. We introduce some methods for reliability prediction based on software metrics and present the results of using these methods in a particular industrial software environment for which we have a large database of modules in the C language. Finally, we compare the results and methods and give some directions and ideas for future work",2001,0, 1918,"An ordinal-time reliability model applied to ""Big-Bang"" suite-based testing","System testing is often performed by means of a comprehensive test suite that spans the functional requirements of a software product (""Big Bang"" testing). This suite is then run repeatedly after each modification or fix until a satisfactory pass rate is achieved. Such testing does not lend itself to treatment with traditional reliability models which, in keeping with their origins in hardware, assume that each fault is fixed upon discovery and that the relevant temporal variable is calendar or execution time. In this paper, a model using ordinal time, measured by baselined development configurations, is presented. The use of this model in combination with an empirical mapping of test case failures to product code faults allows for rapid tracking of testing progress. The use of the model as an aid to project management in the testing phase of a project is illustrated",2001,0, 1919,Assurance of conceptual data model quality based on early measures,"The increasing demand for quality information systems (IS) has made quality the most pressing challenge facing IS development organisations.
In the IS development field it is generally accepted that the quality of an IS is highly dependent on decisions made early in its development. Given the relevant role that data itself plays in an IS, conceptual data models are a key artifact of the IS design. Therefore, in order to build ""better quality"" IS it is necessary to assess and to improve the quality of conceptual data models based on quantitative criteria. It is in this context that software measurement can help IS designers to make better decisions during design activities. We focus this work on the empirical validation of the metrics proposed by Genero et al. for measuring the structural complexity of entity relationship diagrams (ERDs). Through a controlled experiment we will demonstrate that these metrics seem to be heavily correlated with three of the sub-factors that characterise the maintainability of an ERD, such as understandability, analysability and modifiability",2001,0, 1920,QUIM: a framework for quantifying usability metrics in software quality models,"The paper examines current approaches to usability metrics and proposes a new approach for quantifying software quality in use, based on modelling the dynamic relationships of the attributes that affect software usability. The Quality in Use Integrated Map (QUIM) is proposed for specifying and identifying quality in use components, which brings together different factors, criteria, metrics and data defined in different human computer interface and software engineering models. The Graphical Dynamic Quality Assessment (GDQA) model is used to analyse the interaction of these components within a systematic structure. The paper first introduces a new classification scheme into a graphical logic based framework using QUIM components (factors, criteria, metrics and data) to assess the quality in use of interactive systems. Then we illustrate how QUIM and GDQA may be used to assess software usability using subjective measures of quality characteristics as defined in ISO/IEC 9126",2001,0, 1921,Using abstraction to improve fault tolerance,"Software errors are a major cause of outages and they are increasingly exploited in malicious attacks. Byzantine fault tolerance allows replicated systems to mask some software errors but it is expensive to deploy. The paper describes a replication technique, BFTA, which uses abstraction to reduce the cost of Byzantine fault tolerance and to improve its ability to mask software errors. BFTA reduces cost because it enables reuse of off-the-shelf service implementations. It improves availability because each replica can be repaired periodically using an abstract view of the state stored by correct replicas, and because each replica can run distinct or non-deterministic service implementations, which reduces the probability of common mode failures. We built an NFS service that allows each replica to run a different operating system. This example suggests that BFTA can be used in practice; the replicated file system required only a modest amount of new code, and preliminary performance results indicate that it performs comparably to the off-the-shelf implementations that it wraps.",2001,0, 1922,Self-tuned remote execution for pervasive computing,"Pervasive computing creates environments saturated with computing and communication capability, yet gracefully integrated with human users.
Remote execution has a natural role to play, in such environments, since it lets applications simultaneously leverage the mobility of small devices and the greater resources of large devices. In this paper, we describe Spectra, a remote execution system designed for pervasive environments. Spectra monitors resources such as battery, energy and file cache state which are especially important for mobile clients. It also dynamically balances energy use and quality goals with traditional performance concerns to decide where to locate functionality. Finally, Spectra is self-tuning-it does not require applications to explicitly specify intended resource usage. Instead, it monitors application behavior, learns functions predicting their resource usage, and uses the information to anticipate future behavior.",2001,0, 1923,The case for resilient overlay networks,"This paper makes the case for Resilient Overlay Networks (RONs), an application-level routing and packet forwarding service that gives end-hosts and applications the ability to take advantage of network paths that traditional Internet routing cannot make use of, thereby improving their end-to-end reliability and performance. Using RON, nodes participating in a distributed Internet application configure themselves into an overlay network and cooperatively forward packets for each other. Each RON node monitors the quality of the links in the underlying Internet and propagates this information to the other nodes; this enables a RON to detect and react to path failures within several seconds rather than several minutes, and allows it to select application-specific paths based on performance. We argue that RON has the potential to substantially improve the resilience of distributed Internet applications to path outages and sustained overload.",2001,0, 1924,A P1500 compliant BIST-based approach to embedded RAM diagnosis,"This paper deals with the diagnosis of faulty embedded RAMs and outlines the solution which is currently under evaluation within STMicroelectronics. The proposed solution exploits a BIST module implementing a March algorithm, defines a wrapper allowing its interface with a TAP controller, and describes a diagnostic procedure running in the external ATE software environment. The approach allows one to test multiple modules in the same chip through a single TAP interface and is compliant with the proposed P1500 standard for Embedded Core Test. Some preliminary experimental results gathered using a sample circuit are reported, showing the effectiveness of the proposed solution in terms of area and time requirements",2001,0, 1925,Dimension recognition and geometry reconstruction in vectorization of engineering drawings,"This paper presents a novel approach for recognizing and interpreting dimensions in engineering drawings. It starts by detecting potential dimension frames, each comprising only the line and text components of a dimension, then verifies them by detecting the dimension symbols. By removing the prerequisite of symbol recognition from detection of dimension sets, our method is capable of handling low quality drawings. We also propose a reconstruction algorithm for rebuilding the drawing entities based on the recognized dimension annotations. 
A coordinate grid structure is introduced to represent and analyze two-dimensional spatial constraints between entities; this simplifies and unifies the process of rectifying deviations of entity dimensions induced during scanning and vectorization.",2001,0, 1926,Quality-assuring scheduling-using stochastic behavior to improve resource utilization,"We present a unified model for admission and scheduling, applicable for various active resources such as CPU or disk to assure a requested quality in situations of temporary overload. The model allows us to predict and control the behavior of applications based on given quality requirements. It uses the variations in the execution time, i.e., the time any active resource is needed. We split resource requirements into a mandatory part which must be available and an optional part which should be available as often as possible but at least with a certain percentage. In combination with a given distribution for the execution time we can move away from worst-case reservations and drastically reduce the amount of reserved resources for applications which can tolerate occasional deadline misses. This increases the number of admittable applications. For example, with negligible loss of quality our system can admit more than two times the disk bandwidth of a system based on the worst case. Finally, we validated the predictions of our model by measurements using a prototype real-time system and observed a high accuracy between predicted and measured values.",2001,0, 1927,Using variable-MHz microprocessors to efficiently handle uncertainty in real-time systems,"Guaranteed performance is critical in real-time systems because correct operation requires that tasks complete on time. Meanwhile, as software complexity increases and deadlines tighten, embedded processors inherit high-performance techniques such as pipelining, caches, and branch prediction. Guaranteeing the performance of complex pipelines is difficult and worst-case analysis often under-estimates the microarchitecture for correctness. Ultimately, the designer must turn to clock frequency as a reliable source of performance. The chosen processor has a higher frequency than is needed most of the time, to compensate for uncertain hardware enhancements-partly defeating their intended purpose. We propose using microarchitecture simulation to produce accurate but not guaranteed-correct worst-case performance bounds. The primary clock frequency is chosen based on simulated-worst-case performance. Since static analysis cannot confirm simulated-worst-case bounds, the microarchitecture is also backed up by clock frequency reserves. When running a task, the processor periodically checks for interim microarchitecture performance failures. These are expected to be rare, but frequency reserves are available to guarantee the final deadline is met in spite of interim failures. Experiments demonstrate significant frequency reductions, e.g., -100 MHz for a peak 300 MHz processor. The more conservative worst-case analysis is, the larger the frequency reduction. The shorter the deadline, the larger the frequency reduction.
And reserve frequency is generally no worse than the high frequency produced by conventional worst-case analysis, i.e., the system degrades gracefully in the presence of transient performance faults.",2001,0, 1928,An automatic restructuring approach preserving the behavior of object-oriented designs,"Work on restructuring object-oriented designs involves metrics for quality estimation and automated transformations. However, these factors have been treated almost independently of each other. A long-term goal is to define behavior-preserving design transformations, and automate the transformations using metrics. This paper describes an automatic restructuring approach that preserves the behavior of an object-oriented design. Cohesion and coupling metrics, based on abstract models that represent design components and their relationships, are defined to quantify designs and provide criteria for comparing alternative designs. Primitive operations and semantics for restructuring are defined to validate the preservation of design behavior. This approach devises a fitness function using cohesion and coupling metrics, and restructures object-oriented designs by applying a genetic algorithm using the fitness function. We empirically evaluate the capability of this approach by applying it to the designs of Java programs and compare our results with a simulated annealing-based approach. Results from our experiments demonstrate that this approach may be useful in improving object-oriented designs automatically.",2001,0, 1929,Practical automated filter generation to explicitly enforce implicit input assumptions,"Vulnerabilities in distributed applications are being uncovered and exploited faster than software engineers can patch the security holes. All too often these weaknesses result from implicit assumptions made by an application about its inputs. One approach to defending against their exploitation is to interpose a filter between the input source and the application that verifies that the application's assumptions about its inputs actually hold. However, ad hoc design of such filters is nearly as tedious and error-prone as patching the original application itself. We have automated the filter generation process based on a simple formal description of a broad class of assumptions about the inputs to an application. Focusing on the back-end server application case, we have prototyped an easy-to-use tool that generates server-side filtering scripts. These can then be quickly installed on a front-end web server (either in concert with the application or, when a vulnerability is uncovered), thus shielding the server application from a variety of existing and exploited attacks, as solutions requiring changes to the applications are developed and tested. Our measurements suggest that input filtering can be done efficiently and should not be a performance concern for moderately loaded web servers. The overall approach may be generalizable to other domains, such as firewall filter generation and API wrapper filter generation.",2001,0, 1930,Evaluating low-cost fault-tolerance mechanism for microprocessors on multimedia applications,"We evaluate a low-cost fault-tolerance mechanism for microprocessors, which can detect and recover from transient faults, using multimedia applications. There are two driving forces to study fault-tolerance techniques for microprocessors. One is deep submicron fabrication technologies.
Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is the increasing popularity of mobile platforms. Recently cell phones have been used for applications which are critical to our financial security, such as flight ticket reservation, mobile banking, and mobile trading. In such applications, it is expected that computer systems will always work correctly. From these observations, we propose a mechanism which is based on an instruction reissue technique for incorrect data speculation recovery which utilizes time redundancy. Unfortunately, we found significant performance loss when we evaluated the proposal using the SPEC2000 benchmark suite. We evaluate it using MediaBench which contains more practical mobile applications than SPEC2000",2001,0, 1931,Experiences with EtheReal: a fault-tolerant real-time Ethernet switch,"We present our experiences with the implementation of a real-time Ethernet switch called EtheReal. EtheReal provides three innovations for real-time traffic over switched Ethernet networks. First, EtheReal delivers connection-oriented hard bandwidth guarantees without requiring any changes to the end host operating system and network hardware/software. For ease of deployment by commercial vendors, EtheReal is implemented in software over Ethernet switches, with no special hardware requirements. QoS support is contained within two modules, switches and end-host user level libraries that expose a socket-like API to real-time applications. Secondly, EtheReal provides automatic fault detection and recovery mechanisms that operate within the constraints of a real-time network. Finally, EtheReal supports server-side push applications with a guaranteed bandwidth link-layer multicast scheme. Performance results from the implementation show that EtheReal switches deliver bandwidth guarantees to real-time applications within 0.6% of the contracted value, even in the presence of interfering best-effort traffic between the same pair of communicating hosts.",2001,0, 1932,A simulation based approach for estimating the reliability of distributed real-time systems,"Designers of safety-critical real-time systems are often mandated by requirements on reliability as well as timing guarantees. For guaranteeing timing properties, the standard practice is to use various analysis techniques provided by hard real-time scheduling theory. The paper presents analysis based on simulation that considers the effects of faults and timing parameter variations on schedulability analysis, and its impact on the reliability estimation of the system. We look at a wider set of scenarios than just the worst case considered in hard real-time schedulability analysis. The ideas have general applicability, but the method has been developed with modelling the effects of external interferences on the controller area network (CAN) in mind. We illustrate the method by showing that a CAN interconnected distributed system, subjected to external interference, may be proven to satisfy its timing requirements with a sufficiently high probability, even in cases when the worst-case analysis has deemed it non-schedulable.",2001,0, 1933,Real-time software based MPEG-4 video encoder,"Rapid improvements in general-purpose processors are making software-based video encoding solutions increasingly feasible. Software encoders for H.263 and MPEG-2 video standards are well documented, reporting close to real-time performance.
However, MPEG-4 video, due to its higher complexity, requires more computational power, making its real-time encoding speed rather infeasible. Design of a fully standard-compliant MPEG-4 encoder with real-time speed on a PC entails optimizations at all levels. This includes designing efficient encoding algorithms, software implementation with efficient data structures, and enhancing computation speed by all possible methods such as taking advantage of the machine architecture. We report a software-based real-time MPEG-4 video encoder on a single-processor PC, without any frame skipping, profile simplifying tricks, or quality loss compromise. The encoder is a quintessence of a number of novel algorithms. Specifically, we have designed a fast motion estimation algorithm. We have also designed an algorithm for the detection of all-zero quantized blocks, which reduces the complexity of the DCT and quantization. To enhance the computation speed, we harness Intel's MMX technology to implement these algorithms in an SIMD (single instruction stream, multiple data stream) fashion within the same processor. On the 800 MHz Pentium III, our encoder yields up to 70 frames per second for CIF resolution video, with similar picture quality to the reference VM software",2001,0, 1934,Assessing the applicability of fault-proneness models across object-oriented software projects,"A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software. Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools to focus validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique - MARS (multivariate adaptive regression splines) to build such fault-proneness models, whose functional form is a-priori unknown. The results indicate that a model built on one system can be accurately used to rank classes within another system according to their fault proneness. The downside, however, is that, because of system differences, the predicted fault probabilities are not representative of the system predicted. However, our cost-benefit model demonstrates that the MARS fault-proneness model is potentially viable, from an economical standpoint. The linear model is not nearly as good, thus suggesting a more complex model is required.",2002,1, 1935,Metrics that matter,"Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. We document a new NASA initiative to build a centralized repository of software defect data; in particular, we document one specific case study on software metrics.
Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but other metrics can be more significant.",2002,1, 1936,Predicting fault-proneness using OO metrics. An industrial case study,"Software quality is an important external software attribute that is difficult to measure objectively. In this case study, we empirically validate a set of object-oriented metrics in terms of their usefulness in predicting fault-proneness, an important software quality indicator. We use a set of ten software product metrics that relate to the following software attributes: the size of the software, coupling, cohesion, inheritance, and reuse. Eight hypotheses on the correlations of the metrics with fault-proneness are given. These hypotheses are empirically tested in a case study, in which the client side of a large network service management system is studied. The subject system is written in Java and it consists of 123 classes. The validation is carried out using two data analysis techniques: regression analysis and discriminant analysis",2002,1, 1937,Assessing multi-version systems through fault injection,"Multi-version design (MVD) has been proposed as a method for increasing the dependability of critical systems beyond current levels. However, a major obstacle to large-scale commercial usage of this approach is the lack of quantitative characterizations available. We seek to help answer this problem using fault injection. This approach has the potential for yielding highly useful metrics with regard to MVD systems, as well as giving developers a greater insight into the behaviour of each channel within the system. In this research, we develop an automatic fault injection system for multi-version systems called FITMVS. We use this system to test a multi-version system, and then analyze the results produced. We conclude that this approach can yield useful metrics, including metrics related to channel sensitivity, code scope sensitivity, and the likelihood of common-mode failure occurring within a system",2002,0, 1938,Using simulation to facilitate effective workflow adaptation,"In order to support realistic real-world processes, workflow systems need to be able to adapt to changes. Detecting the need to change and deciding what changes to carry out are very difficult. Simulation analysis can play an important role in this. It can be used in tuning quality of service metrics and exploring ""what-if"" questions. Before a change is actually made, its possible effects can be explored with simulation. To facilitate rapid feedback, the workflow system (METEOR) and simulation system (JSIM) need to interoperate. In particular, workflow specification documents need to be translated into simulation model specification documents so that the new model can be executed/animated on-the-fly. Fortunately, modern Web technology (e.g., XML, DTD, XSLT) makes this relatively straightforward. The utility of using simulation in adapting a workflow is illustrated with an example from a genome workflow.",2002,0, 1939,Fault detection and location on electrical distribution system,"This case study summarizes the efforts undertaken at CP&L, a Progress Energy Company, over the past five years to improve distribution reliability via detection of distribution faults and determination of their location.
The analysis methods used by CP&L have changed over the years as improvements were made to the various tools used. The tools used to analyze distribution faults included a feeder monitoring system (FMS), an automated outage management system (OMS), a distribution SCADA system (DSCADA), and several Excel spreadsheet applications. The latest fault detection system involves an integration of FMS, OMS, and DSCADA systems to provide distribution dispatchers with a graphical display of possible locations for faults that have locked out feeder circuit breakers",2002,0, 1940,Exploiting agent mobility for large-scale network monitoring,"As networks become pervasive, the importance of efficient information gathering for purposes such as monitoring, fault diagnosis, and performance evaluation increases. Distributed monitoring systems based on either management protocols such as SNMP or distributed object technologies such as CORBA can cope with scalability problems only to a limited extent. They are not well suited to systems that are both very large and highly dynamic because the monitoring logic, although possibly distributed, is statically predefined at design time. This article presents an active distributed monitoring system based on mobile agents. Agents act as area monitors not bound to any particular network node that can ""sense"" the network, estimate better locations, and migrate in order to pursue location optimality. Simulations demonstrate the capability of this approach to cope with large-scale systems and changing network conditions",2002,0, 1941,FPGA resource and timing estimation from Matlab execution traces,"We present a simulation-based technique to estimate area and latency of an FPGA implementation of a Matlab specification. During simulation of the Matlab model, a trace is generated that can be used for multiple estimations. For estimation the user provides some design constraints such as the rate and bit width of data streams. In our experience the runtime of the estimator is approximately only 1/10 of the simulation time, which is typically fast enough to generate dozens of estimates within a few hours and to build cost-performance trade-off curves for a particular algorithm and input data. In addition, the estimator reports on the scheduling and resource binding used for estimation. This information can be utilized not only to assess the estimation quality, but also as a first starting point for the final implementation.",2002,0, 1942,Modeling the impact of preflushing on CTE in proton irradiated CCD-based detectors,"A software model is described that performs a ""real world"" simulation of the operation of several types of charge-coupled device (CCD)-based detectors in order to accurately predict the impact that high-energy proton radiation has on image distortion and modulation transfer function (MTF). The model was written primarily to predict the effectiveness of vertical preflushing on the custom full frame CCD-based detectors intended for use on the proposed Kepler Discovery mission, but it is capable of simulating many other types of CCD detectors and operating modes as well. The model keeps track of the occupancy of all phosphorous-silicon (P-V), divacancy (V-V) and oxygen-silicon (O-V) defect centers under every CCD electrode over the entire detector area. The integrated image is read out by simulating every electrode-to-electrode charge transfer in both the vertical and horizontal CCD registers.
A signal level dependency on the capture and emission of signal is included and the current state of each electrode (e.g., barrier or storage) is considered when distributing integrated and emitted signal. Options for performing preflushing, preflashing, and including mini-channels are available on both the vertical and horizontal CCD registers. In addition, dark signal generation and image transfer smear can be selectively enabled or disabled. A comparison of the charge transfer efficiency (CTE) data measured on the Hubble space telescope imaging spectrometer (STIS) CCD with the CTE extracted from model simulations of the STIS CCD shows good agreement",2002,0, 1943,Distributed real-time simulation of the group manager executing the multicast protocol RFRM,"Group communication in real-time computing systems is useful for building fault-tolerant distributed real-time applications. The purpose of this paper is to show the efficiency of the group membership manager executing the multicast protocol RFRM (Release-time based Fault-tolerant Real-time Multicast) which is based on the idea of attaching the official release time to each group view send message. In this approach, the group membership manager interacts with the fault detector and the client application does not send view-ack messages to the group membership manager. A real-time simulation based on the TMO structuring scheme is conducted to study its performance for both the proposed approach and the group management scheme which receives view-ack messages. Simulation results show that with a proper value of the official release time interval, our approach improves the delay in completing the group view change event",2002,0, 1944,End-to-end latency of a fault-tolerant CORBA infrastructure,"This paper presents measured probability density functions (pdfs) for the end-to-end latency of two-way remote method invocations from a CORBA client to a replicated CORBA server in a fault-tolerance infrastructure. The infrastructure uses a multicast group-communication protocol based on a logical token-passing ring imposed on a single local-area network. The measurements show that the peaks of the pdfs for the latency are affected by the presence of duplicate messages for active replication, and by the position of the primary server replica on the ring for semi-active and passive replication. Because a node cannot broadcast a user message until it receives the token, up to two complete token rotations can contribute to the end-to-end latency seen by the client for synchronous remote method invocations, depending on the server processing time and the interval between two consecutive client invocations. For semi-active and passive replication, careful placement of the primary server replica is necessary to alleviate this broadcast delay to achieve the best possible end-to-end latency. The client invocation patterns and the server processing time must be considered together to determine the most favorable position for the primary replica. Assuming that an effective sending-side duplicate suppression mechanism is implemented, active replication can be more advantageous than semi-active and passive replication because all replicas compete for sending and, therefore, the replica at the most favorable position will have the opportunity to send first",2002,0, 1945,Test compaction for at-speed testing of scan circuits based on nonscan test
sequences and removal of transfer sequences,"Proposes a procedure for generating compact test sets with enhanced at-speed testing capabilities for scan circuits. Compaction refers here to a reduction in the test application time, while at-speed testing refers to the application of primary input sequences that contribute to the detection of delay defects. The proposed procedure generates an initial test set that has a low test application time and consists of long sequences of primary input vectors applied consecutively. To construct this test set, the proposed procedure transforms a test sequence T0 for the nonscan circuit into a scan-based test by selecting an appropriate scan-in state and removing primary input vectors from T0 if they do not contribute to the fault coverage. If T0 contains long transfer sequences, several scan-based tests with long primary input sequences may be obtained by replacing transfer sequences in T0 with scan operations. This helps reduce the test application time further. We demonstrate through experimental results the advantages of this approach over earlier ones as a method for generating test sets with minimal test application time and long primary input sequences",2002,0, 1946,Experiences with evaluating network QoS for IP telephony,"Successful deployment of networked multimedia applications such as IP telephony depends on the performance of the underlying data network. QoS requirements of these applications are different from those of traditional data applications. For example, while IP telephony is very sensitive to delay and jitter, traditional data applications are more tolerant of these performance metrics. Consequently, assessing a network to determine whether it can accommodate the stringent QoS requirements of IP telephony becomes critical. We describe a technique for evaluating a network for IP telephony readiness. Our technique relies on the data collection and analysis support of our prototype tool, ExamiNet™. It automatically discovers the topology of a given network and collects and integrates network device performance and voice quality metrics. We report the results of assessing the IP telephony readiness of a real network of 31 network devices (routers/switches) and 23 hosts via ExamiNet™. Our evaluation identified links in the network that were over utilized to the point at which they could not handle IP telephony.",2002,0, 1947,Software quality prediction using median-adjusted class labels,Software metrics aid project managers in predicting the quality of software systems. A method is proposed using a neural network classifier with metric inputs and subjective quality assessments as class labels. The labels are adjusted using fuzzy measures of the distances from each class center computed using robust multivariate medians,2002,0, 1948,An empirical evaluation of fault-proneness models,"Planning and allocating resources for testing is difficult and it is usually done on an empirical basis, often leading to unsatisfactory results. The possibility of early estimation of the potential faultiness of software could be of great help for planning and executing testing activities. Most research concentrates on the study of different techniques for computing multivariate models and evaluating their statistical validity, but we still lack experimental data about the validity of such models across different software applications.
The paper reports on an empirical study of the validity of multivariate models for predicting software fault-proneness across different applications. It shows that suitably selected multivariate models can predict fault-proneness of modules of different software packages.",2002,0, 1949,Experiences in assessing product family software architecture for evolution,"Software architecture assessments are a means to detect architectural problems before the bulk of development work is done. They facilitate planning of improvement activities early in the lifecycle and allow limiting the changes on any existing software. This is particularly beneficial when the architecture has been planned to (or already does) support a whole product family, or a set of products that share common requirements, architecture, components or code. As the family requirements evolve and new products are added, the need to assess the evolvability of the existing architecture is vital. The author illustrates two assessment case studies in the mobile telephone software domain: the Symbian operating system platform and the network resource access control software system. By means of simple experimental data, evidence is shown of the usefulness of architectural assessment as rated by the participating stakeholders. Both assessments have led to the identification of previously unknown architectural defects, and to the consequent planning of improvement initiatives. In both cases, stakeholders noted that a number of side benefits, including improvement of communication and architectural documentation, were also of considerable importance. The lessons learned and suggestions for future research and experimentation are outlined.",2002,0, 1950,Data mining technology for failure prognostic of avionics,Adverse environmental conditions have combined cumulative effects leading to performance degradation and failures of avionics. Classical reliability addresses statistically-generic devices and is less suitable for the situations when failures are not traced to manufacturing but rather to unique operational conditions of particular hardware units. An approach aimed at the accurate assessment of the probability of failure of any avionics unit utilizing the known history-of-abuse from environmental and operational factors is presented herein. The suggested prognostic model utilizes information downloaded from dedicated monitoring systems of flight-critical hardware and stored in a database. Such a database can be established from the laboratory testing of hardware and supplemented with real operational data. This approach results in a novel knowledge discovery from data technology that can be efficiently used in a wide area of applications and provide a quantitative basis for the modern maintenance concept known as service-when-needed. An illustrative numerical example is provided,2002,0, 1951,Recent advances in digital halftoning and inverse halftoning methods,"Halftoning is the rendition of continuous-tone pictures on displays, paper or other media that are capable of producing only two levels. In digital halftoning, we perform the gray scale to bilevel conversion digitally using software or hardware. In the last three decades, several algorithms have evolved for halftoning. Examples of algorithms include ordered dither, error diffusion, blue noise masks, green noise halftoning, direct binary search (DBS), and dot diffusion. 
In this paper, we first review some of the algorithms which have a direct bearing on our paper and then describe some of the more recent advances in the field. The dot-diffusion method for digital halftoning has the advantage of pixel-level parallelism unlike the error-diffusion method, which is a popular halftoning method. However, the image quality offered by error diffusion is still regarded as superior to most of the other known methods. We first review error diffusion and dot diffusion, and describe a recent method to improve the image quality of the dot-diffusion algorithm which takes advantage of the Human Visual System (HVS) function. Then, we discuss the inverse halftoning problem",2002,0, 1952,CASCADE - configurable and scalable DSP environment,"As the complexity of embedded systems grows rapidly, it is common to accelerate critical tasks with hardware. Designers usually use off-the-shelf components or licensed IP cores to shorten the time to market, but the hardware/software interfacing is tedious, error-prone and usually not portable. Besides, the existing hardware seldom matches the requirements perfectly, CASCADE, the proposed design environment as an alternative, generates coprocessing datapaths from the executing algorithms specified in C/C++ and attaches these datapaths to the embedded processor with an auto-generated software driver. The number of datapaths and their internal parallel functional units are scaled to fit the application. It seamlessly integrates the design tools of the embedded processor to reduce the re-training/design efforts and maintains short product development time as the pure software approaches. A JPEG encoder is built in CASCADE successfully with an auto-generated four-MAC accelerator to achieve 623% performance boost for our video application.",2002,0, 1953,A delay/Doppler-mapping receiver system for GPS-reflection remote sensing,"A delay/Doppler-mapping receiver system, developed specifically for global positioning system (GPS)-reflection remote sensing, is described, and example delay/Doppler waveforms are presented. The high-quality data obtained with this system provide a more accurate and detailed examination of ground-based and aircraft GPS-reflection phenomenology than has been available to date. As an example, systematic effects in the reflected signal delay waveform, due to nonideal behavior of the C/A-code auto-correlation function, are presented for the first time. Both a single-channel open-loop recording system and a recently developed 16-channel recorder are presented. The open-loop data from either recorder are postprocessed with a software GPS receiver that performs the following functions: signal detection; phase and delay tracking; delay, Doppler, and delay/Doppler waveform mapping; dual-frequency (L1 and L2) processing; C/A-code and Y-code waveform extraction; coherent integrations as short as 125 μs; navigation message decoding; and precise observable time tagging. The software can perform these functions on all detectable satellite signals without dead time, and custom signal-processing features can easily be included into the system",2002,0, 1954,Software-based weighted random testing for IP cores in bus-based programmable SoCs,"Presents a software-based weighted random pattern scheme for testing delay faults in IP cores of programmable SoCs. 
We describe a method for determining static and transition probabilities (profiles) at the inputs of circuits with full-scan using testability metrics based on the targeted fault model. We use a genetic algorithm (GA) based search procedure to determine optimal profiles. We use these optimal profiles to generate a test program that runs on the processor core. This program applies test patterns to the target IP cores in the SoC and analyzes the test responses. This provides the flexibility of applying multiple profiles to the IP core under test to maximize fault coverage. This scheme does not incur the hardware overhead of logic BIST, since the pattern generation and analysis is done by software. We use a probabilistic approach to finding the profiles. We describe our method on transition and path-delay fault models, for both enhanced full-scan and normal full-scan circuits. We present experimental results using the ISCAS 89 benchmarks as IP cores.",2002,0, 1955,Dynamic coupling measures for object-oriented software,"The relationships between coupling and external quality factors of object-oriented software have been studied extensively for the past few years. For example, several studies have identified clear empirical relationships between class-level coupling and the fault-proneness of the classes. A common way to quantify the coupling is through static code analysis. However, the resulting static coupling measures only capture certain underlying dimensions of coupling. Other dependencies regarding the dynamic behavior of software can only be inferred from run-time information. For example, due to inheritance and polymorphism, it is not always possible to determine the actual receiver and sender classes (i.e., the objects) from static code analysis. This paper describes how several dimensions of dynamic coupling can be calculated by tracing the flow of messages between objects at run-time. As a first evaluation of the proposed dynamic coupling measures, fairly accurate prediction models of the change proneness of classes have been developed using change data from nine maintenance releases of a large SmallTalk system. Preliminary results suggest that dynamic coupling may also be useful for developing prediction models and tools supporting change impact analysis. At present, work on developing a dynamic coupling tracer and ripple-effect prediction models for Java programs is underway.",2002,0, 1956,Gemini: maintenance support environment based on code clone analysis,"Maintaining software systems is becoming a more complex and difficult task, as the scale becomes larger. It is generally said that code cloning is one of the factors that make software maintenance difficult. A code clone is a code portion in source files that is identical or similar to another. If some faults are found in a code clone, it is necessary to correct the faults in all of its code clones. However, for large-scale software, it is very difficult to correct them completely. We develop a maintenance support environment, called Gemini, which visualizes the code clone information from a code clone detection tool, CCFinder.
Using Gemini, we can specify a set of distinctive code clones through the GUI (scatter plot and metrics graph about code clones), and refer to the fragments of source code corresponding to the clone on the plot or graph.",2002,0, 1957,An industrial case study to examine a non-traditional inspection implementation for requirements specifications,"Software inspection is one of the key enablers for quality improvement and defect cost reduction. Although its benefits are shown in many studies, a major obstacle to implementing and using inspection technologies in software projects is the costs associated with it. Companies therefore constantly ask for the most cost-effective inspection implementation. Answering this question requires a discussion about the design of the inspection process as well as the question of how the selected process activities are to be performed. As a consequence, the tailored process and activities often result in an inspection implementation that perfectly fits into a project or environment but is different to the traditional ones presented in the existing inspection literature. We present and examine a non-traditional inspection implementation at DaimlerChrysler AG. The design of this inspection approach evolved over time as part of a continuous improvement effort and therefore integrates specific issues of the project environment as well as recent research findings. In addition to the description of the inspection approach, the paper presents quantitative results to characterize the suggested inspection implementation and investigates some of the essential hypotheses in the inspection area. Both the technical description and its quantitative underpinning serve as an example for other companies that pursue improvement or adaptation efforts in the software inspection area.",2002,0, 1958,Investigating the influence of inspector capability factors with four inspection techniques on inspection performance,"We report on a controlled experiment with over 170 student subjects to investigate the influence of inspection process, i.e., the defect detection technique applied, and inspector capability factors on the effectiveness and efficiency of inspections on individual and team level. The inspector capability factors include measures on the inspector's experience, as well as a pre-test with a mini-inspection. We use sampling to quantify the gain of defects detected from selecting the best inspectors according to the pre-test results compared to the performance of an average team of inspectors. Main findings are that inspector development and quality assurance capability and experience factors do not significantly distinguish inspector groups with different inspection performance. On the other hand, the mini-inspection pre-test has considerable correlation to later inspection performance. The sampling of teams shows that selecting inspectors according to the mini-inspection pretest considerably improves average inspection effectiveness by up to one third.",2002,0, 1959,A generic model and tool support for assessing and improving Web processes,"We discuss a generic quality framework, based on a generic model, for evaluating Web processes. The aim is to perform assessment and improvement of web processes by using techniques from empirical software engineering. A web development process can be broadly classified into two almost independent sub-processes: the authoring process (AUTH process) and the process of developing the infrastructure (INF process).
The AUTH process concerns the creation and management of the contents of a set of nodes and the way they are linked to produce a web application, whereas the INF development process provides technological support and involves creation of databases, integration of the web application with legacy systems, etc. In this paper, we instantiate our generic quality model to the AUTH process and present a measurement framework for this process. We also present tool support to provide effective guidance to software personnel including developers, managers and quality assurance engineers.",2002,0, 1960,An empirical study of the impact of count models predictions on module-order models,"Software quality prediction models are used to achieve high software reliability. A module-order model (MOM) uses an underlying quantitative prediction model to predict the rank-order of modules. This paper compares performances of module-order models of two different count models which are used as the underlying prediction models. They are the Poisson regression model and the zero-inflated Poisson regression model. It is demonstrated that improving a count model for prediction does not ensure a better MOM performance. A case study of a full-scale industrial software system is used to compare performances of module-order models of the two count models. It was observed that improving prediction of the Poisson count model by using zero-inflated Poisson regression did not yield module-order models with better performance. Thus, it was concluded that the degree of prediction accuracy of the underlying model did not influence the results of the subsequent module-order model. Module-order modeling is proven to be a robust and effective method even though the underlying prediction models may sometimes lack acceptable prediction accuracy.",2002,0, 1961,Experience from replicating empirical studies on prediction models,"When conducting empirical studies, replications are important contributors to investigating the generality of the studies. By replicating a study in another context, we investigate what impact the specific environment has, related to the effect of the studied object. In this paper, we define different levels of replication to characterise the similarities and differences between an original study and a replication, with particular focus on prediction models for the identification of fault-prone software components. Further, we derive a set of issues and concerns which are important in order to enable replication of an empirical study and to enable practitioners to use the results. To illustrate the importance of the issues raised, a replication case study is presented in the domain of prediction models for fault-prone software components. It is concluded that the results are very divergent, depending on how different parameters are chosen, which demonstrates the need for well-documented empirical studies to enable replication and use",2002,0, 1962,How valuable is company-specific data compared to multi-company data for software cost estimation?,"This paper investigates the pertinent question of whether multi-organizational data is valuable for software project cost estimation. Local, company-specific data is widely believed to provide a better basis for accurate estimates. On the other hand, multi-organizational databases provide an opportunity for fast data accumulation and shared information benefits.
Therefore, this paper trades off the potential advantages and drawbacks of using local data as compared to multi-organizational data. Motivated by the results from previous investigations, we further analyzed a large cost database from Finland that collects standard cost factors and includes information on six individual companies. Each of these companies provided data for more than ten projects. This information was used to compare the accuracy between company-specific (local) and company-external (global) cost models. They show that company-specific models seem not to yield better results than the company external models. Our results are based on applying two standard statistical estimation methods (OLS-regression, analysis of variance) and analogy-based estimation.",2002,0, 1963,"An integrated approach to flow, thermal and mechanical modeling of electronics devices","The future success of many electronics companies will depend to a large extent on their ability to initiate techniques that bring schedules, performance, tests, support, production, life-cycle-costs, reliability prediction and quality control into the earliest stages of the product creation process. Earlier papers have discussed the benefits of an integrated analysis environment for system-level thermal, stress and EMC prediction. This paper focuses on developments made to the stress analysis module and presents results obtained for an SMT resistor. Lifetime predictions are made using the Coffin-Manson equation. Comparison with the creep strain energy based models of Darveaux (1997) shows the shear strain based method to underestimate the solder joint life. Conclusions are also made about the capabilities of both approaches to predict the qualitative and quantitative impact of design changes.",2002,0, 1964,A quality assessment model for Java code,"Quality measures are extremely difficult to quantify because they depend on many parameters and factors, some of which cannot be identified or measured readily. Java is the language of choice for interoperable code segments that constitute an effective interface layer between Web servers and the user. Realizing those code segments, however, is a challenge. Reusability criteria do not apply. This paper describes a quality model that can be used directly on code, and thus during light development and in rapid development cycles. The model is based on nonquantifiable attributes of quality that then are related to specific measures found using a structured method. The measures identify statistical clusters that can be used to categorize the quality of each Java class file. The relation between quality factors and measures is proven at the mathematical level, using the representational theory of measurement, and then at the empirical level, using an independent assessment. The preliminary results collected seem to indicate that the quality model is effective in classifying Java programs. An important indication can then be obtained by the quality analysis.",2002,0, 1965,Release date prediction for telecommunication software using Bayesian Belief Networks,"Many techniques are used for cost, quality and schedule estimation in the context of software risk management. Application of Bayesian Belief Networks (BBN) in this area permits process metrics and product metrics (static code metrics) to be considered in a causal way (i.e. 
each variable within the model has a cause-effect relationship with other variables) and, in addition, current observations can be used to update estimates based on historical data. However, the real situation that researchers face is that process data is often inadequately, or inappropriately, collected and organized by the development organization. In this paper, we explore if BBN could be used to predict appropriate release dates for a new set of products from a telecommunication company based on static code metrics data and limited process information collected from an earlier set of the same products. Two models are evaluated with different methods involved to analyze the available metrics data.",2002,0, 1966,A probably approximately correct framework to estimate performance degradation in embedded systems,"Future design environments for embedded systems will require the development of sophisticated computer-aided design tools for compiling the high-level specifications of an application down to a final low-level language describing the embedded solution. This requires abstraction of technology-dependent aspects and requirements into behavioral entities. The paper takes a first step in this direction by introducing a high-level methodology for estimating the performance degradation of an application affected by perturbations; a special emphasis is given to accuracy performance. To grant generality it is uniquely assumed that the performance degradation function and the mathematical formulation describing the application are Lebesgue measurable. Perturbations affecting the application abstract details related to physical sources of uncertainties such as finite precision representation, faults, fluctuations of physical parameters, battery power variations, and aging effects whose impact on the computation can be treated within a high-level homogeneous framework. A novel stochastic theory based on randomization is suggested to quantify the approximated nature of the perturbed environment. The outcomes are two algorithms which estimate in polynomial time the performance degradation of the application once affected by perturbations. Such information can then be exploited by HW/SW codesign methodologies to guide the subsequent partitioning between HW and SW, analog versus digital, fixed versus floating point, or used to validate architectural choices before any low-level design step takes place. The proposed method is finally applied to real designs involving neural and wavelet-based applications",2002,0, 1967,Design and implementation of a pluggable fault tolerant CORBA infrastructure,"In this paper we present the design and implementation of a Pluggable Fault Tolerant CORBA Infrastructure that provides fault tolerance for CORBA applications by utilizing the pluggable protocols framework that is available for most CORBA ORBs. Our approach does not require modification to the CORBA ORB, and requires only minimal modifications to the application. Moreover, it avoids the difficulty of retrieving and assigning the ORB state, by incorporating the fault tolerance mechanisms into the ORB.
The Pluggable Fault Tolerant CORBA Infrastructure achieves performance that is similar to, or better than, that of other Fault Tolerant CORBA systems, while providing strong replica consistency.",2002,0, 1968,"Trace: an open platform for high-layer protocols, services and networked applications management","This paper presents the Trace management platform, an extension of the SNMP infrastructure based on the IETF Script MIB to support integrated, distributed and flexible management of high-layer protocols, services and networked applications. The platform is specifically geared towards running and analyzing protocol interactions, and triggering custom scripts when certain conditions are met.",2002,0, 1969,Heaps and stacks in distributed shared memory,"Software-based distributed shared memory (DSM) systems usually do not provide any means to use shared memory regions as stacks or via an efficient heap memory allocator. Instead DSM users are forced to work with very rudimentary and coarse grain memory (de-)allocation primitives. As a consequence most DSM applications have to ""reinvent the wheel"", that is to implement simple stack or heap semantics within the shared regions. Obviously, this has several disadvantages. It is error-prone, time-consuming and inefficient. This paper presents an all-in-software DSM that does not suffer from these drawbacks. Stack and heap organization is adapted to the changed requirements in DSM environments and both stacks and heaps are transparently placed in DSM space by the operating system.",2002,0, 1970,Bond and electron beam welding quality control of the aluminum stabilized and reinforced CMS conductor by means of ultrasonic phased-array technology,"The Compact Muon Solenoid (CMS) is one of the general-purpose detectors to be provided for the LHC project at CERN. The design field of the CMS superconducting magnet is 4 T, the magnetic length is 12.5 m and the free bore is 6 m. The coils for CMS are wound of aluminum-stabilized Rutherford type superconductors reinforced with high-strength aluminum alloy. For optimum performance of the conductor a void-free metallic bonding between the high-purity aluminum and the Rutherford type cable as well as between the electron beam welded reinforcement and the high-purity aluminum must be guaranteed. It is the main task of this development work to assess continuously the bond quality over the whole width and the total length of the conductors during manufacture. To achieve this goal we use the ultrasonic phased-array technology. The application of multi-element transducers allows an electronic scanning perpendicular to the direction of production. Such testing is sufficiently fast in order to allow a continuous analysis of the complete bond. Highly sophisticated software allows the on-line monitoring of the bond and weld quality.",2002,0, 1971,Predicting TCP throughput from non-invasive network sampling,"In this paper, we wish to derive analytic models that predict the performance of TCP flows between specified endpoints using routinely observed network characteristics such as loss and delay. The ultimate goal of our approach is to convert network observables into representative user and application relevant performance metrics. The main contributions of this paper are in studying which network performance data sources are most reflective of session characteristics, and then in thoroughly investigating a new TCP model based on Padhye et al.
(2000) that uses non-invasive network samples to predict the throughput of representative TCP flows between given end-points.",2002,0, 1972,Using version control data to evaluate the impact of software tools: a case study of the Version Editor,"Software tools can improve the quality and maintainability of software, but are expensive to acquire, deploy, and maintain, especially in large organizations. We explore how to quantify the effects of a software tool once it has been deployed in a development environment. We present an effort-analysis method that derives tool usage statistics and developer actions from a project's change history (version control system) and uses a novel effort estimation algorithm to quantify the effort savings attributable to tool usage. We apply this method to assess the impact of a software tool called VE, a version-sensitive editor used in Bell Labs. VE aids software developers in coping with the rampant use of certain preprocessor directives (similar to #if/#endif in C source files). Our analysis found that developers were approximately 40 percent more productive when using VE than when using standard text editors.",2002,0, 1973,A survey on software architecture analysis methods,"The purpose of the architecture evaluation of a software system is to analyze the architecture to identify potential risks and to verify that the quality requirements have been addressed in the design. This survey shows the state of the research at this moment, in this domain, by presenting and discussing eight of the most representative architecture analysis methods. The selection of the studied methods tries to cover as many particular views of objective reflections as possible to be derived from the general goal. The role of the discussion is to offer guidelines related to the use of the most suitable method for an architecture assessment process. We will concentrate on discovering similarities and differences between these eight available methods by making classifications, comparisons and appropriateness studies.",2002,0, 1974,Software measurement: uncertainty and causal modeling,"Software measurement can play an important risk management role during product development. For example, metrics incorporated into predictive models can give advance warning of potential risks. The authors show how to use Bayesian networks, a graphical modeling technique, to predict software defects and perform ""what if"" scenarios.",2002,0, 1975,Comprehension of object-oriented software cohesion: the empirical quagmire,"Chidamber and Kemerer (1991) proposed an object-oriented (OO) metric suite which included the Lack of Cohesion Of Methods (LCOM) metric. Despite considerable effort both theoretically and empirically since then, the software engineering community is still no nearer finding a generally accepted definition or measure of OO cohesion. Yet, achieving highly cohesive software is a cornerstone of software comprehension and hence, maintainability. In this paper, we suggest a number of suppositions as to why a definition has eluded (and we feel will continue to elude) us. We support these suppositions with empirical evidence from three large C++ systems and a cohesion metric based on the parameters of the class methods; we also draw from other related work. Two major conclusions emerge from the study. Firstly, any sensible cohesion metric does at least provide insight into the features of the systems being analysed.
Secondly however, and less reassuringly, the deeper the investigative search for a definitive measure of cohesion, the more problematic its understanding becomes; this casts serious doubt on the use of cohesion as a meaningful feature of object-orientation and its viability as a tool for software comprehension.",2002,0, 1976,Adaptive parameter tuning for relevance feedback of information retrieval,"Relevance feedback is an effective way to improve the performance of an information retrieval system. In practice, the parameters for feedback were usually determined manually without the consideration of the quality of the query. We propose a new concept (adaptiveness) to measure the quality of the query. We built two models to predict the adaptiveness of the query. The parameters for feedback were then determined by the quality of the query. Our experiments on TREC data showed that the performance was improved significantly when compared with blind relevance feedback.",2002,0, 1977,The application of a distributed system-level diagnosis algorithm in dynamic positioning system,"This paper introduces the application of a distributed system-level fault diagnosis algorithm for detecting and diagnosing faulty processors in dynamic positioning system (DPS) of an offshore vessel. The system architecture of DPS is a loose coupling distributed multiprocessor system, which adopts the technique of Intel's MULTIBUS II and develops a software application on the platform of iRMX OS. In this paper a new approach to the diagnosis problem is presented, including an adaptive PMC model, distributed diagnosis including self-diagnosis and interactive-diagnosis, and system graph-theoretic model. The self-diagnosis fully utilises the individual results of built-in self-tests as a part of diagnosis work. Interactive-diagnosis means that in the system fault-free units perform simple periodic tests on one another under the direction of the graph-theoretic model by interactively communicating, and every unit can only send the diagnosis information to its considered fault-free units. Finally, we illustrate the procedure of diagnosis verification. The results obtained show that the adaptive PMC model is applicable, the distributed system-level diagnosis algorithm is proper, and the applications of diagnosis and verification are reliable and practicable.",2002,0, 1978,Facilitating adaptation to trouble spots in wireless MANs,"This paper presents schemes that enable high level communications protocols and applications to adapt to connectivity loss and quality degradation in metropolitan area wireless networks. We postulate that the majority of these problem areas or trouble spots, which are intrinsic to wireless networks, are related to location and environmental factors. Based on this premise, we propose a mechanism that gathers semantic data pertaining to trouble spots; prior knowledge of such locations can be used by higher-level communication protocols to preemptively adapt, thereby avoiding undesirable effects at the application level. To facilitate the detection and categorization of trouble spots, we propose a list of metrics to analyze the status of a wireless service. 
We report on our experiences with using these metrics to identify trouble spots and present initial results from an experimental evaluation of their effectiveness.",2002,0, 1979,An adaptive distance relay and its performance comparison with a fixed data window distance relay,"This paper describes the design, implementation and testing of an adaptive distance relay. The relay uses a fault detector to determine the inception of a fault and then uses data windows of appropriate length for estimating phasors and seen impedances. Hardware and software of the proposed adaptive relay are also described in this paper. The relay was tested using a model power system and a real-time playback simulator. Performance of the relay was compared with a fixed data window distance relay. Some results are reported in this paper. These results indicate that the adaptive distance relay provides faster tripping in comparison to the fixed data window distance relay.",2002,0,2131 1980,Assessing the quality of Web-based applications via navigational structures,"We study the link validity of a Web site's navigational structure to enhance Web quality. Our approach employs the principle of statistical usage testing to develop an efficient and effective testing mechanism. Some advantages of our approach include generating test scripts systematically, providing coverage metrics, and executing hyperlinks only once.",2002,0, 1981,Calibration and estimation of redundant signals,This paper presents an adaptive filter for real-time calibration of redundant signals consisting of sensor data and/or analytically derived measurements. The measurement noise covariance matrix is adjusted as a function of the a posteriori probabilities of failure of the individual signals. An estimate of the measured variable is obtained as a weighted average of the calibrated signals. The weighting matrix is recursively updated in real time instead of being fixed a priori. The filter software is presently hosted in a Pentium platform and is portable to other commercial platforms. The filter can be used to enhance the Instrumentation and Control System Software in large-scale dynamical systems.,2002,0, 1982,Needs for communications and onboard processing in the vision era,"The NASA New Millennium Program (NMP), in conjunction with the Earth Science Enterprise Technology Office, has examined the capability needs of future NASA Earth Science missions and defined a set of high priority technologies that offer broad benefits to future missions, which would benefit from validation in space before their use in a science mission. In the area of spacecraft communications, the need for high and ultra-high data rates is driving development of communications technologies. This paper describes the current vision and roadmaps of the NMP for the technology needed to support ultra-high data rate downlink to Earth. Hyperspectral land imaging, radar imaging and multi-instrument platforms represent the most demanding classes of instruments in which large data flows place limitations upon the performance of the instrument and systems. The existing and prospective data distribution (DD) modes employ various types of links, such as DD from low-Earth-orbit (LEO) spacecraft direct to the ground, DD from geosynchronous (GEO) spacecraft, LEO to GEO relays, multi-spacecraft links, and sensor webs. Depending on the type of link, the current data rate requirements vary from 2 Mbps (LEO to GEO relay) to 150 Mbps (DD from LEO spacecraft). 
It is expected that in the 20-year timeframe, the link data rates may increase to 100 Gbps. To ensure such capabilities, the aggressive development of communication technologies in the optical frequency region is necessary. Current technology readiness levels (TRL) of the technology components for the space segment of communications hardware vary from 3 (proof of concept) to 5 (validation in relevant environment). Development of onboard processing represents another area driven by increasing data rates of spaceborne experiments. The technologies that need further development include data compression, event recognition and response, as well as specific hyperspectral and radar data processing. Aspects of onboard processing technologies requiring flight validation include: fault-tolerant computing and processor stability, autonomous event detection and response, situation-based data compression and processing. The required technology validation missions can be divided into two categories: hardware-related missions and software-related missions. Objectives of the first kind of missions include radiation-tolerant processors and a radiation-tolerant packet-switching communications node/network interface. Objectives of the second kind of missions include autonomous spacecraft operations and payload (instrument-specific) system operations.",2002,0, 1983,"Estimation of net primary productivity using Boreal Ecosystem Productivity Simulator - a case study for Hokkaido Island, Japan","This paper describes a method for the estimation of the net primary productivity (NPP) by integrating remotely sensed and GIS data with a process-based ecosystem model (BEPS: Boreal Ecosystem Productivity Simulator) for Hokkaido Island, Japan. Using this method, we calculated that the mean and total NPP for the study area in 1998 were 644 g C m-2 year-1 and 0.078 Pg C year-1, respectively. The effect of the quality of the model input requirements on accurate NPP estimation using a process-based model was also checked. The results show that the higher quality input data obtained from GIS data sets for a process-based model improved the NPP estimation accuracy for Hokkaido Island by about 16.6-39.7%.",2002,0, 1984,Object-oriented information extraction for the monitoring of sensitive aquatic environments,"According to the new European water framework directive, water management plans will become mandatory for all inland water bodies. The development of management plans requires information about the environmental status of the water bodies and their surroundings. Wetland vegetation is a good indicator for water quality monitoring. Remote sensing methods are assumed to be able to provide the specialist with the information needed for the monitoring of wetland vegetation. Applying methods offered by the object-oriented eCognition software, an evaluation chain for sensitive aquatic environments is under development which should be used for monitoring purposes. This paper describes two approaches for wetland classification and the evaluation of reed stands. The first one derives the information exclusively from Ikonos data analysis, simulating the case of a basic inventory. The second one takes advantage of existing thematic GIS layers, a situation which often arises in monitoring applications.",2002,0, 1985,Fair triangle mesh generation with discrete elastica,"Surface fairing, generating free-form surfaces satisfying aesthetic requirements, is important for many computer graphics and geometric modeling applications.
A common approach for fair surface design consists of minimization of fairness measures penalizing large curvature values and curvature oscillations. The paper develops a numerical approach for fair surface modeling via curvature-driven evolutions of triangle meshes. Consider a smooth surface each point of which moves in the normal direction with speed equal to a function of curvature and curvature derivatives. If the speed function is chosen properly, the evolving surface converges to a desired shape minimizing a given fairness measure. Smooth surface evolutions are approximated by evolutions of triangle meshes. A tangent speed component is used to improve the quality of the evolving mesh and to increase computational stability. Contributions of the paper also include an improved method for estimating the mean curvature.",2002,0, 1986,Automatic detection and exploitation of branch constraints for timing analysis,"Predicting the worst-case execution time (WCET) and best-case execution time (BCET) of a real-time program is a challenging task. Though much progress has been made in obtaining tighter timing predictions by using techniques that model the architectural features of a machine, significant overestimations of WCET and underestimations of BCET can still occur. Even with perfect architectural modeling, dependencies on data values can constrain the outcome of conditional branches and the corresponding set of paths that can be taken in a program. While branch constraint information has been used in the past by some timing analyzers, it has typically been specified manually, which is both tedious and error prone. This paper describes efficient techniques for automatically detecting branch constraints by a compiler and automatically exploiting these constraints within a timing analyzer. The result is significantly tighter timing analysis predictions without requiring additional interaction with a user.",2002,0, 1987,Fast software implementation of MPEG advanced audio encoder,"An optimized software implementation of a high quality MPEG AAC-LC (low complexity) audio encoder is presented in this paper. The standard reference encoder is improved by utilizing several algorithmic optimizations (fast psycho-acoustic model, new tonality estimation, new time domain block switching, optimized quantizer and Huffman coder) and very careful code optimizations for PC CPU architectures with a SIMD (single-instruction-multiple-data) instruction set. The psychoacoustic model used the MDCT filterbank for energy estimation and peak detection as a measure of tonality. The block size decision is based on local perceptual entropies as well as LPC analysis of the time signal. Algorithmic optimizations in the quantizer include loop control module modification and an optimized Huffman search. Code optimization is based on parallel processing by replacing vector algebra and math functions with their optimized equivalents from the Intel® Signal Processing Library (SPL). The implemented codec outperforms consumer MP3 encoders at 30% less bitrate while at the same time achieving encoding times several times faster than real-time.",2002,0, 1988,Built-in-test for processor-based modules,"This article describes a software methodology for built-in-test (BIT) for processor-based modules. Firstly, an example of the hardware comprising a common processor module is given.
This is followed by a description of the BIT software architecture and details of the component elements: the test executive, which integrates all the components and provides the interface to external applications and boot programs; the BIT reconfiguration table, for test control flexibility; test result handling; and the individual test functions, written in both assembly language and C. This BIT product is extremely flexible, portable, and very thorough, offering high fault detection, high isolation coverage, and support for unique test requirements.",2002,0, 1989,A software-reliability growth model for N-version programming systems,"This paper presents an NHPP-based SRGM (software reliability growth model) for NVP (N-version programming) systems (NVP-SRGM) based on the NHPP (nonhomogeneous Poisson process). Although many papers have been devoted to modeling NVP-system reliability, most of them consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to continuous removal of faults from software versions. The model in this paper is the first reliability-growth model for NVP systems which considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to removing this fault. Due to the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced into the software. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established, in which the multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. More application is needed to fully validate the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can be used to help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.",2002,0, 1990,Advanced pattern recognition for detection of complex software aging phenomena in online transaction processing servers,"Software aging phenomena have been recently studied; one particularly complex type is shared memory pool latch contention in large OLTP servers. Latch contention onset leads to severe performance degradation until a manual rejuvenation of the DBMS shared memory pool is triggered. Conventional approaches to automated rejuvenation have failed for latch contention because no single resource metric has been identified that can be monitored to alert the onset of this complex mechanism. The current investigation explores the feasibility of applying an advanced pattern recognition method that is embodied in a commercially available equipment condition monitoring system (SmartSignal eCM™) for proactive annunciation of software-aging faults. One hundred data signals are monitored from a large OLTP server, collected at 20-60 sec. intervals over a 5-month period.
Results show 13 variables consistently deviate from normal operation prior to a latch event, providing up to 2 hours early warning.",2002,0, 1991,Modeling and quantification of security attributes of software systems,"Quite often failures in network based services and server systems may not be accidental, but rather caused by deliberate security intrusions. We would like such systems to either completely preclude the possibility of a security intrusion or design them to be robust enough to continue functioning despite security attacks. Not only is it important to prevent or tolerate security intrusions, it is equally important to treat security as a QoS attribute at par with, if not more important than other QoS attributes such as availability and performability. This paper deals with various issues related to quantifying the security attribute of an intrusion tolerant system, such as the SITAR system. A security intrusion and the response of an intrusion tolerant system to the attack is modeled as a random process. This facilitates the use of stochastic modeling techniques to capture the attacker behavior as well as the system's response to a security intrusion. This model is used to analyze and quantify the security attributes of the system. The security quantification analysis is first carried out for steady-state behavior leading to measures like steady-state availability. By transforming this model to a model with absorbing states, we compute a security measure called the ""mean time (or effort) to security failure"" and also compute probabilities of security failure due to violations of different security attributes.",2002,0, 1992,Recovery and performance balance of a COTS DBMS in the presence of operator faults,"A major cause of failures in large database management systems (DBMS) is operator faults. Although most of the complex DBMS have comprehensive recovery mechanisms, the effectiveness of these mechanisms is difficult to characterize. On the other hand, the tuning of a large database is very complex and database administrators tend to concentrate on performance tuning and disregard the recovery mechanisms. Above all, database administrators seldom have feedback on how good a given configuration is concerning recovery. This paper proposes an experimental approach to characterize both the performance and the recoverability in DBMS. Our approach is presented through a concrete example of benchmarking the performance and recovery of an Oracle DBMS running the standard TPC-C benchmark, extended to include two new elements: a fault load based on operator faults and measures related to recoverability. A classification of operator faults in DBMS is proposed. The paper ends with the discussion of the results and the proposal of guidelines to help database administrators in finding the balance between performance and recovery tuning.",2002,0, 1993,A compositional approach to monitoring distributed systems,"This paper proposes a specification-based monitoring approach for automatic run-time detection of software errors and failures of distributed systems. The specification is assumed to be expressed in communicating finite state machines based formalism. The monitor observes the external I/O and partial state information of the target distributed system and uses them to interpret the specification. The approach is compositional as it achieves global monitoring by combining the component-level monitoring. 
The core of the paper describes the architecture and operations of the monitor. The monitor includes several independent mechanisms, each tailored to detecting specific kinds of errors or failures. Their operations are described in detail using illustrative examples. Techniques for dealing with nondeterminism and concurrency issues in monitoring a distributed system are also discussed with respect to the considered model and specification. A case study describing the application of the prototype monitor to an embedded system is presented.",2002,0, 1994,ATPG for timing-induced functional errors on trigger events in hardware-software systems,We consider timing-induced functional errors in interprocess communication. We present an Automatic Test Pattern Generation (ATPG) algorithm for the co-validation of hardware-software systems. Events on trigger signals (signals contained in the sensitivity list of a process) implement the basic synchronization mechanism in most hardware-software description languages. Timing faults on trigger signals can have a serious impact on system behavior. We target timing faults on trigger signals by enhancing a timing fault model proposed in previous work. The ATPG algorithm which we present targets the new timing fault model and provides significant performance benefits over manual test generation which is typically used for co-validation.,2002,0, 1995,A system level approach in designing dual-duplex fault tolerant embedded systems,"This paper presents an approach for designing embedded systems able to tolerate hardware faults, defined as an evolution of our previous work proposing a hardware/software co-design framework for realizing reliable embedded systems. The framework is extended to support the designer in achieving embedded systems with fault-tolerant properties while minimizing overheads and limiting power consumption. A reference system architecture is proposed; the specific hardware/software implementation and reliability methodologies (to achieve the fault tolerance properties) are the result of an enhanced hw/sw partitioning process driven by the designer's constraints and by the reliability constraints, set at the beginning of the design process. By also introducing the reliability constraints during specification, the final system can benefit from the introduced redundancy for performance gains as well, while limiting area, time, performance and power consumption overheads.",2002,0, 1996,Multi-level fault injection experiments based on VHDL descriptions: a case study,"The probability of transient faults increases with the evolution of technologies. There is a corresponding increased demand for an early analysis of erroneous behaviors. This paper reports on results obtained with SEU-like fault injections in VHDL descriptions of digital circuits. Several circuit description levels are considered, as well as several fault modeling levels. These results show that an analysis performed at a very early stage in the design process can actually give a helpful insight into the response of a circuit when a fault occurs.",2002,0, 1997,Analysis of SEU effects in a pipelined processor,"Modern processors embed features such as pipelined execution units and cache memories that can hardly be controlled by programmers through the processor instruction set.
As a result, software-based fault injection approaches are no longer suitable for assessing the effects of SEUs in modern processors, since they are not able to evaluate the effects of SEUs affecting pipelines and caches. In this paper we report an analysis of a commercial processor core where the effects of SEUs located in the processor pipeline and cache memories are studied. Moreover, the obtained results are compared with those that software-based approaches provide. Experimental results show that software-based approaches may lead to errors of up to 400% in the failure rate estimation.",2002,0, 1998,Bit flip injection in processor-based architectures: a case study,"This paper presents the principles of two different approaches for the study of the effect of transient bit flips on the behavior of processor-based digital architectures: one of them is based on the on-line ""injection"" and execution of pieces of code (called CEU codes) using a suitable hardware architecture, while the other is performed using a behavioral-level processor description based on the so-called ""saboteurs"" method. Results obtained for benchmark programs executed by a widely used commercial 8-bit microprocessor allow both approaches to be validated and provide inputs for an original error rate prediction methodology. The comparison of predictions to measured error rates obtained from radiation ground testing validates the proposed error rate prediction approach.",2002,0, 1999,Error rate estimation for a flight application using the CEU fault injection approach,This paper aims at validating the efficiency of a fault injection approach to predict the error rate of applications intended to operate in a radiation environment. Soft error injection experiments and radiation ground testing were performed on software modules using a digital board built on a digital signal processor which is included in a satellite instrument. The analysis of experimental results demonstrates the potential of the methodology to predict the error rate of complex applications.,2002,0, 2000,Predictable instruction caching for media processors,"The determinism of instruction cache performance can be considered a major problem in multimedia devices which hope to maximise their quality of service. If instructions are evicted from the cache by competing blocks of code, the running application will take significantly longer to execute than if the instructions were present. Since it is difficult to predict when this interference will occur, the performance of the algorithm at a given point in time is unclear. We propose the use of an automatically configured partitioned cache to protect regions of the application code from each other and hence minimise interference. As well as being specialised to the purpose of providing predictable performance, this cache can be specialised to the application being run, rather than for the average case, using simple compiler algorithms.",2002,0, 2001,Air damping of mechanical microbeam resonators with optical excitation and optical detection,"This paper is dedicated to both Finite Element Method (FEM) simulations and optical measurements of the mechanical quality factor dependence on air pressure of microbeam resonators. Firstly, a summary of previous theoretical works is given, for two different air cavity cases. Different damping mechanisms are discussed. Secondly, FEM simulation results using MEMCAD software are obtained for resonators with small air gaps.
Finally, prototypes of silicon resonators are fabricated using SOI technology, and characterized in a precisely controlled vacuum chamber. In order to investigate both air cavity cases, optical excitation/detection is used. Good agreement is obtained with theory and simulations.",2002,0, 2002,"Mathematical modeling, performance analysis and simulation of current Ethernet computer networks","This work describes object-oriented software, designed to be portable among different types of computer platforms, able to simulate different components of Ethernet local and long-distance area networks using the TCP/IP protocol. It is also able to detect problems in the design and operation of communication networks through the separate or joint analysis of these elements. The functions of the system are as follows: (a) analysis of elements from different layers and protocols of the simulated network (reference to OSI model); (b) analysis and efficiency measurement (quality of transmission, transfer rate, error rate) of the information transmitted on the network; (c) network performance evaluation and link capability analysis; (d) analysis of error detection and further correction capability as well as analysis of network failure tolerance. The software works using mathematical models that represent elements of different layers (reference to OSI model) of the network to be simulated as well as the performance of the abovementioned joint elements.",2002,0, 2003,A LabVIEW based data acquisition system for vibration monitoring and analysis,"LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is gaining popularity as a graphical programming language, especially for data acquisition and measurement. This is due to the vast array of data acquisition cards and measurement systems which can be supported by LabVIEW as well as the relative ease with which advanced software can be programmed. One area of application of LabVIEW is the monitoring and analysis of vibration signals. The analysis and monitoring of the signal are of concern for fault detection and predictive maintenance. This paper describes LabVIEW based data acquisition and analysis developed specifically for vibration monitoring and used with vibration fault simulation systems (VFSS). On-line displays of time and frequency domains of the vibration signal provide a user-friendly data acquisition interface.",2002,0, 2004,Application of ANN to power system fault analysis,"This paper presents the computer architecture development using an Artificial Neural Network (ANN) as an approach for predicting faults in a large interconnected transmission system. Transmission line faults can be classified using the bus voltage and line fault current. Monitoring the performance of these two factors is very useful for power system protection devices. The ANN is designed to be incorporated with the matrix-based software tool MATLAB Version 6.0, which deals with fault diagnosis in power systems. In MATLAB software modules, balanced and unbalanced faults can be simulated. The data generated from this software are to be used as training and testing sets in the Neural Ware Simulator.",2002,0, 2005,Strategy to improve the indoor coverage for mobile station,"This paper presents an evaluation of whether the indoor signal strength for commercial buildings fulfills the cell planning requirement. In order to provide high quality cellular service, it is necessary to place an array of distributed antennas connected using feeder cable within the building.
Using the feeder cable approach, splitters, and computer software developed in Visual Basic 6.0, the effective radiated power (ERP) at each distributed antenna can be calculated, and how far the signal can reach with the calculated ERP can also be predicted. This design process can be used to obtain an estimated indoor system requirement. All of the requirements can be achieved by applying the method in this paper.",2002,0, 2006,Neighborhood selection for IDDQ outlier screening at wafer sort,"To screen defective dies, IDDQ tests require a reliable estimate of each die's defect-free measurement. The nearest-neighbor residual (NNR) method provides a straightforward, data-driven estimate of test measurements for improved identification of die outliers.",2002,0, 2007,"QoS tradeoffs for guidance, navigation, and control","Future space missions will require onboard autonomy to reduce data, plan activities, and react appropriately to complex dynamic events. Software to support such behaviors is computationally intensive but must execute with sufficient speed to accomplish mission goals. The limited processing resources onboard spacecraft must be split between the new software and required guidance, navigation, control, and communication tasks. To date, control-related processes have been scheduled with a fixed execution period, and autonomy processes are then fit into the remaining slack time slots. We propose the use of quality-of-service (QoS) negotiation to explicitly trade off the performance of all processing tasks, including those related to spacecraft control. We characterize controller performance based on exhaustive search and a Lyapunov optimization technique and present results that analytically predict worst-case performance degradation characteristics. The results are illustrated by application to a second-order linear system with a linear state feedback control law.",2002,0, 2008,"Strategies for fault-tolerant, space-based computing: Lessons learned from the ARGOS testbed","The Advanced Space Computing and Autonomy Testbed on the ARGOS satellite provides the first direct, on-orbit comparison of a modern radiation-hardened 32-bit processor with a similar COTS processor. This investigation was motivated by the need for higher capability computers for space flight use than could be met with available radiation-hardened components. The use of COTS devices for space applications has been suggested to accelerate the development cycle and produce cost-effective systems. Software-implemented corrections of radiation-induced SEUs (SIHFT) can provide low-cost solutions for enhancing the reliability of these systems. We have flown two 32-bit single board computers (SBCs) onboard the ARGOS spacecraft. One is full COTS, while the other is RAD-hard. The COTS board has an order of magnitude higher computational throughput than the RAD-hard board, offsetting the performance overhead of the SIHFT techniques used on the COTS board while consuming less power.",2002,0, 2009,Fault injection experiment results in space borne parallel application programs,"Development of the REE Commercial-Off-The-Shelf (COTS) based space-borne supercomputer requires a detailed knowledge of system behavior in the presence of Single Event Upset (SEU) induced faults.
When combined with a hardware radiation fault model and mission environment data in a medium grained system model, experimentally obtained fault behavior data can be used to: predict system reliability, availability and performance; determine optimal fault detection methods and boundaries; and define high ROI fault tolerance strategies. The REE project has developed a fault injection suite of tools and a methodology for experimentally determining system behavior statistics in the presence of application level SEU induced transient faults. Initial characterization of science data application code for an autonomous Mars Rover geology application indicates that this code is relatively insensitive to SEUs and thus can be made highly immune to application level faults with relatively low overhead strategies.",2002,0, 2010,A new audio skew detection and correction algorithm,"The lack of synchronisation between a sender clock and a receiver audio clock in an audio application results in an undesirable effect known as ""audio skew"". This paper proposes and implements a new approach to detecting and correcting audio skew, focusing on the accuracy of measurements and on the algorithm's effect on the audio experience of the listener. The algorithms presented are shown to remove audio skew successfully, thus reducing delay and loss and hence improving audio quality.",2002,0, 2011,Watermark detection: benchmarking perspectives,"Benchmarking of watermarking algorithms is a complicated task that requires examination of a set of mutually dependent performance factors (algorithm complexity, decoding/detection performance, and perceptual quality). This paper will focus on detection/decoding performance evaluation and try to summarize its basic principles. A methodology for deriving the corresponding performance metrics will also be provided.",2002,0, 2012,Upgrading engine test cells for improved troubleshooting and diagnostics,"Upgrading military engine test cells with advanced diagnostic and troubleshooting capabilities will play a critical role in increasing aircraft availability and test cell effectiveness while simultaneously reducing engine operating and maintenance costs. Sophisticated performance and mechanical anomaly detection and fault classification algorithms utilizing thermodynamic, statistical, and empirical engine models are now being implemented as part of a United States Air Force Advanced Test Cell Upgrade Initiative. Under this program, a comprehensive set of realtime and post-test diagnostic software modules, including sensor validation algorithms, performance fault classification techniques and vibration feature analysis are being developed. An automated troubleshooting guide is also being implemented to streamline the troubleshooting process for both inexperienced and experienced technicians. This artificial intelligence based tool enhances the conventional troubleshooting tree architecture by incorporating probability of occurrence statistics to optimize the troubleshooting path. This paper describes the development and implementation of the F404 engine test cell upgrade at the Jacksonville Naval Air Station.",2002,0, 2013,Using SPIN model checking for flight software verification,"Flight software is the central nervous system of modern spacecraft. Verifying spacecraft flight software to assure that it operates correctly and safely is presently an intensive and costly process. 
A multitude of scenarios and tests must be devised, executed and reviewed to provide reasonable confidence that the software will perform as intended and not endanger the spacecraft. Undetected software defects on spacecraft and launch vehicles have caused embarrassing and costly failures in recent years. Model checking is a technique for software verification that can detect concurrency defects that are otherwise difficult to discover. Within appropriate constraints, a model checker can perform an exhaustive state-space search on a software design or implementation and alert the implementing organization to potential design deficiencies. Unfortunately, model checking of large software systems requires an often-too-substantial effort in developing and maintaining the software functional models. A recent development in this area, however, promises to enable software-implementing organizations to take advantage of the usefulness of model checking without hand-built functional models. This development is the appearance of ""model extractors"". A model extractor permits the automated and repeated testing of code as built rather than of separate design models. This allows model checking to be used without the overhead and perils involved in maintaining separate models. We have attempted to apply model checking to legacy flight software from NASA's Deep Space One (DS1) mission. This software was implemented in C and contained some known defects at launch that are detectable with a model checker. We describe the model checking process, the tools used, and the methods and conditions necessary to successfully perform model checking on the DS1 flight software.",2002,0, 2014,Experience of applying statistical control techniques to the function test phase of a large telecommunications system,"The software test process of a large telecommunications system is presented. The feasibility of introducing statistical process control techniques to one crucial software test phase, namely the function test, is explored. This phase has been identified as a strategic one for meeting the commitments to customers with respect to quality objectives. Analysis of past released products revealed that a high percentage of the failures experienced in operation corresponded to software faults that could have been discovered during the function test phase. A brief description of the statistical estimators investigated (Classical vs. Bayesian) is provided, along with a few examples of their use over real sets of data. Far from being aimed at identifying new statistical models, the focus of this work is rather on putting measurement into practice: easy and effective steps to improve the status of control over the test process in a smooth, bottom-up approach are suggested.",2002,0, 2015,Control-theoretic bandwidth-on-demand protocol for satellite networks,"In this paper, a control-theoretic DAMA (Demand Assignment Multiple Access) protocol for ATM-based geostationary (GEO) satellite networks is introduced. The proposed scheme, developed for the European Union project ""GOECAST"" (Information Society and Technology (IST) programme), deals with the lower priority traffic categories, which have to adapt their transmission rates to the traffic conditions. The novelty of this paper consists of the use of control theory concepts to model the satellite system and to generate the bandwidth requests. The proposed scheme is stable, avoids wasting the assigned capacity and is capable of dealing with congestion.
Software simulations have been performed with the OPNET tool, in order to test the effectiveness of the proposed protocol.",2002,0, 2016,Quality-based tuning of cell downlink load target and link power maxima in WCDMA,"The objective of the paper is to validate the feasibility of auto-tuning WCDMA link power maxima and adjust cell downlink load level targets based on quality of service. The downlink cell load level is measured using total wideband transmission power. The quality indicators used are call-blocking probability, packet queuing probability and downlink link power outage. The objective is to improve performance and operability of the network with control software aiming for a specific quality of service. The downlink link maxima in each cell are regularly adjusted with a control method in order to improve performance under different load level targets. The approach is validated using a dynamic WCDMA system simulator. The conducted simulations support the assumption that the downlink performance can be managed and improved by the proposed cell-based automated optimization.",2002,0, 2017,First experiments relating behavior selection architectures to environmental complexity,"Assessing the performance of behavior selection architectures for autonomous robots is a complex task that depends on many factors. This paper reports a study comparing four motivated behavior-based architectures in different worlds with varying degrees and types of complexity, and analyzes performance results (in terms of viability, life span, and global life quality) relating architectural features to environmental complexity.",2002,0, 2018,Static analysis of SEU effects on software applications,"Control flow errors have been widely addressed in literature as a possible threat to the dependability of computer systems, and many clever techniques have been proposed to detect and tolerate them. Nevertheless, it has never been discussed if the overheads introduced by many of these techniques are justified by a reasonable probability of incurring control flow errors. This paper presents a static executable code analysis methodology able to compute, depending on the target microprocessor platform, the upper-bound probability that a given application incurs in a control flow error.",2002,0, 2019,Testing finite state machines based on a structural coverage metric,"Verification is a critical phase in the development of any hardware and software system. Finite state machines have been widely used to model hardware and software systems. Therefore, testing finite state machines (FSMs) is an important issue. Coverage analysis of a test suite for a system's implementation determines the adequacy and the confidence level of the verification phase. In this paper, we derive a fault coverage metric for a test suite for an FSM specification. We also extend this metric for fault coverage estimation of interconnected FSMs, and we propose symbolic input based fault coverage for large FSMs. Finally, we also study incremental construction of a test suite associated with a coverage for a given FSM specification.",2002,0, 2020,Application of high-quality built-in test to industrial designs,"This paper presents an approach for high-quality built-in test using a neighborhood pattern generator (NPG). 
The proposed NPG is practically acceptable because (a) its structure is independent of circuit under test, (b) it requires low area overhead and no performance degradation, and (c) it can encode deterministic test cubes, not only for stuck-at faults but also transition faults, with high probability. Experimental results for large industrial circuits illustrate the efficiency of the proposed approach.",2002,0, 2021,Measuring Web application quality with WebQEM,"This article discusses using WebQEM, a quantitative evaluation strategy to assess Web site and application quality. Defining and measuring quality indicators can help stakeholders understand and improve Web products. An e-commerce case study illustrates the methodology's utility in systematically assessing attributes that influence product quality",2002,0, 2022,Equal resistance control-a control methodology for line-connected converters that offers balanced resistive load under unbalanced input voltage conditions,"The result of several control schemes, applied to three-phase line-connected PWM converters under unbalanced input voltage conditions, is to further reduce the utility supply voltages due to the drawn currents. The proposed method, named equal resistance control (ERC), considers the health of the phases during unbalanced conditions in order to minimize the effects of the drawn power on the network voltages. The control philosophy is to make the current drawn from a particular phase proportional to the amplitude of remaining voltage on that phase and in-synchronism with its phase angle. The power transfer to the load is therefore optimal since no reactive power is drawn from the network. The ERC philosophy uses a feedforward per-phase control structure, where the voltage drop across the input impedance is estimated. An application example is included where this scheme is applied to a power quality device designed to mitigate voltage disturbances. Theoretical analysis, computer simulations and experimental measurements are included.",2002,0, 2023,Implementation and performance of cooperative control of shunt active filters for harmonic damping throughout a power distribution system,"This paper proposes the cooperative control of multiple active filters based on voltage detection for harmonic damping throughout a power distribution system. The arrangement of a real distribution system would be changed according to system operation, and/or fault conditions. In addition, shunt capacitors and loads are individually connected to, or disconnected from, the distribution system. Independent control might make multiple active filters produce unbalanced compensating currents. This paper presents hardware and software implementations of cooperative control for two active filters. Experimental results verify the effectiveness of the cooperative control with the help of a communication system.",2002,0,2181 2024,Using regression trees to classify fault-prone software modules,"Software faults are defects in software modules that might cause failures. Software developers tend to focus on faults, because they are closely related to the amount of rework necessary to prevent future operational software failures. The goal of this paper is to predict which modules are fault-prone and to do it early enough in the life cycle to be useful to developers. A regression tree is an algorithm represented by an abstract tree, where the response variable is a real quantity. 
Software modules are classified as fault-prone or not by comparing the predicted value to a threshold. A classification rule is proposed that allows one to choose a preferred balance between the two types of misclassification rates. A case study of a very large telecommunications system considered software modules to be fault-prone if any faults were discovered by customers. Our research shows that classifying fault-prone modules with regression trees and using the classification rule in this paper resulted in predictions with satisfactory accuracy and robustness.",2002,0, 2025,A fuzzy logic framework to improve the performance and interpretation of rule-based quality prediction models for OO software,"Current object-oriented (OO) software systems must satisfy new requirements that include quality aspects. These, contrary to functional requirements, are difficult to determine during the test phase of a project. Predictive and estimation models offer an interesting solution to this problem. This paper describes an original approach to build rule-based predictive models that are based on fuzzy logic and that enhance the performance of classical decision trees. The approach also attempts to bridge the cognitive gap that may exist between the antecedent and the consequent of a rule by turning the latter into a chain of subrules that account for domain knowledge. The whole framework is evaluated on a set of OO applications.",2002,0, 2026,Application of hazard analysis to software quality modelling,"Quality is a fundamental concept in software and information system development. It is also a complex and elusive concept. A large number of quality models have been developed for understanding, measuring and predicting quality of software and information systems. It has been recognised that quality models should be constructed in accordance with the specific features of the application domain. This paper proposes a systematic method for constructing quality models of information systems. A diagrammatic notation is devised to represent quality models that enclose application specific features. Techniques of hazard analysis for the development and deployment of safety related systems are adapted for deriving quality models from system architectural designs. The method is illustrated by a part of Web-based information systems.",2002,0, 2027,Rejection strategies and confidence measures for a k-NN classifier in an OCR task,"In handwritten character recognition, the rejection of extraneous patterns, like image noise, strokes or corrections, can significantly improve the practical usefulness of a system. In this paper a combination of two confidence measures defined for a k-nearest neighbors (k-NN) classifier is proposed. Experiments are presented comparing the performance of the same system with and without the new rejection rules.",2002,0, 2028,Edge color distribution transform: an efficient tool for object detection in images,"Object detection in images is a fundamental task in many image analysis applications. Existing methods for low-level object detection always perform the color-similarity analyses in the 2D image space. However, the crowded edges of different objects make the detection complex and error-prone. The paper proposes to detect objects in a new edge color distribution space (ECDS) rather than in the image space.
In the 3D ECDS, the edges of different objects are segregated and the spatial relation within the same object is kept as well, which makes object detection easier and less error-prone. Since uniform-color objects and textured objects have different distribution characteristics in ECDS, the paper gives a 3D edge-tracking algorithm for the former and a cuboid-growing algorithm for the latter. The detection results are correct and noise-free, so they are suitable for high-level object detection. The experimental results on a synthetic image and a real-life image are included.",2002,0, 2029,Managing software quality with defects,This paper describes two common approaches to measuring and modeling software quality across the project life cycle so that it can be made visible to management. It discusses examples of their application in real industry settings. Both of the examples presented come from CMM Level 4 organizations.,2002,0, 2030,Private information retrieval in the presence of malicious failures,"In the application domain of online information services such as online census information, health records and real-time stock quotes, there are at least two fundamental challenges: the protection of users' privacy and the assurance of service availability. We present a fault-tolerant scheme for private information retrieval (FT-PIR) that protects users' privacy and ensures service provision in the presence of malicious server failures. An error detection algorithm is introduced into this scheme to detect the corrupted results from servers. The analytical and experimental results show that the FT-PIR scheme can tolerate malicious server failures effectively and prevent any user information from being leaked to attackers. This new scheme does not rely on any unproven cryptographic premise or on the availability of tamperproof hardware. An implementation of the FT-PIR scheme on a distributed database system suggests just a modest level of performance overhead.",2002,0, 2031,A structured approach to handling on-line interface upgrades,"The integration of complex systems out of existing systems is an active area of research and development. There are many practical situations in which the interfaces of the component systems, for example belonging to separate organisations, are changed dynamically and without notification. In this paper we propose an approach to handling such upgrades in a structured and disciplined fashion. All interface changes are viewed as abnormal events and general fault tolerance mechanisms (exception handling, in particular) are applied to dealing with them. The paper outlines general ways of detecting such interface upgrades and recovering after them. An Internet Travel Agency is used as a case study.",2002,0, 2032,Maintenance in joint software development,"The need to combine several efforts in software development has become critical because of the software development requirements and geographically dispersed qualified human resources, background skills, working methods, and software tools among autonomous software enterprises - joint software development. We know that the organization and the development of software depend largely on human initiatives. All human initiatives are subject to change and perpetual evolution. A variety of studies have contributed to highlighting the problems arising from the change and the perpetual evolution of the software development.
These studies revealed that the majority of the effort spent on the software process is spent on maintenance. To reduce the effort of software maintenance in joint software development, it is therefore necessary to identify the features of software maintenance processes that can be automated. The software maintenance problems in joint software development are caused by the changes and perpetual evolutions at the organizational level of an enterprise or at the software development level. The complex nature of changes and perpetual evolutions at the organizational and the development levels includes the following factors: the maintenance contracts among enterprises, the abilities to develop, experiences and background in software development, methodologies of software development, tools available and localization of the organizations. This paper looks into the new software maintenance problems in joint software development by geographically dispersed virtual enterprises.",2002,0, 2033,Software quality knowledge discovery: a rough set approach,"This paper presents a practical knowledge discovery approach to software quality and resource allocation that incorporates recent advances in rough set theory, parameterized approximation spaces and rough neural computing. In addition, this research utilizes the results of recent studies of software quality measurement and prediction. A software quality measure quantifies the extent to which some specific attribute is present in a system. Such measurements are considered in the context of rough sets. This research provides a framework for making resource allocation decisions based on evaluation of various measurements of the complexity of software. Knowledge about software quality is gained during preprocessing, in which software measurements are analyzed using discretization techniques and genetic algorithms for deriving reducts and for the derivation of training and testing sets, especially in the context of the rough sets exploration system (RSES) developed by the logic group at the Institute of Mathematics at Warsaw University. Experiments show that both RSES and rough neural network models are effective in classifying software modules.",2002,0, 2034,SPiCE in action - experiences in tailoring and extension,"Today the standard ISO/IEC TR 15504: software process assessment, commonly known as SPiCE, has been in use for more than 5 years, with hundreds of software process assessments performed in organizations around the world. The success of the ISO 15504 approach is demonstrated by its application in and extension to all sectors featuring software development, in particular, space, automotive, finance, healthcare, and electronics. As the current Technical Report makes the transition into an international standard, many initiatives are underway to expand the application of process assessment to areas even outside of software development. This paper reports on experiences in the use of ISO 15504 both in tailoring the standard for particular industrial sectors and in expanding the process assessment approach into new domains.
In particular, three projects are discussed: SPiCE for SPACE, an ISO/IEC TR 15504 conformant method of software process assessment developed for the European space industry; SPiCE-9000 for SPACE, an assessment method for space quality management systems, based on ISO 9001:2000; and NOVE-IT, a project of the Swiss federal government to establish and assess processes covering IT procurement, development, operation, and service provision.",2002,0, 2035,Teletraffic simulation of cellular networks: modeling the handoff arrivals and the handoff delay,The paper presents an analysis of teletraffic variables in cellular networks. The variables studied are the time between two consecutive handoff arrivals and the handoff delay. These teletraffic variables are characterized by means of an advanced software simulator that models several scenarios assuming fixed channel allocation. Information about the quality of service is also provided. A large set of scenarios has been simulated and the characterization results derived from its study have been presented and analyzed.,2002,0, 2036,Study of integration method for on-line monitoring system of electrical equipment insulation and AM/FM/GIS,"After analyzing the integration of SCADA with AM/FM/GIS in the distribution management system (DMS), we point out that an integrated on-line SCADA/AM/FM/GIS only monitors the macroscopic running behavior of the electric power system, but does not take into account the insulation state and fault diagnosis situation of electrical equipment, so it has certain limitations. Meanwhile, the classification and function of an on-line monitoring system for electrical equipment are discussed in this paper, and an approach is proposed that integrates AM/FM/GIS with the on-line monitoring system of electrical equipment. The new integrated system not only comprehensively reflects the running state of the electric power system and its equipment but also can monitor the insulation state of electrical equipment on-line, so it has quite good economic and practical value. A few methods to integrate AM/FM/GIS with the equipment insulation on-line monitoring system are proposed by studying the direction of modern software development techniques and practical applications. A practical application instance shows that the approach is highly effective.",2002,0, 2037,Binarization of low quality text using a Markov random field model,"Binarization techniques have been developed in the document analysis community for over 30 years and many algorithms have been used successfully. On the other hand, document analysis tasks are more and more frequently being applied to multimedia documents such as video sequences. Due to low resolution and lossy compression, the binarization of text included in the frames is a non-trivial task. Existing techniques work without a model of the spatial relationships in the image, which makes them less powerful. We introduce a new technique based on a Markov random field model of the document. The model parameters (clique potentials) are learned from training data and the binary image is estimated in a Bayesian framework. The performance is evaluated using commercial OCR software.",2002,0, 2038,An application of Bayesian reasoning to improve functional test diagnostic effectiveness,"This paper describes a software package that embodies a Bayesian reasoning engine and modeling schema to significantly improve the ability to discern the defective component causing a failed functional test.
This software approach brings to functional test similar diagnostic capabilities that have become familiar to test engineers working with X-ray, automatic optical inspection (AOI) and in-circuit test (ICT) test technologies. This software package, known as Fault Detective, provides significantly improved diagnostic accuracy as compared to human efforts, and works with exactly the same data set as is currently available for diagnostic purposes. The model is based on the interaction of the functional test suite with the product functional block diagram. This approach also means that the software package is highly independent of the technology behind the system being diagnosed.",2002,0, 2039,Reducing No Fault Found using statistical processing and an expert system,"This paper describes a method for capturing avionics test failure results from Automated Test Equipment (ATE) and statistically processing this data to provide decision support for software engineers in reducing No Fault Found (NFF) cases at various testing levels. NFFs have plagued the avionics test and repair environment for years at enormous cost to readiness and logistics support. The costs in terms of depot repair and user exchange dollars that are wasted annually for unresolved cases are graphically illustrated. A diagnostic data model is presented, which automatically captures, archives and statistically processes test parameters and failure results which are then used to determine if an NFF at the next testing level resulted from a test anomaly. The model includes statistical process methods, which produce historical trend patterns for each part and serial numbered unit tested. An Expert System is used to detect statistical pattern changes and stores that information in a knowledge base. A Decision Support System (DSS) provides advisories for engineers and technicians by combining the statistical test pattern with unit performance changes in the knowledge base. Examples of specific F-16 NFF reduction results are provided.",2002,0, 2040,Structure in errors: a case study in fingerprint verification,"Measuring the accuracy of biometrics systems is important. Accuracy estimates depend very much on the quality of the test data that are used: including poor quality data will degrade the accuracy estimates. Factors that determine the good quality data and poor quality data can not be revealed by simple accuracy estimates. We propose a novel methodology to analyze how the overall accuracy estimate of a system relates to the specific quality of biometrics samples. Using a large collection of fingerprint samples, we present an analysis of system accuracy, which suggests that a significant part of the error is due to few fingers.",2002,0, 2041,Application of linguistic techniques for Use Case analysis,"The Use Case formalism is an effective way of capturing both business process and functional system requirements in a very simple and easy-to-learn way. Use Cases may be modeled in a graphical way (e.g. using the UML notation), mainly serving as a table of content for Use Cases. System behavior can more effectively be specified by structured natural language (NL) sentences. The use of NL as a way to specify the behavior of a system is however a critical point, due to the inherent ambiguity originating from different interpretations of natural language descriptions. We discuss the use of methods, based on a linguistic approach, to analyze functional requirements expressed by means of textual (NL) Use Cases. 
The aim is to collect quality metrics and detect defects related to such inherent ambiguity. In a series of preliminary experiments, we applied a number of tools for quality evaluation of NL text (and, in particular, of NL requirements documents) to an industrial Use Cases document. The result of the analysis is a set of metrics that aim to measure the quality of the NL textual description of Use Cases. We also discuss the application of selected linguistic analysis techniques that are provided by some of the tools to semantic analysis of NL expressed Use Case.",2002,0, 2042,Engineering real-time behavior,"This article presents a process that evaluates an application for real-time correctness throughout development and maintenance. It allows temporal correctness to be designed-in during development, rather than the more typical effort to test-in timing performance at the end of development. It avoids the costly problems that can arise when timing faults are found late in testing or, worse still, after deployment.",2002,0, 2043,Investigating the influence of software inspection process parameters on inspection meeting performance,"The question of whether inspection meetings justify their cost has been discussed in several studies. However, it is still open as to how modern defect detection techniques and team size influence meeting performance, particularly with respect to different classes of defect severity. The influence of software inspection process parameters (defect detection technique, team size, meeting effort) on defect detection effectiveness is investigated, i.e. the number of defects found for 31 teams which inspected a requirements document, to shed light on the performance of inspection meetings. The sets of defects reported by each team after the individual preparation phase (nominal-team performance) and after the team meeting (real-team performance) are compared. The main findings are that nominal teams perform significantly more effectively than real teams for all defect classes. This implies that meeting losses are on average higher than meeting gains. Meeting effort was positively correlated with meeting gains, indicating that synergy effects can only be realised if enough time is available. With regard to meeting losses, existing reports are confirmed that for a given defect, the probability of being lost in a meeting decreases with an increase in the number of inspectors who detected this defect during individual preparation.",2002,0, 2044,Requirements in the medical domain: Experiences and prescriptions,"Research shows that information flow in health care systems is inefficient and prone to error. Data is lost, and physicians must repeat tests and examinations because the results are unavailable at the right place and time. Cases of erroneous medication - resulting from misinterpreted, misunderstood, or missing information - are well known and have caused serious health problems and even death. We strongly believe that through effective use of information technology, we can improve both the quality and efficiency of the health sector's work. Introducing a new system might shift power from old to young, from doctor to nurse, or from medical staff to administration. Few people appreciate loss of power, but even fewer will admit that the loss of power is why they resist the new system. Thus, we must work hard to bring this into the open and help people realize that a new system doesn't have to threaten their positions. 
Again, knowledge and understanding of a hospital's organizational structure, both official and hidden, is necessary if the system's introduction is to be successful.",2002,0, 2045,Dynamic simulation inspection test for protection equipment in China,"Testing the protection equipment on dynamic simulation test systems is very valuable not only for finding software bugs and design deficiencies of protection but also for performance and characteristics understanding. To strictly test different kinds of protection relays, the test systems must be designed carefully by selecting the connection and parameters of the system. The test items are designed specially for every kind of protection. This paper introduces some special technical requirements in China, and important test items for different protection schemes. Several ideas and algorithms of some protections are also compared in this paper, such as power swing blocking methods in line protection, current transformer saturation countermeasures in transformer and busbar protection, inrush current blocking methods in transformer protection, current transformer single-phase open detection methods, etc. Several problems appeared in the test, and the reasons for these problems are described and analyzed in this paper.",2002,0, 2046,Validation of mission critical software design and implementation using model checking [spacecraft],"Over the years, the complexity of space missions has dramatically increased with more of the critical aspects of a spacecraft's design being implemented in software. With the added functionality and performance required by the software to meet system requirements, the robustness of the software must be upheld. Traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development in this growing environment. It is becoming increasingly difficult to establish traditional software validation practices that confidently confirm the robustness of the design in balance with cost and schedule needs of the project. As a result, model checking is emerging as a powerful validation technique for mission critical software. Model checking conducts an exhaustive exploration of all possible behaviors of a software system design and as such can be used to detect defects in designs that are typically difficult to discover with conventional testing approaches.",2002,0, 2047,Applying checkers to improve the correctness and integrity of software [Air Force systems],"Testing of mission-critical systems to a high degree of reliability has been a long time problem for the Air Force. As a result, system failures may occur in the field due to faults that result from unusual environmental conditions or unexpected sequences of events that were never encountered in the laboratory. To improve the validation and test process and deliver more reliable systems, under the Air Force self-checking embedded information system software (SCEISS) program, we have performed several demonstrations of self-checking systems that continuously monitor themselves to report suspicious events and software faults. Based on theoretical University results, the SCEISS program has demonstrated improvements in the reliability and quality of software-intensive systems through employing self-checking techniques. 
This paper presents metrics that show the value of checkers in finding errors earlier with less cost and effort on several Raytheon applications and describes a checker library that is under development to automate the inclusion of checkers in a software integration environment.",2002,0, 2048,Integrating reliability and timing analysis of CAN-based systems,"This paper presents and illustrates a reliability analysis method developed with a focus on controller-area-network-based automotive systems. The method considers the effect of faults on schedulability analysis and its impact on the reliability estimation of the system, and attempts to integrate both to aid system developers. The authors illustrate the method by modeling a simple distributed antilock braking system, and showing that even in cases where the worst case analysis deems the system unschedulable, it may be proven to satisfy its timing requirements with a sufficiently high probability. From a reliability and cost perspective, this paper underlines the tradeoffs between timing guarantees, the level of hardware and software faults, and per-unit cost.",2002,0,1625 2049,Concurrent error detection schemes for fault-based side-channel cryptanalysis of symmetric block ciphers,"Fault-based side-channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy-based concurrent error detection (CED) architectures can be used to thwart such attacks, they entail significant overheads (either area or performance). The authors investigate systematic approaches to low-cost low-latency CED techniques for symmetric encryption algorithms based on inverse relationships that exist between encryption and decryption at algorithm level, round level, and operation level and develop CED architectures that explore tradeoffs among area overhead, performance penalty, and fault detection latency. The proposed techniques have been validated on FPGA implementations of Advanced Encryption Standard (AES) finalist 128-bit symmetric encryption algorithms.",2002,0, 2050,No Java without caffeine: A tool for dynamic analysis of Java programs,"To understand the behavior of a program, a maintainer reads some code, asks a question about this code, conjectures an answer, and searches the code and the documentation for confirmation of her conjecture. However, the confirmation of the conjecture can be error-prone and time-consuming because the maintainer has only static information at her disposal. She would benefit from dynamic information. In this paper, we present Caffeine, an assistant that helps the maintainer in checking her conjecture about the behavior of a Java program. Our assistant is a dynamic analysis tool that uses the Java platform debug architecture to generate a trace, i.e., an execution history, and a Prolog engine to perform queries over the trace. We present a usage scenario based on the n-queens problem, and two real-life examples based on the Singleton design pattern and on the composition relationship.",2002,0, 2051,Model-based tests of truisms,"Software engineering (SE) truisms capture broadly-applicable principles of software construction. The trouble with truisms is that such general principles may not apply in specific cases. This paper tests the specificity of two SE truisms: (a) increasing software process level is a desirable goal; and (b) it is best to remove errors during the early parts of a software lifecycle. 
Our tests are based on two well-established SE models: (1) Boehm et.al.'s COCOMO II cost estimation model; and (2) Raffo's discrete event software process model of a software project life cycle. After extensive simulations of these models, the TAR2 treatment learner was applied to find the model parameters that most improved the potential performance of the real-world systems being modelled. The case studies presented here showed that these truisms are clearly sub-optimal for certain projects since other factors proved to be far more critical. Hence, we advise against truism-based process improvement. This paper offers a general alternative framework for model-based assessment of methods to improve software quality: modelling + validation + simulation + sensitivity. That is, after recording what is known in a model, that model should be validated, explored using simulations, then summarized to find the key factors that most improve model behavior.",2002,0, 2052,Combining and adapting software quality predictive models by genetic algorithms,"The goal of quality models is to predict a quality factor starting from a set of direct measures. Selecting an appropriate quality model for a particular software is a difficult, non-trivial decision. In this paper, we propose an approach to combine and/or adapt existing models (experts) in such way that the combined/adapted model works well on the particular system. Test results indicate that the models perform significantly better than individual experts in the pool.",2002,0, 2053,Predicting software stability using case-based reasoning,"Predicting stability in object-oriented (OO) software, i.e., the ease with which a software item can evolve while preserving its design, is a key feature for software maintenance. We present a novel approach which relies on the case-based reasoning (CBR) paradigm. Thus, to predict the chances of an OO software item breaking downward compatibility, our method uses knowledge of past evolution extracted from different software versions. A comparison of our similarity-based approach to a classical inductive method such as decision trees, is presented which includes various tests on large datasets from existing software.",2002,0, 2054,Distance dependent thresholding search for fast motion estimation in real world video coding application,"This paper presents a distance dependent thresholding search (DTS) block motion estimation algorithm that employs the novel concept of distance dependent thresholds. The key feature of this algorithm is its flexibility, trading-off quality and complexity with threshold variation. Whereas the performance of the existing algorithms is fixed in terms of prediction quality as well as complexity, DTS can be used as full search (FS), where high quality video entertainments require motion estimation with small prediction error, as well as fast motion estimation such as three-step search (TSS), new-three-step search (NTSS) etc. while real-time video applications, such as the speed-oriented video conferencing require fast motion estimation with sacrificing quality. Experimental results show that this DTS algorithm also achieves better peak signal-to-noise ratio (PSNR), as well as lower search times in comparison to both the TSS and NTSS algorithms.",2002,0, 2055,Asymptotics of quickest change detection procedures under a Bayesian criterion,"The optimal detection procedure for detecting changes in independent and identically distributed sequences (i.i.d.) 
in a Bayesian setting was derived by Shiryaev in the nineteen sixties. However, the analysis of the performance of this procedure in terms of the average detection delay and false alarm probability has been an open problem. In this paper, we investigate the performance of Shiryaev's procedure in an asymptotic setting where the false alarm probability goes to zero. The asymptotic study is performed not only in the i.i.d. case where Shiryaev's procedure is optimal but also in a general, non-i.i.d. case. In the latter case, we show that Shiryaev's procedure is asymptotically optimum under mild conditions. We also show that the two popular non-Bayesian detection procedures, namely the Page and Shiryaev-Roberts-Pollak procedures, are not optimal (even asymptotically) under the Bayesian criterion. The results of this study are shown to be especially important in studying the asymptotics of decentralized quickest change detection procedures.",2002,0, 2056,The design and implementation of the intel® real-time performance analyzer,"Modern PCs support growing numbers of concurrently active independently authored real-time software applications and device drivers. The non realtime nature of PC OSes (Linux, Microsoft Windows, etc.) means that robust real-time software must cope with hold-offs without degradation in user perceivable application quality of service. The open nature of the PC platform necessitates measuring OS interrupt and thread latencies under concurrent load in order to determine with how much hold-off the application must cope. The Intel® Real-Time Performance Analyzer is a toolkit for PCs running Microsoft Windows. The toolkit statistically characterizes thread and interrupt latencies plus Windows Deferred Procedure Call (DPC) and kernel Work Item latencies. The toolkit also has facilities for analyzing the causes of long latencies. These latencies can then be incorporated as additional blocking times in a real-time schedulability analysis. An isochronous workload tool is included to model thread and DPC based computation and detect missed deadlines.",2002,0, 2057,Reducing test application time through interleaved scan,This paper proposes a new method for reducing the test length for digital circuits by adopting an architecture derived from the popular scan approach. An evolutionary optimization algorithm is exploited to find the optimal solution. The proposed approach was tested on the ISCAS89 standard benchmarks and the experimental results show its effectiveness.,2002,0, 2058,A study on a garbage collector for embedded applications,"In general, embedded systems, such as cellular phones and PDAs are provided with small amounts of memory and a low power processor that is slower than desktop ones. Despite these limited resources, present technology allows designers to integrate, in a single chip, an entire system. In this scenario, software development for embedded systems is an error-prone operation. In order to develop better code in less time, Java technology has gained a lot of interest from developers of embedded systems in the last few years, mainly because of its portability, code reuse, and object-oriented paradigm. On the other hand, Java requires an automatic memory management system in Java processors. This paper presents a garbage collection technique based on a software approach for an embedded Java processor. This technique is targeted for applications that are used in portable embedded systems. 
This paper discusses the most suited algorithm for such applications, showing also some performance overhead results.",2002,0, 2059,Performance management in component-oriented systems using a Model Driven Architectureâ„?approach,"Developers often lack the time or knowledge to profoundly understand the performance issues in largescale component-oriented enterprise applications. This situation is further complicated by the fact that such applications are often built using a mix of in-house and commercial-off-the-shelf (COTS) components. This paper presents a methodology for understanding and predicting the performance of component-oriented distributed systems both during development and after they have been built. The methodology is based on three conceptually separate parts: monitoring, modelling and performance prediction. Performance predictions are based on UML models created dynamically by monitoring-and-analysing a live or under-development system. The system is monitored using non-intrusive methods and run-time data is collected. In addition, static data is obtained by analysing the deployment configuration of the target application. UML models enhanced with performance indicators are created based on both static and dynamic data, showing performance hot spots. To facilitate the understanding of the system, the generated models are traversable both horizontally at the same abstraction level between transactions, and vertically between different layers of abstraction using the concepts defined by the Model Driven Architecture. The system performance is predicted and performance-related issues are identified in different scenarios by generating workloads and simulating the performance models. Work is under way to implement a framework for the presented methodology with the current focus on the Enterprise Java Beans technology.",2002,0, 2060,On the evaluation of JavaSymphony for cluster applications,"In the past few years, increasing interest has been shown in using Java as a language for performance-oriented distributed and parallel computing. Most Java-based systems that support portable parallel and distributed computing either require the programmer to deal with intricate low level details of Java which can be a tedious, time-consuming and error-prone task, or prevent the programmer from controlling locality of data. In contrast to most existing systems, JavaSymphony - a class library written entirely in Java - allows to control parallelism, load balancing and locality at a high level. Objects can be explicitly distributed and migrated based on virtual architectures which impose a virtual hierarchy on a distributed/parallel system of physical computing nodes. The concept of blocking/nonblocking remote method invocation is used to exchange data among distributed objects and to process work by remote objects. We evaluate the JavaSymphony programming API for a variety of distributed/parallel algorithms which comprises backtracking, N-body, encryption/decryption algorithms and asynchronous nested optimization algorithms. Performance results are presented for both homogeneous and heterogeneous cluster architectures. Moreover, we compare JavaSymphony with an alternative well-known semi-automatic system.",2002,0, 2061,Using a pulsed supply voltage for delay faults testing of digital circuits in a digital oscillation environment,"High-performance digital circuits with aggressive timing constraints are usually very susceptible to delay faults. 
Much research done on delay fault detection needs a rather complicated test setup together with precise test clock requirements. In this paper, we propose a test technique based on the digital oscillation test method. The technique, which was simulated in software, consists of sensitizing a critical path in the digital circuit under test and incorporating the path into an oscillation ring. The supply voltage to the oscillation ring is then varied to detect delay and stuck-at faults in the path.",2002,0, 2062,Process activities in a project based course in software engineering,"""Studio in Software Engineering"" is a curriculum component for the undergraduate-level software engineering program at Ecole Polytechnique de Montreal. The main teaching objective is to develop in students a professional attitude towards producing high quality software. The course is based on a project approach in a collaborative learning environment. The software development process used is based on the Unified Process for EDUcation, which is customized from the Rational Unified Process. An insight into the dynamics of three teams involved in the development of the same project allows us to present and interpret data concerning the effort spent by students during particular process activities. The contribution of this paper is to illustrate an approach involving qualitative analysis of the effort spent by the students on each software process activity. Such an approach may allow the development of a model that would lead to effort prediction within a software process in order to designate the actions for improving academic projects.",2002,0, 2063,Self-commissioning training algorithms for neural networks with applications to electric machine fault diagnostics,"The main limitations of neural network (NN) methods for fault diagnostics applications are training data and data memory requirements, and computational complexity. Generally, a NN is trained offline with all the data obtained prior to commissioning, which is not possible in a practical situation. In this paper, three novel and self-commissioning training algorithms are proposed for online training of a feedforward NN to effectively address the aforesaid shortcomings. Experimental results are provided for an induction machine stator winding turn-fault detection scheme, to illustrate the feasibility of the proposed online training algorithms for implementation in a commercial product.",2002,0, 2064,Review of condition assessment of power transformers in service,"As transformers age, their internal condition degrades, which increases the risk of failure. To prevent these failures and to maintain transformers in good operating condition is a very important issue for utilities. Traditionally, routine preventative maintenance programs combined with regular testing were used. The change to condition-based maintenance has resulted in the reduction, or even elimination, of routine time-based maintenance. Instead of doing maintenance at a regular interval, maintenance is only carried out if the condition of the equipment requires it. Hence, there is an increasing need for better nonintrusive diagnostic and monitoring tools to assess the internal condition of the transformers. If there is a problem, the transformer can then be repaired or replaced before it fails. 
An extensive review is given of diagnostic and monitoring tests, and equipment available that assess the condition of power transformers and provide an early warning of potential failure.",2002,0, 2065,Is prior knowledge of a programming language important for software quality?,"Software engineering is human intensive. Thus, it is important to understand and evaluate the value of different types of experiences, and their relation to the quality of the developed software. Many job advertisements focus on requiring knowledge of specific programming languages. This may seem sensible at first sight, but maybe it is sufficient to have general knowledge in programming and then it is enough to learn a specific language within the new job. A key question is whether prior knowledge actually does improve software quality. This paper presents an empirical study where the programming experience of students is assessed using a survey at the beginning of a course on the Personal Software Process (PSP), and the outcome of the course is evaluated, for example, using the number of defects and development time. Statistical tests are used to analyse the relationship between programming experience and the performance of the students in terms of software quality. The results are mostly unexpected, for example, we are unable to show any significant relation between experience in the programming language used and the number of defects detected.",2002,0, 2066,An approach to experimental evaluation of software understandability,"Software understandability is an important characteristic of software quality because it can influence cost or reliability of software evolution in reuse or maintenance. However, it is difficult to evaluate software understandability in practice because understanding is an internal process of humans. This paper proposes ""software overhaul"" as a method for externalizing the process of understanding and presents a probability model to use process data of overhauling to estimate software understandability. An example describes an overhaul tool and its application.",2002,0, 2067,Estimating mixed software reliability models based on the EM algorithm,"This paper considers the mixture models to unify software reliability models (SRMs). The modeling framework is based on the randomization of a software fault detection rate. Our models, called the mixed SRMs in this paper, can include some existing SRMs such as Littlewood NHPP model and the hyperexponential SRM. We develop an efficient parameter estimation algorithm for the mixed SRMs based on the EM (Expectation-Maximization) principle, and investigate both the effectiveness of the EM algorithm and the validity of the mixed SRMs through the numerical examples with real software failure data.",2002,0, 2068,An approach for estimation of software aging in a Web server,"A number of recent studies have reported the phenomenon of ""software aging"", characterized by progressive performance degradation or a sudden hang/crash of a software system due to exhaustion of operating system resources, fragmentation and accumulation of errors. To counteract this phenomenon, a proactive technique called ""software rejuvenation"" has been proposed. This essentially involves stopping the running software, cleaning its internal state and then restarting it. Software rejuvenation, being preventive in nature, begs the question as to when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results. 
A better approach is based on actual measurement of system resource usage and activity that detects and estimates resource exhaustion times. Estimating the resource exhaustion times makes it possible for software rejuvenation to be initiated or better planned so that the system availability is maximized in the face of time-varying workload and system behavior. We propose a methodology based on time series analysis to detect and estimate resource exhaustion times due to software aging in a Web server while subjecting it to an artificial workload. We first collect and log data on several system resource usage and activity parameters on a Web server. Time-series ARMA models are then constructed from the data to detect aging and estimate resource exhaustion times. The results are then compared with previous measurement-based models and found to be more efficient and computationally less intensive. These models can be used to develop proactive management techniques like software rejuvenation which are triggered by actual measurements.",2002,0, 2069,The detection of faulty code violating implicit coding rules,"In the field of legacy software maintenance, there unexpectedly arises a large number of implicit coding rules, which we regard as a cancer in software evolution. Since such rules are usually undocumented and each of them is recognized only by a few members in a maintenance team, a person who is not aware of a rule often violates it while doing various maintenance activities such as adding a new functionality or repairing faults. The problem here is not only such a violation introduces a new fault but also the same kind of fault will be generated again and again in the future by different maintainers. This paper proposes a method for detecting code fragments that violate implicit coding rules. In the method, an expert maintainer, firstly, investigates the cause of each failure, described in the past failure reports, and identifies all the implicit coding rules that lie behind the faults. Then, the code patterns violating the rules (which we call ""faulty code patterns"") are described in a pattern description language. Finally, the potential faulty code fragments are automatically detected by a pattern matching technique. The result of a case study with large legacy software showed that 32.7% of the failures, which have been reported during a maintenance process, were due to the violation of implicit coding rules. Moreover, 152 faults existed in 772 code fragments detected by the prototype matching system, while 111 of them were not reported.",2002,0, 2070,Elimination of crucial faults by a new selective testing method,"Recent software systems contain a lot of functions to provide various services. According to this tendency, software testing becomes more difficult than before and cost of testing increases so much, since many test items are required. In this paper we propose and discuss such a new selective software testing method that is constructed from previous testing method by simplifying testing specification. We have presented, in the previous work, a selective testing method to perform highly efficient software testing. The selective testing method has introduced an idea of functional priority testing and generated test items according to their functional priorities. Important functions with high priorities are tested in detail, and functions with low priorities are tested less intensively. As a result, additional cost for generating testing instructions becomes relatively high. 
In this paper, in order to reduce this cost, we change the way of giving priority information. The new method assigns only the priority, rather than generating testing instructions for each test item, which makes the testing method quite simple and results in cost reduction. Except for this change, the new method is essentially the same as the previous method. We applied this new method to the actual development of a software tool and evaluated its effectiveness. From the result of the application experiment, we confirmed that many crucial faults can be detected by using the proposed method.",2002,0, 2071,Empirical validation of class diagram metrics,"As a key early artefact in the development of OO software, the quality of class diagrams is crucial for all later design work and could be a major determinant for the quality of the software product that is finally delivered. Quantitative measurement instruments are useful to assess class diagram quality in an objective way, thus avoiding bias in the quality evaluation process. This paper presents a set of metrics - based on UML relationships - which measure UML class diagram structural complexity following the idea that it is related to the maintainability of such diagrams. Also summarized are two controlled experiments carried out in order to gather empirical evidence in this sense. As a result of all the experimental work, we can conclude that most of the metrics we proposed (NAssoc, NAgg, NaggH, MaxHAgg, NGen, NgenH and MaxDIT) are good indicators of class diagram maintainability. We cannot, however, draw such firm conclusions regarding the NDep metric.",2002,0, 2072,Modeling the cost-benefits tradeoffs for regression testing techniques,"Regression testing is an expensive activity that can account for a large proportion of the software maintenance budget. Because engineers add tests into test suites as software evolves, over time, increased test suite size makes revalidation of the software more expensive. Regression test selection, test suite reduction, and test case prioritization techniques can help with this, by reducing the number of regression tests that must be run and by helping testers meet testing objectives more quickly. These techniques, however, can be expensive to employ and may not reduce overall regression testing costs. Thus, practitioners and researchers could benefit from cost models that would help them assess the cost-benefits of techniques. Cost models have been proposed for this purpose, but some of these models omit important factors, and others cannot truly evaluate cost-effectiveness. In this paper, we present new cost-benefits models for regression test selection, test suite reduction, and test case prioritization, that capture previously omitted factors, and support cost-benefits analyses where they were not supported before. We present the results of an empirical study assessing these models.",2002,0, 2073,Combining software quality predictive models: an evolutionary approach,"During the last ten years, a large number of quality models have been proposed in the literature. In general, the goal of these models is to predict a quality factor starting from a set of direct measures. The lack of data behind these models makes it hard to generalize, cross-validate, and reuse existing models. As a consequence, for a company, selecting an appropriate quality model is a difficult, non-trivial decision. In this paper, we propose a general approach and a particular solution to this problem. 
The main idea is to combine and adapt existing models (experts) in such a way that the combined model works well on the particular system or in the particular type of organization. In our particular solution, the experts are assumed to be decision tree or rule-based classifiers and the combination is done by a genetic algorithm. The result is a white-box model: for each software component, not only does the model give a prediction of the software quality factor, it also provides the expert that was used to obtain the prediction. Test results indicate that the proposed model performs significantly better than individual experts in the pool.",2002,0, 2074,A table reduction approach for software structure testing,"One major aspect of software structural testing emphasizes the coverage criterion. Not only procedure-oriented programs, but also member functions of object-oriented programs can be dealt with by structure testing. A program with higher degree of testing coverage indicates that the program might achieve better reliability. There is a conventional metric for testing coverage measurement. However, the metric is affected inadvertently by including undesirable entities. The measurement of coverage rate increases rapidly when the initial groups of test data are executed, which results in the overestimation of the software reliability. The goal of this research is to improve the conventional testing coverage metric, then to use it to precisely estimate the ratio of elements tested. In this paper, a set of rules is developed to identify dominant tested elements. Measuring the coverage rate of the dominant elements can rationalize the measurement results.",2002,0, 2075,Cost-sensitive boosting in software quality modeling,"Early prediction of the quality of software modules prior to software testing and operations can yield great benefits to the software development teams, especially those of high-assurance and mission-critical systems. Such an estimation allows effective use of the testing resources to improve the modules of the software system that need it most and achieve high reliability. To achieve high reliability, by the means of predictive methods, several tools are available. Software classification models provide a prediction of the class of a module, i.e., fault-prone or not fault-prone. Recent advances in the data mining field allow to improve individual classifiers (models) by using the combined decision from multiple classifiers. This paper presents a couple of algorithms using the concept of combined classification. The algorithms provided useful models for software quality modeling. A comprehensive comparative evaluation of the boosting and cost-boosting algorithms is presented. We demonstrate how the use of boosting algorithms (original and cost-sensitive) meets many of the specific requirements for software quality modeling. C4.5 decision trees and decision stumps were used to evaluate these algorithms with two large-scale case studies of industrial software systems.",2002,0, 2076,Discrete availability models to rejuvenate a telecommunication billing application,"Software rejuvenation is a proactive fault management technique that has been extensively studied in the recent literature. We focus on an example for a telecommunication billing application considered in Huang et al. (1995) and develop the discrete-time stochastic models to estimate the optimal software rejuvenation schedule. 
More precisely, two software availability models with rejuvenation are formulated via the discrete semi-Markov processes, and the optimal software rejuvenation schedules which maximize the steady-state availabilities are derived analytically. Further, we develop statistically nonparametric algorithms to estimate the optimal software rejuvenation schedules, provided that the complete sample data of failure times are given. Then, a new statistical device, called discrete total time on test statistics, is introduced. Finally, we examine asymptotic properties for the statistical estimation algorithms proposed in this paper through a simulation experiment.",2002,0, 2077,Object-oriented system decomposition quality,"Object-oriented design is becoming very popular in today's software development. An object-oriented information system is decomposed into subjects; each subject is decomposed into classes of objects. Good object-oriented system design should exhibit high cohesion inside subjects and low coupling among subjects. Yet, few quantitative studies of the actual use of cohesion and coupling have been conducted at the system level. These two concepts are defined qualitatively, and only at the class level, not at the system level. In this paper, metrics are introduced for cohesion and coupling and used to define a quality metric at the system level. The feasibility of the approach is demonstrated by an example using a real information system.",2002,0, 2078,"Automatic failure detection, logging, and recovery for high-availability Java servers","Many systems and techniques exist for detecting application failures. However, previously known generic failure detection solutions are only of limited use for Java applications because they do not take into consideration the specifics of the Java language and the Java execution environment. In this article, we present the application-independent Java Application Supervisor (JAS). JAS can automatically detect, log, and resolve a variety of execution problems and failures in Java applications. In most cases, JAS requires neither modifications nor access to the source code of the supervised application. A set of simple user-specified policies guides the failure detection, logging, and recovery process in JAS. A JAS configuration manager automatically generates default policies from the bytecode of an application. The user can modify these default policies as needed. Our experimental studies show that JAS typically incurs little execution time and memory overhead for the target application. We describe an experiment with a Web proxy that exhibits reliability and performance problems under heavy load and demonstrate an increase in the rate of successful requests to the server by almost 33% and a decrease in the average request processing time by approximately 22% when using JAS.",2002,0, 2079,On estimating testing effort needed to assure field quality in software development,"In practical software development, software quality is generally evaluated by the number of residual defects. To keep the number of residual defects within a permissible value, too much effort is often assigned to software testing. We try to develop a statistical model to determine the amount of testing effort which is needed to assure the field quality. The model explicitly includes design, review, and test (including debug) activities. 
Firstly, we construct a linear multiple regression model that can clarify the relationship among the number of residual defects and the efforts assigned to design, review, and test activities. We then confirm the applicability of the model by statistical analysis using actual project data. Next, we obtain an equation based on the model to determine the test effort. As parameters in the equation, the permissible number of residual defects, the design effort, and the review effort are included. Then, the equation determines the test effort that is needed to assure the permissible residual defects. Finally, we conduct an experimental evaluation using actual project data and show the usefulness of the equation.",2002,0, 2080,Dependability analysis of a client/server software system with rejuvenation,"Long running software systems are known to experience an aging phenomenon called software aging, one in which the accumulation of errors during the execution of software leads to performance degradation and eventually results in failure. To counteract this phenomenon an active fault management approach, called software rejuvenation, is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. We deal with dependability analysis of a client/server software system with rejuvenation. Three dependability measures in the server process, steady-state availability, loss probability of requests and mean response time on tasks, are derived from the well-known hidden Markovian analysis under the time-based software rejuvenation scheme. In numerical examples, we investigate the sensitivity of some model parameters to the dependability measures.",2002,0, 2081,Toward semi-automatic construction of training-corpus for text classification,"Text classification is becoming more and more important with the rapid growth of on-line information available. It was observed that the quality of the training corpus impacts the performance of the trained classifier. This paper proposes an approach to build high-quality training corpuses for better classification performance by first exploring the properties of training corpuses, and then giving an algorithm for constructing training corpuses semi-automatically. Preliminary experimental results validate our approach: classifiers based on the training corpuses constructed by our approach can achieve good performance while the training corpus' size is significantly compressed. Our approach can be used for building an efficient and lightweight classification system.",2002,0, 2082,Automatic synthesis of dynamic fault trees from UML system models,"The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is worthwhile to estimate system reliability at the conceptual design stage, since reliability can influence the subsequent design decisions and may often be pivotal for making trade-offs or in establishing system cost. In this paper we describe a framework for modeling computer-based systems, based on the Unified Modeling Language (UML), that facilitates automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is developed. We succeed both in embedding information needed for reliability analysis within the system model and in generating the DFT Thereafter, we evaluate our approach using examples of real systems. 
We analytically compute system unreliability from the algorithmically developed DFT and we compare our results with the analytical solution of manually developed DFTs. Our solutions produce the same results as manually generated DFTs.",2002,0, 2083,Mutation of Java objects,"Fault insertion based techniques have been used for measuring test adequacy and testability of programs. Mutation analysis inserts faults into a program with the goal of creating mutation-adequate test sets that distinguish the mutant from the original program. Software testability is measured by calculating the probability that a program will fail on the next test input coming from a predefined input distribution, given that the software includes a fault. Inserted faults must represent plausible errors. It is relatively easy to apply standard transformations to mutate scalar values such as integers, floats, and character data, because their semantics are well understood. Mutating objects that are instances of user defined types is more difficult. There is no obvious way to modify such objects in a manner consistent with realistic faults, without writing custom mutation methods for each object class. We propose a new object mutation approach along with a set of mutation operators and support tools for inserting faults into objects that instantiate items from common Java libraries heavily used in commercial software as well as user defined classes. Preliminary evaluation of our technique shows that it should be effective for evaluating real-world software testing suites.",2002,0, 2084,A research on multi-level networked RAID based on cluster architecture,"Storage networks is a popular solution to constraint servers in storage field. As described by Gibson's metrics, the performance of multi-level networked RAID (redundant arrays of inexpensive disks) based on cluster is almost the same to that of improved 2D-parity. Compared with other schemes, it is lower cost and easier to realize.",2002,0, 2085,Numerical simulation of car crash analysis based on distributed computational environment,"Automobile CAE software is mainly used to assess the performance quality of vehicles. As the automobile is a product of technology intensive complexity, its design analysis involves a broad range of CAE simulation techniques. An integrated CAE solution of automobiles can include comfort analysis (vibration and noise analysis), safety analysis (car body collision analysis), process-cycle analysis, structural analysis, fatigue analysis, fluid dynamics analysis, test analysis, material data information system and system integration. We put an emphasis on simulation of a whole automobile collision process, which will bring a breakthrough to the techniques of CAE simulation based on high performance computing. In addition, we carry out simulation for a finite-element car model in a distributed computation environment and accomplish coding-and-programming of DAYN3D. We also provide computational examples and a user handbook. Our research collects almost ten numerical automobile models such as Honda, Ford, etc. Moreover, we also deal with different computational scales for the same auto model and some numerical models of air bags are included. Based on the numerical auto model, referring to different physical parameters and work conditions of the auto model, we can control the physical parameters for the numerical bump simulation and analyze the work condition. 
The result of our attempt contributes to the development of new auto models.",2002,0, 2086,Immune mechanism based computer security design,"Referring to the mechanism of biological immune system, a novel model of computer security system is proposed, which is a dynamic, multi-layered and co-operational system. Through dynamically supervising abnormal behaviors with multi agent immune systems, a two-level defense system is set up for improving the whole performance: one is based on a host and mainly used for detecting viruses; and the other is based on a network for supervising potential attacks. On the other hand, a pseudo-random technology is adopted for designing the sub-system of data transmission, in order to increase the ability of protecting information against intended interference and monitoring. Simulations on information transmission show that this system has good robustness, error tolerance and self-adaptiveness, although more practice is needed.",2002,0, 2087,Integrated analysis and design method for distributed real-time systems based on computational grid,"Advances in networking infrastructure have led to the development of a new type of ""computational grid"" infrastructure that provides predictable, consistent and uniform access to geographically distributed resources such as computers, data repositories, scientific instruments, and advanced display devices. Such Grid environments are being used to construct sophisticated, performance-sensitive applications in such areas as dynamic, distributed real-time applications. We propose an integrated approach for analyzing and designing distributed real-time systems based on the computational grid. The proposed approach is based on a new methodology and integrates the models that the methodology proposed for analyzing and designing real-time systems.",2002,0, 2088,An integrated approach for automation of distribution system,"Normally the distribution automation functions are studied individually. In this paper the step-by-step approach to automate the distribution system is discussed. It can be applied to existing distribution systems as well as for new design of distribution systems. This integrated approach begins with radial distribution systems, preparation of data base, load flow, location of sectionalizing, tie switches, capacitor switching system reconfiguration, fault identification, supply restoration and metering at substation. Current and voltage signals are sampled and a digital signal processing algorithm is used for phasor estimation. The status of DA system equipment can also be viewed and monitored by user-friendly software.",2002,0, 2089,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis on rotating machines was designed and realized. Vibration signals are on-line acquired and processed to obtain a continuous monitoring of the machine status. In case of a fault, the system is capable of isolating the fault with a high reliability. The paper describes in detail the approach followed to build up fault and non-fault models together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlight the excellent promptness in detecting faults, low false alarm rate, and very good diagnostic performance.",2002,0,1796 2090,Basic theory of adaptive data transmission,"This paper presents the basic theory of adaptive data transmission. 
First we describe a bit stream which has to be transmitted on line or in the air. In the transmission of the bit stream we use digital modulation methods symbol by symbol. We describe one symbol with a waveform that contains several bits. The adaptive modulation method is used. It means that we can adapt the generated symbol waveform to the analogue channel used. The modulation is made by a software algorithm which converts each symbol to a specific amplitude-phase constellation point of the known carrier and uses a proper symbol time. Depending on the channel bandwidth B (wired or radio) we have one or several transmission carriers in use for the optimal Shannon capacity. Depending on the channel characteristics or signal to noise ratio we can select the amplitude-phase constellation optimally. We describe then the detection and the demodulation of the symbol waveform and use there the discrete Fourier transform (DFT). First the received waveform is sampled and the number of samples is selected by the sampling frequency. We evaluate the proper sample number used in the DFT calculations, which give the estimates of the transmitted symbols of each carrier (amplitude, phase and frequency). Finally we summarize our results using the Shannon capacity formula B*log2(S/N+1) and bit error rate (BER) calculations in the evaluations. We discuss the adaptive modulation method properties and functionality found in the simulations and field tests. We find that the high performance of the adaptive modulation method is generated by the waveforms designed in such a way that they fully occupy the channel in use. The full capacity of any channel is used by selection of proper multiple carriers (bandwidth). The full QoS (S/N) of the channel is applied by the selection of the best amplitude-phase constellation and symbol time selection. These features of the modulation method make it adaptive, which we could design only using software modem technology. The result is the full practical Shannon capacity for any analog transmission case. The channel may be a radio, satellite, or wired telecommunication channel which has a limited bandwidth for transmission. Also any coded digital voice transmission can be used for data transmission with the adaptive modem because it produces and decodes data as a voice signal designed within the wanted bandwidth. We believe that the adaptive modem is a cornerstone for modern data transmission and thus it may be used in many applications.",2002,0, 2091,Portable real-time protocols to make real-time communications affordable,"The cost and capabilities of networking technology are rapidly improving. As a result, the world is rapidly becoming networked for virtually every application. As networking becomes more important, there is a growing corresponding demand for networks to support real-time applications. Current planning for future military engagement networks serves as an example. Taking advantage of the new communications technology involves both technical and economic issues. Economic issues will slow the pace of network deployment especially for real-time systems. This paper offers a solution to make real-time communications affordable. For the purpose of this presentation we describe real-time applications as applications that have demanding performance requirements. 
This includes applications where timeline requirements must be predictable (deterministic) particularly those that ""require"" rapid response to an input, those that are safety critical, those that require a high degree of fault tolerance, those that ensure quality of service, and those with custom security considerations. The deterministic requirements of real-time software are generally categorized as applications that must meet deadlines most of the time (soft real-time) or 100 percent of the time within human and hardware limitations (hard real-time). The bottom line is that real-time is not defined as highest speed but rather on predictability within defined timing limits.",2002,0, 2092,A fault-tolerant approach to secure information retrieval,"Several private information retrieval (PIR) schemes were proposed to protect users' privacy when sensitive information stored in database servers is retrieved. However, existing PIR schemes assume that any attack to the servers does not change the information stored and any computational results. We present a novel fault-tolerant PIR scheme (called FT-PIR) that protects users' privacy and at the same time ensures service availability in the presence of malicious server faults. Our scheme neither relies on any unproven cryptographic assumptions nor the availability of tamper-proof hardware. A probabilistic verification function is introduced into the scheme to detect corrupted results. Unlike previous PIR research that attempted mainly to demonstrate the theoretical feasibility of PIR, we have actually implemented both a PIR scheme and our FT-PIR scheme in a distributed database environment. The experimental and analytical results show that only modest performance overhead is introduced by FT-PIR while comparing with PIR in the fault-free cases. The FT-PIR scheme tolerates a variety of server faults effectively. In certain fail-stop fault scenarios, FT-PIR performs even better than PIR. It was observed that 35.82% less processing time was actually needed for FT-PIR to tolerate one server fault.",2002,0, 2093,Optimistic Byzantine agreement,"The paper considers the Byzantine agreement problem in a fully asynchronous network, where some participants may be actively malicious. This is an important building block for fault-tolerant applications in a hostile environment, and a non-trivial problem: An early result by Fischer et al. (1985) shows that there is no deterministic solution in a fully asynchronous network subject to even a single crash failure. The paper introduces an optimistic protocol that combines the two best known techniques to solve agreement, randomization and timing. The timing information is used only to increase performance; safety and liveness of the protocol are guaranteed independently of timing. Under certain ""normal"" conditions, the protocol decides quickly and deterministically without using public-key cryptography, approximately as fast as a timed protocol subject to crash failures does. Otherwise, a randomized fallback protocol ensures safety and liveness. For this, we present an optimized version of the randomized Byzantine agreement protocol of Cachin et al. 
(2000), which is computationally less expensive and not only tolerates malicious parties, but also some loss of messages; it might therefore be of independent interest.",2002,0, 2094,Algorithm to optimize the operational spectrum effectiveness (OSE) of wireless communication systems,"The advent of technically ""agile"" communication devices, such as software-defined radio (SDR), presents the military services with both a regulatory challenge and a technological opportunity for spectrum sharing. For OSAM, we have proposed a method to transform the technical parameters of a system into a set of nonlinear utility functions, which measure operational spectrum effectiveness (OSE). This paper describes our investigations into appropriate algorithms for optimizing the derived OSE metrics. By adapting equilibrium concepts from microeconomic theory, we demonstrate that, with the spectrum management authority acting as ""market arbiter"", OSE optimization for individual entities can lead to efficient group-level spectrum utilization solutions. We further contend that the solutions thus obtained will be of at least equal quality to those derived by traditional methods, with the added advantage of near real-time adaptability.",2002,0, 2095,Software measurement data analysis using memory-based reasoning,"The goal of accurate software measurement data analysis is to increase the understanding and improvement of software development process together with increased product quality and reliability. Several techniques have been proposed to enhance the reliability prediction of software systems using the stored measurement data, but no single method has proved to be completely effective. One of the critical parameters for software prediction systems is the size of the measurement data set, with large data sets providing better reliability estimates. In this paper, we propose a software defect classification method that allows defect data from multiple projects and multiple independent vendors to be combined together to obtain large data sets. We also show that once a sufficient amount of information has been collected, the memory-based reasoning technique can be applied to projects that are not in the analysis set to predict their reliabilities and guide their testing process. Finally, the result of applying this approach to the analysis of defect data generated from fault-injection simulation is presented.",2002,0, 2096,Software quality classification modeling using the SPRINT decision tree algorithm,"Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white box quality estimation models with good accuracy, and are simple and easy to interpret. This paper presents an in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm. Many classification algorithms have memory limitations including the requirement that data sets be memory resident. SPRINT removes all of these limitations and provides a fast and scalable analysis.
It is an extension of a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the minimum description length (MDL) principle. Combining the MDL pruning technique and the modified classification algorithm, SPRINT yields classification trees with useful prediction accuracy. The case study used comprises software metrics and fault data collected over four releases from a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability in comparison to those built by CART.",2002,0, 2097,A case for exploiting self-similarity of network traffic in TCP congestion control,"Analytical and empirical studies have shown that self-similar traffic can have a detrimental impact on network performance including amplified queuing delay and packet loss ratio. On the flip side, the ubiquity of scale-invariant burstiness observed across diverse networking contexts can be exploited to design better resource control algorithms. We explore the issue of exploiting the self-similar characteristics of network traffic in TCP congestion control. We show that the correlation structure present in long-range dependent traffic can be detected on-line and used to predict future traffic. We then devise a novel scheme, called TCP with traffic prediction (TCP-TP), that exploits the prediction result to infer, in the context of AIMD (additive increase, multiplicative decrease) steady-state dynamics, the optimal operational point for a TCP connection. Through analytical reasoning, we show that the impact of prediction errors on fairness is minimal. We also conduct ns-2 simulation and FreeBSD 4.1-based implementation studies to validate the design and to demonstrate the performance improvement in terms of packet loss ratio and throughput attained by connections.",2002,0, 2098,A framework for performability modeling of messaging services in distributed systems,"Messaging services are a useful component in distributed systems that require scalable dissemination of messages (events) from suppliers to consumers. These services decouple suppliers and consumers, and take care of client registration and message propagation, thus relieving the burden on the supplier. Recently, performance models for the configurable delivery and discard policies found in messaging services have been developed that can be used to predict response time distributions and discard probabilities under failure-free conditions. However, these messaging service models do not include the effect of failures. In a distributed system, supplier, consumer and messaging services can fail independently leading to different consequences. In this paper we consider the expected loss rate associated with messaging services as a performability measure and derive approximate closed-form expressions for three different quality of service settings. These measures provide a quantitative framework that allows different messaging service configurations to be compared and design trade-off decisions to be made.",2002,0, 2099,Fault detection effectiveness of spathic test data,"This paper presents an approach for generating test data for unit-level, and possibly integration-level, testing based on sampling over intervals of the input probability distribution, i.e., one that has been divided or layered according to criteria.
Our approach is termed ""spathic"" as it selects random values felt to be most likely or least likely to occur from a segmented input probability distribution. Also, it allows the layers to be further segmented if additional test data is required later in the test cycle. The spathic approach finds a middle ground between the more difficult to achieve adequacy criteria and random test data generation, and requires less effort on the part of the tester. It can be viewed as guided random testing, with the tester specifying some information about expected input. The spathic test data generation approach can be used to augment ""intelligent"" manual unit-level testing. An initial case study suggests that spathic test sets defect more faults than random test data sets, and achieve higher levels of statement and branch coverage.",2002,0, 2100,Maximum distance testing,"Random testing has been used for years in both software and hardware testing. It is well known that in random testing each test requires to be selected randomly regardless of the tests previously generated. However, random testing could be inefficient for its random selection of test patterns. This paper, based on random testing, introduces the concept of Maximum Distance Testing (MDT) for VLSI circuits in which the total distance among all test patterns is chosen maximal so that the set of faults detected by one test pattern is as different as possible from that of faults detected by the tests previously applied. The procedure for constructing a Maximum Distance Testing Sequence (MDTS) is described in detail. Experimental results on Benchmark as well as other circuits are also given to evaluate the performances of our new approach.",2002,0, 2101,Statistical analysis of time series data on the number of faults detected by software testing,"According to a progress of the software process improvement, the time series data on the number of faults detected by the software testing are collected extensively. In this paper, we perform statistical analyses of relationships between the time series data and the field quality of software products. At first, we apply the rank correlation coefficient τ to the time series data collected from actual software testing in a certain company, and classify these data into four types of trends: strict increasing, almost increasing, almost decreasing, and strict decreasing. We then investigate, for each type of trend, the field quality of software products developed by the corresponding software projects. As a result of statistical analyses, we showed that software projects having trend of almost or strict decreasing in the number of faults detected by the software testing could produce the software products with high quality.",2002,0, 2102,An analytic software testability model,"Software testability, which has been discussed in the past decade, has been defined as provisions that can be taken into consideration at the early step of software development. This paper gives software testability, previously defined by Voas, a new model and measurement without performing testing with respect to a particular input distribution.",2002,0, 2103,Robust playout mechanism for Internet audio applications,"In Internet audio applications, delay and delay jitter affect mostly the applications' quality of service. Since packet delays are different and changing over time, the receiver needs to buffer some amount of packets before playout. 
Therefore, the amount of buffered packets and the timing of playout are very important for the performance of applications. We adopt an autoregressive (AR) model for estimation of packet delay and deploy a robust identification algorithm for adjustment of the parameters of the AR process. In our preliminary experiments, this robust algorithm leads to better performance when the noise is correlated and/or non-stationary, and also it is robust to model uncertainties.",2002,0, 2104,Reliable file transfer in Grid environments,Grid-based computing environments are becoming increasingly popular for scientific computing. One of the key issues for scientific computing is the efficient transfer of large amounts of data across the Grid. In this paper we present a reliable file transfer (RFT) service that significantly improves the efficiency of large-scale file transfer. RFT can detect a variety of failures and restart the file transfer from the point of failure. It also has capabilities for improving transfer performance through TCP tuning.,2002,0, 2105,Performance guarantee for cluster-based Internet services,"As Web-based transactions become an essential element of everyday corporate and commerce activity, it becomes increasingly important for the performance of Web application services to be predictable and adequate even in the presence of wildly fluctuating input loads. In this work we propose a general implementation framework to provide quality of service (QoS) guarantee for cluster-based Web application services, such as e-commerce or directory services, that is largely independent of the Web application and the hardware/software platform used in the cluster. This paper describes the design, implementation, and evaluation of a Web request distribution system called Gage, which is able to guarantee a service subscriber a pre-defined number of generic Web requests serviced per second regardless of the total input loads at run time. Gage is one of the first, if not the first system that can support QoS guarantees which involves multiple system resources, i.e., CPU, disk, and network. The fully operational Gage prototype shows that the proposed architecture can indeed provide a guaranteed level of service for specific classes of Web accesses according to their QoS requirements in the presence of excessive input loads. In addition, empirical measurement on the Gage prototype demonstrates that the additional performance overhead associated with Gage's QoS guarantee support for Web service is merely 3.06%.",2002,0, 2106,Reliability evaluation of multi-state systems subject to imperfect coverage using OBDD,"This paper presents an efficient approach based on OBDD for the reliability analysis of a multi-state system subject to imperfect fault-coverage with combinatorial performance requirements. Since there exist dependencies between combinatorial performance requirements, we apply the multi-state dependency operation (MDO) of OBDD to deal with these dependencies in a multi-state system. In addition, this OBDD-based approach is combined with the conditional probability methods to find solutions for the multi-state imperfect coverage models. Using conditional probabilities, we can also apply this method for modular structures. The main advantage of this algorithm is that it will take computational time that is equivalent to the same problem without assuming imperfect coverage (i.e. with perfect coverage). 
This algorithm is very important for complex systems such as fault-tolerant computer systems, since it can obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation and common-cause failures.",2002,0, 2107,Definition of fault loads based on operator faults for DBMS recovery benchmarking,"The characterization of database management system (DBMS) recovery mechanisms and the comparison of recovery features of different DBMS require a practical approach to benchmark the effectiveness of recovery in the presence of faults. Existing performance benchmarks for transactional and database areas include two major components: a workload and a set of performance measures. The definition of a benchmark to characterize DBMS recovery needs a new component: the faultload. A major cause of failures in large DBMS is operator faults, which make them an excellent starting point for the definition of a generic faultload. This paper proposes the steps for the definition of generic faultloads based on operator faults for DBMS recovery benchmarking. A classification for operator faults in DBMS is proposed and a comparative analysis among three commercial DBMS is presented. The paper ends with a practical example of the use of operator faults to benchmark different configurations of the recovery mechanisms of the Oracle 8i DBMS.",2002,0, 2108,Delivering error detection capabilities into a field programmable device: the HORUS processor case study,"Designing a complete SoC or reusing SoC components to create a complete system is a common task nowadays. The flexibility offered by current design flows offers the designer an unprecedented capability to incorporate more and more demanded features like error detection and correction mechanisms to increase the system dependability. This is especially true for programmable devices, where rapid design and implementation methodologies are coupled with testing environments that are easily generated and used. This paper describes the design of the HORUS processor, a RISC processor augmented with a concurrent error detection mechanism, and the architectural modifications needed on the original design to minimize the resulting performance penalty.",2002,0, 2109,Application of neural networks and filtered back projection to wafer defect cluster identification,"During an electrical testing stage, each die on a wafer must be tested to determine whether it functions as it was originally designed. In the case of a clustered defect on the wafer, such as scratches, stains, or localized failed patterns, the tester may not detect all of the defective dies in the flawed area. To avoid the defective dies proceeding to final assembly, an existing tool is currently used by a testing factory to detect the defect cluster and mark all the defective dies in the flawed region or close to the flawed region; otherwise, the testing factory must assign five to ten workers to check the wafers and hand mark the defective dies. This paper proposes two new wafer-scale defect cluster identifiers to detect the defect clusters, and compares them with the existing tool used in the industry.
The experimental results verify that one of the proposed algorithms is very effective in defect identification and achieves better performance than the existing tool.",2002,0, 2110,gMeasure: a group-based network performance measurement service for peer-to-peer applications,"To enhance the efficiency of resource discovery and distribution, it is necessary to effectively estimate the network performance, in terms of metrics such as round-trip time and hop-count, between any peer-pair in a peer-to-peer application. Measurement in large scale network for peer-to-peer applications and services faces many challenges, such as scalability and quality of service (QoS) support. To address these issues, we present an architecture of a group-based performance measurement service (gMeasure), which disseminates and estimates network performance information for peer-to-peer applications. The key concept in gMeasure is Measurement Groups (MeGroups), in which hosts are self-organized to form a scalable hierarchical structure. A set of heuristic algorithms are proposed to optimize the group organization to reduce system cost and improve system measurement accuracy.",2002,0, 2111,Forward resource reservation for QoS provisioning in OBS systems,"This paper addresses the issue of providing QoS services for optical burst switching (OBS) systems. We propose a linear predictive filter (LPF)-based forward resource reservation method to reduce the burst delay at edge routers. An aggressive reservation method is proposed to increase the successful forward reservation probability and to improve the delay reduction performance. We also discuss a QoS strategy that achieves burst delay differentiation for different classes of traffic by extending the FRR scheme. We analyze the latency reduction improvement gained by our FRR scheme, and evaluate the bandwidth cost of the FRR-based QoS strategy. Our scheme yields significant delay reduction for time-critical traffic, while maintaining the bandwidth overhead within limits.",2002,0, 2112,Dealing with increasing data volumes and decreasing resources,"The US Naval Oceanographic Office (NAVOCEANO) has recently updated its survey vessels and launches to include the latest generation of high-resolution multibeam and digital side-scan sonar systems, along with state-of-the-art ancillary sensors. This has resulted in NAVOCEANO possessing a tremendous ocean observing and mapping capability. However, these systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. It is estimated that the amount of data to be processed will increase by an overwhelming 2000 times above present data quantities. NAVOCEANO is meeting this challenge on a number of fronts that include a series of hardware and software improvements. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point to be viewed and validated. This was achieved with the replacement of the traditional line-by-line editing approach with an automated cleaning module, and an area-based editor (ABE) integrated with existing commercial off-the-shelf processing and visualization packages. 
NAVOCEANO has entered into two cooperative research and development agreements (CRADAs) - one with the Science Applications International Corporation (SAIC), Newport, RI, USA, and the other with Interactive Visualization Systems (IVS), Fredericton, N.B., Canada, to integrate the ABE with SAIC's SABER product and IVS's Fledermaus 3D visualization product. This paper presents an overview of the new approach and data results and metrics of the effort required to process data, including editing, quality control, and product generation for multibeam data utilizing targets from digital imagery data and automated techniques.",2002,0, 2113,Intelligent multi-agent approach to fault location and diagnosis on railway 10kV automatic blocking and continuous power lines,"This paper discusses the intelligent multi-agent technology, and proposes an intelligent multi-agent based accurate fault location detection and fault diagnosis system applied in 10kV automatic blocking and continuous power transmission lines along the railway. Agents are software processes capable of searching for information in the networks, interacting with pieces of equipment and performing tasks on behalf of their owner (device). Moreover, they are autonomous and cooperative. Intelligent agents also have the capability to learn as the power supply network topology or environment changes. The system architecture is proposed, and the features of each agent are described. Analysis brings forth the merits of this fault location and diagnosis system.",2002,0, 2114,Air quality data remediation by means of ANN,"We present an application of neural networks to air quality time series remediation. The focus has been set on photochemical pollutants, and particularly on ozone, considering statistical correlations between precursors and secondary pollutants. After a preliminary study of the phenomenon, we tried to adapt a predictive MLP (multi layer perceptron) network to fulfill data gaps. The selected input was, along with ozone series, ozone precursors (NOx) and meteorological variables (solar radiation, wind velocity and temperature). We then proceeded in selecting the most representative periods for the ozone cycle. We ran all tests for an 80-hour validation set (the most representative gap width in our data base) and an accuracy analysis with respect to gap width has been performed too. In order to maximize the process automation, a software tool has been implemented in the Matlab environment. The ANN validation showed generally good results but a considerable instability in data prediction has been found out. The re-introduction of predicted data as input of following simulations generates an uncontrolled error propagation scarcely highlighted by the error autocorrelation analysis usually performed.",2002,0, 2115,Proactive detection of software aging mechanisms in performance critical computers,"Software aging is a phenomenon, usually caused by resource contention, that can cause mission critical and business critical computer systems to hang, panic, or suffer performance degradation. If the incipience or onset of software aging mechanisms can be reliably detected in advance of performance degradation, corrective actions can be taken to prevent system hangs, or dynamic failover events can be triggered in fault tolerant systems. In the 1990s the U.S. Dept.
of Energy and NASA funded development of an advanced statistical pattern recognition method called the multivariate state estimation technique (MSET) for proactive online detection of dynamic sensor and signal anomalies in nuclear power plants and Space Shuttle Main Engine telemetry data. The present investigation was undertaken to investigate the feasibility and practicability of applying MSET for realtime proactive detection of software aging mechanisms in complex, multiCPU servers. The procedure uses MSET for model based parameter estimation in conjunction with statistical fault detection and Bayesian fault decision processing. A realtime software telemetry harness was designed to continuously sample over 50 performance metrics related to computer system load, throughput, queue lengths, and transaction latencies. A series of fault injection experiments was conducted using a ""memory leak"" injector tool with controllable parasitic resource consumption rates. MSET was able to reliably detect the onset of resource contention problems with high sensitivity and excellent false-alarm avoidance. Spin-off applications of this NASA-funded innovation for business critical eCommerce servers are described.",2002,0, 2116,A rigorous approach to reviewing formal specifications,"A new approach to rigorously reviewing formal specifications to ensure their internal consistency and validity is forwarded. This approach includes four steps: (1) deriving properties as review targets based on the syntax and semantics of the specification, (2) building a review task tree to present all the necessary review tasks for each property, (3) carrying out reviews based on the review task tree, and (4) analyzing the review results to determine whether faults are detected or not. We apply this technique to the SOFL specification language, which is an integrated formalism of VDM, Petri nets, and data flow diagrams to discuss how each step is performed.",2002,0, 2117,Software reliability corroboration,"We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with the estimated probability. Presumed reliability needs to be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis increases, we change the degree of belief in the hypothesis. Depending on the corroboration evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required system certification tests, even for high assurance systems so far considered impossible to certify.",2002,0, 2118,A process for software architecture evaluation using metrics,"Software systems often undergo changes. Changes are necessary not only to fix defects but also to accommodate new features demanded by users. Most of the time, changes are made under schedule and budget constraints and developers lack time to study the software architecture and select the best way to implement the changes. As a result, the code degenerates, making it differ from the planned design. The time spent on the planned design to create architecture to satisfy certain properties is lost, and the systems may not satisfy those properties any more. 
We describe an approach to systematically detect and correct deviations from the planned design as soon as possible based on architectural guidelines. We also describe a case study, in which the process was applied.",2002,0, 2119,Application of neural network for predicting software development faults using object-oriented design metrics,"In this paper, we present the application of neural network for predicting software development faults including object-oriented faults. Object-oriented metrics can be used in quality estimation. In practice, quality estimation means either estimating reliability or maintainability. In the context of object-oriented metrics work, reliability is typically measured as the number of defects. Object-oriented design metrics are used as the independent variables and the number of faults is used as dependent variable in our study. Software metrics used include those concerning inheritance measures, complexity measures, coupling measures and object memory allocation measures. We also test the goodness of fit of neural network model by comparing the prediction result for software faults with multiple regression model. Our study is conducted on three industrial real-time systems that contain a number of natural faults that has been reported for three years (Mei-Huei Tang et al., 1999).",2002,0, 2120,A SOM-based method for feature selection,"This paper presents a method, called feature competitive algorithm (FCA), for feature selection, which is based on an unsupervised neural network, the self-organising map (SOM). The FCA is capable of selecting the most important features describing target concepts from a given whole set of features via the unsupervised learning. The FCA is simple to implement and fast in feature selection as the learning can be done automatically and no need for training data. A quantitative measure, called average distance distortion ratio, is figured out to assess the quality of the selected feature set. An asymptotic optimal feature set can then be determined on the basis of the assessment. This addresses an open research issue in feature selection. This method has been applied to a real case, a software document collection consisting of a set of UNIX command manual pages. The results obtained from a retrieval experiment based on this collection demonstrated some very promising potential.",2002,0, 2121,Things can reunite,"
",2002,0, 2122,Using functional view of software to increase performance in defect testing,"Summary form only given, as follows. Software at its primitive level may be viewed as function or mapping according to some specification from set of inputs and their related outputs. In this view the system is considered as a `black box?? whose behavior can only be determined by analyzing its inputs and outputs. Within this domain higher performance in defect testing can be achieved by using techniques that can be executed within resource limitations, thus understanding the software requirements adequately enough to expose majority of errors in system function, performance and behavior. The paper uncovers some innovative input and test reduction approaches considering the functional view of software, leading to efficient and increased fault detection. Functional view of software, black box testing, test reduction, domain to range ratio, input output analysis, testability, equivalence partitioning.",2002,0, 2123,Voice over asynchronous transfer modc(ATM),"
",2002,0, 2124,The harmonic impact of self-generating in power factor correction equipment of industrial loads: real cases studies,"This paper shows the impact of the self-generating installation in industrial loads, the problems occurred in field and the proposed solutions based from the harmonic point of view. To illustrate these points, the paper describes two facilities that installed self-generating and all the measurements and studies performed to analyze the electrical problems detected. Some studies results are shown and the implemented solutions are also described.",2002,0, 2125,Identification of voltage sag characteristics from the measured responses,The paper proposes new approach for extraction of voltage sag characteristics from the recorded waveforms. A dedicated software was developed in Matlab for the extraction of voltage sag characteristics from recorded waveforms. The method and capability are illustrated on voltage waveforms obtained from IEEE P1159.2 Working Group on Power Quality Event Characterisation.,2002,0, 2126,Subjective evaluation of synthetic intonation,"The paper describes a method for evaluating the quality of synthetic intonation using subjective techniques. This perceptual method of assessing intonation, not only evaluates the quality of synthetic intonation, but also allows us to compare different models of intonation to know which one is the most natural from a perceptual point of view. This procedure has been used to assess the quality of an implementation of Fujisaki's intonation model (Fujisaki, H. and Hirose, K., 1984) for the Basque language (Navas, E. et al., 2000). The evaluation involved 30 participants and results show that the intonation model developed has introduced a considerable improvement and that the overall quality achieved is good.",2002,0, 2127,Development and evaluation of physical properties for digital intra-oral radiographic system,"As a part of the development of a dental digital radiographic (DDR) system using a CMOS sensor, we developed hardware and software based on a graphical user interface (GUI) to acquire and display intra-oral images. The aims of this study were to develop the DDR system and evaluate its physical properties. Electric signals generated from the CMOS sensor were transformed to digital images through a control computer equipped with a USB board. The distance between the X-ray tube and the CMOS sensor was varied between 10-40 cm for optimal image quality. To evaluate the image quality according to dose variance, phantom images (60 kVp, 7 mA) were obtained at 0.03, 0.05, 0.08, 0.10, and 0.12 s of exposure time and signal-to-noise ratio (SNR) was calculated form the phantom image data. The modulation transfer function (MTF) was obtained as the Fourier transform of the line spread function (LSF), a derivative of the edge spread function (ESF) of sharp edge images acquired at exposure conditions of 60 kVp and 0.56 mA. The most compatible contrast and distinct focal point length was recorded at 20 cm. The resolution of the DDR system was approximately 6.2 line pair per mm. The developed DDR system could be used for clinical diagnosis with improvement of acquisition time and resolution. Measurement of other physical factors such as detected quantum efficiency (DQE) would be necessary to evaluate the physical properties of the DDR system.",2002,0, 2128,New directions in measurement for software quality control,"Assessing and controlling software quality is still an immature discipline. 
One of the reasons for this is that many of the concepts and terms that are used in discussing and describing quality are overloaded with a history from manufacturing quality. We argue in this paper that a quite distinct approach is needed to software quality control as compared with manufacturing quality control. In particular, the emphasis in software quality control is in design to fulfill business needs, rather than replication to agreed standards. We will describe how quality goals can be derived from business needs. Following that, we will introduce an approach to quality control that uses rich causal models, which can take into account human as well as technological influences. A significant concern of developing such models is the limited sample sizes that are available for eliciting model parameters. In the final section of the paper we will show how expert judgment can be reliably used to elicit parameters in the absence of statistical data. In total this provides an agenda for developing a framework for quality control in software engineering that is freed from the shackles of an inappropriate legacy.",2002,0, 2129,The importance of life cycle modeling to defect detection and prevention,"In many low mature organizations dynamic testing is often the only defect detection method applied. Thus, defects are detected rather late in the development process. High rework and testing effort, typically under time pressure, lead to unpredictable delivery dates and uncertain product quality. This paper presents several methods for early defect detection and prevention that have been in existence for quite some time, although not all of them are common practice. However, to use these methods operationally and scale them to a particular project or environment, they have to be positioned appropriately in the life cycle, especially in complex projects. Modeling the development life cycle, that is the construction of a project-specific life cycle, is an indispensable first step to recognize possible defect injection points throughout the development project and to optimize the application of the available methods for defect detection and prevention. This paper discusses the importance of life cycle modeling for defect detection and prevention and presents a set of concrete, proven methods that can be used to optimize defect detection and prevention. In particular, software inspections, static code analysis, defect measurement and defect causal analysis are discussed. These methods allow early, low cost detection of defects, preventing them from propagating to later development stages and preventing the occurrence of similar defects in future projects.",2002,0, 2130,FPGA Realization of Wavelet Transform for Detection of Electric Power System Disturbances,"Realization of wavelet transform on field-programmable gate array (FPGA) devices for the detection of power system disturbances is proposed in this paper. This approach provides an integral signal-processing paradigm, where its embedded wavelet basis serves as a window function to monitor the signal variations very efficiently. By using this technique, the time information and frequency information can be unified as a visualization scheme, facilitating the supervision of electric power signals. To improve its computation performance, the proposed method starts with the software simulation of wavelet transform in order to formulate the mathematical model. 
This is followed by the circuit synthesis and timing analysis for the validation of the designated circuit. Then, the designated portfolio can be programmed into the FPGA chip through the download cable. The completed prototype will be tested through software-generated signals and utility-sampled signals, in which test scenarios covering several kinds of electric power quality disturbances are examined thoroughly. The test results support the practicality and advantages of the proposed method for the applications.",2002,0, 2131,An Adaptive Distance Relay and Its Performance Comparison with a Fixed Data Window Distance Relay,"This paper describes the design, implementation, and testing of an adaptive distance relay. The relay uses a fault detector to determine the inception of a fault and then uses data windows of appropriate length for estimating phasors and seen impedances. Hardware and software of the proposed adaptive relay are described in the paper. The relay was tested using a model power system and a real-time playback simulator. Performance of the relay was compared with a fixed data window distance relay. Some results are reported in the paper. The results indicate that the adaptive distance relay provides faster tripping in comparison to the fixed data window distance relay.",2002,0, 2132,Pitch estimation based on Circular AMDF,"Pitch period is a key parameter in speech compression, synthesis and recognition. The AMDF is often used to determine this parameter. Due to the falling trend of the AMDF curve, however, it is easy to make the estimated pitch doubled or halved. To correct these errors, the algorithm has to increase its complexity. In this paper, we propose a new function, Circular AMDF (CAMDF), and describe a new pitch detection algorithm based on it. The CAMDF conquers the defect of AMDF, and brings us some useful features, which not only simplify the procedure of estimation, but also efficiently reduce the estimation errors and improve the estimation precision. Many experiments show that CAMDF outperforms the other AMDF-based functions, such as HRAMDF and LVAMDF.",2002,0, 2133,Low-cost error containment and recovery for onboard guarded software upgrading and beyond,"Message-driven confidence-driven (MDCD) error containment and recovery, a low-cost approach to mitigating the effect of software design faults in distributed embedded systems, is developed for onboard guarded software upgrading for deep-space missions. In this paper, we first describe and verify the MDCD algorithms in which we introduce the notion of ""confidence-driven"" to complement the ""communication-induced"" approach employed by a number of existing checkpointing protocols to achieve error containment and recovery efficiency. We then conduct a model-based analysis to show that the algorithms ensure low performance overhead. Finally, we discuss the advantages of the MDCD approach and its potential utility as a general-purpose, low-cost software fault tolerance technique for distributed embedded computing",2002,0, 2134,Test generation and testability alternatives exploration of critical algorithms for embedded applications,"Presents an analysis of the behavioral descriptions of embedded systems to generate behavioral test patterns that are used to perform the exploration of design alternatives based on testability. In this way, during the hardware/software partitioning of the embedded system, testability aspects can be considered.
This paper presents an innovative error model for algorithmic (behavioral) descriptions, which allows for the generation of behavioral test patterns. They are converted into gate-level test sequences by using more-or-less accurate procedures based on scheduling information or both scheduling and allocation information. The paper shows, experimentally, that such converted gate-level test sequences provide a very high stuck-at fault coverage when applied to different gate-level implementations of the given behavioral specification. For this reason, our behavioral test patterns can be used to explore testability alternatives, by simply performing fault simulation at the gate level with the same set of patterns, without regenerating them for each circuit. Furthermore, whenever gate-level ATPGs are applied on the synthesized gate-level circuits, they obtain lower fault coverage with respect to our behavioral test patterns, in particular when considering circuits with hard-to-detect faults",2002,0, 2135,Projecting advanced enterprise network and service management to active networks,"Active networks is a promising technology that allows us to control the behavior of network nodes by programming them to perform advanced operations and computations. Active networks are changing considerably the scenery of computer networks and, consequently, affect the way network management is conducted. Current management techniques can be enhanced and their efficiency can be improved, while novel techniques can be deployed. This article discusses the impact of active networks on current network management practice by examining network management through the functional areas of fault, configuration, accounting, performance and security management. For each one of these functional areas, the limitations of the current applications and tools are presented, as well as how these limitations can be overcome by exploiting active networks. To illustrate the presented framework, several applications are examined. The contribution of this work is to analyze, classify, and assess the various models proposed in this area, and to outline new research directions",2002,0, 2136,Utility of popular software defect models,Numerical models can be used to track and predict the number of defects in developmental and operational software. This paper introduces techniques to critically assess the effectiveness of software defect reduction efforts as the data is being gathered. This can be achieved by observing the fundamental shape of the cumulative defect discovery curves and by judging how quickly the various defect models converge to common predictions of long term software performance,2002,0, 2137,Validation of guidance control software requirements specification for reliability and fault-tolerance,"A case study was performed to validate the integrity of a software requirements specification (SRS) for guidance control software (GCS) in terms of reliability and fault-tolerance. A partial verification of the GCS specification resulted. Two modeling formalisms were used to evaluate the SRS and to determine strategies for avoiding design defects and system failures. Z was applied first to detect and remove ambiguity from a part of the natural language based (NL-based) GCS SRS. Next, statecharts and activity-charts were constructed to visualize the Z description and make it executable. Using this formalism, the system behavior was assessed under normal and abnormal conditions. 
Faults were seeded into the model (i.e., an executable specification) to probe how the system would perform. The result of our analysis revealed that it is beneficial to construct a complete and consistent specification using this method (Z-to-statecharts). We discuss the significance of this approach, compare our work with similar studies, and propose approaches for improving fault tolerance. Our findings indicate that one can better understand the implications of the system requirements using the Z-statecharts approach to facilitate their specification and analysis. Consequently, this approach can help to avoid the problems that result when incorrectly specified artifacts (i.e., in this case requirements) force corrective rework",2002,0, 2138,Vision-model-based impairment metric to evaluate blocking artifacts in digital video,"In this paper, investigations are conducted to simplify and refine a vision-model-based video quality metric without compromising its prediction accuracy. Unlike other vision-model-based quality metrics, the proposed metric is parameterized using subjective quality assessment data recently provided by the Video Quality Experts Group. The quality metric is able to generate a perceptual distortion map for each and every video frame. A perceptual blocking distortion metric (PBDM) is introduced which utilizes this simplified quality metric. The PBDM is formulated based on the observation that blocking artifacts are noticeable only in certain regions of a picture. A method to segment blocking dominant regions is devised, and perceptual distortions in these regions are summed up to form an objective measure of blocking artifacts. Subjective and objective tests are conducted and the performance of the PBDM is assessed by a number of measures such as the Spearman rank-order correlation, the Pearson correlation, and the average absolute error. The results show a strong correlation between the objective blocking ratings and the mean opinion scores on blocking artifacts",2002,0, 2139,Body of knowledge for software quality measurement,"Measuring quality is the key to developing high-quality software. The author describes two approaches that help to identify the body of knowledge software engineers need to achieve this goal. The first approach derives knowledge requirements from a set of issues identified during two standards efforts: the IEEE Std. 1061-1998 for a Software Quality Metrics Methodology and the American National Standard Recommended Practice for Software Reliability (ANSI/AIAA R-013-1992). The second approach ties these knowledge requirements to phases in the software development life cycle. Together, these approaches define a body of knowledge that shows software engineers why and when to measure quality. Focusing on the entire software development life cycle, rather than just the coding phase, gives software engineers the comprehensive knowledge they need to enhance software quality and supports early detection and resolution of quality problems. The integration of product and process measurements lets engineers assess the interactions between them throughout the life cycle. Software engineers can apply this body of knowledge as a guideline for incorporating quality measurement in their projects.
Professional licensing and training programs will also find it useful",2002,0, 2140,Test case prioritization: a family of empirical studies,"To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of such prioritization is to increase a test suite's rate of fault detection. Previous work reported results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: 1) Can prioritization techniques be effective when targeted at specific modified versions; 2) what trade-offs exist between fine granularity and coarse granularity prioritization techniques; 3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? To address these questions, we have performed several new studies in which we empirically compared prioritization techniques using both controlled experiments and case studies",2002,0, 2141,UMTS EASYCOPE: a tool for UMTS network and algorithm evaluation,"For the UMTS radio access network, the problems of dimensioning/planning and algorithm evaluation/optimization is a challenging task, which requires methods and tools that are essentially different from the ones applied for 2nd generation systems. In order to generate reliable results for the highly flexible and dynamic UMTS, sophisticated simulation tools have to be applied. A UMTS model and its implementation is described. It allows an efficient evaluation of UMTS in terms of system performance figures such as capacity, coverage and QoS",2002,0, 2142,An approach for intelligent detection and fault diagnosis of vacuum circuit breakers,"In this paper, an approach for intelligent detection and fault diagnosis of vacuum circuit breakers is introduced, by which, the condition of a vacuum circuit breaker can be monitored on-line, and the detectable faults can be identified, located, displayed and saved for the use of analyzing their change tendencies. The main detecting principles and diagnostics are described. Both the hardware structure and software design are also presented.",2002,0, 2143,On the performance of a survivability architecture for networked computing systems,"This research focusses on the performance and timing behavior of a two level survivability architecture. The lower level of the architecture involves attack analysis based on kernel attack signatures and survivability handlers. Higher level survivability mechanisms are implemented using migratory autonomous agents. The potential for fast response to, and recovery from, malicious attacks is the main motivation to implement attack detection and survivability mechanisms at the kernel level. A timing analysis is presented that suggests the real-time feasibility of the two level approach. The limits to real-time response are identified from the host and network point of view. The experimental data derived is important for risk management and analysis in the presence of malicious network and computer attacks.",2002,0, 2144,Estimation of clock offset from one-way delay measurement on asymmetric paths,"As the Internet is shifting towards a reliable QoS-aware network, accurately synchronized clocks distributed on the Internet are becoming more significant. 
The network time protocol (NTP) is broadly deployed on the Internet for clock synchronization among distributed hosts, but is weak in asymmetric paths, i.e., it cannot accurately estimate the clock offset between two hosts when the forward and backward paths between them have different one-way delays. In this paper, we focus on estimating the offset and skew of a clock from one-way delay measurement between two hosts, and propose an idea for improvement of such estimations, which reduces estimation errors when the forward and backward paths have different bandwidths, a major factor in asymmetric delays",2002,0, 2145,Detecting changes in XML documents,"We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volumes of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of quality. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NP-hard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the optimal in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web",2002,0, 2146,Approximating a data stream for querying and estimation: algorithms and performance evaluation,"Obtaining fast and good-quality approximations to data distributions is a problem of central interest to database management. A variety of popular database applications, including approximate querying, similarity searching and data mining in most application domains, rely on such good-quality approximations. Histogram-based approximation is a very popular method in database theory and practice to succinctly represent a data distribution in a space-efficient manner. In this paper, we place the problem of histogram construction into perspective and we generalize it by raising the requirement of a finite data set and/or known data set size. We consider the case of an infinite data set in which data arrive continuously, forming an infinite data stream. In this context, we present single-pass algorithms that are capable of constructing histograms of provable good quality. We present algorithms for the fixed-window variant of the basic histogram construction problem, supporting incremental maintenance of the histograms. The proposed algorithms trade accuracy for speed and allow for a graceful tradeoff between the two, based on application requirements. 
In the case of approximate queries on infinite data streams, we present a detailed experimental evaluation comparing our algorithms with other applicable techniques using real data sets, demonstrating the superiority of our proposal",2002,0, 2147,Error detection by duplicated instructions in super-scalar processors,"This paper proposes a pure software technique ""error detection by duplicated instructions"" (EDDI), for detecting errors during usual system operation. Compared to other error-detection techniques that use hardware redundancy, EDDI does not require any hardware modifications to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. Especially for the fault in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods. These formulas use statistics of the program, which are collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. Then, the estimates were verified by simulation, in which a fault injector forced a bit-flip in the code segment of executable machine codes. The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in eight benchmark programs with EDDI, while on average, 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault-coverage without any extra hardware for error detection. This pure software technique is especially useful when designers cannot change the hardware, but they need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions that are added for detecting errors such that ""instruction-level parallelism"" (ILP) is maximized. Performance overhead can be reduced by increasing ILP within a single super-scalar processor. The execution time overhead in a 4-way super-scalar processor is less than the execution time overhead in the processors that can issue two instructions in one cycle",2002,0, 2148,Providing packet-loss guarantees in DiffServ architectures,"Differentiated Services (DiffServ) is a proposed architecture for the Internet in which various applications are supported using a simple classification scheme. Packets entering the DiffServ domain are marked depending on the packets' class. Premium service and assured service are the first two types of services proposed within the DiffServ architecture other than the best effort service. Premium service provides a strict guarantee on users' peak rates, hence delivering the highest quality of service (QoS). However, it expects to charge at high prices and also has low bandwidth utilization. The assured service provides high priority packets with preferential treatment over low priority packets but without any quantitative QoS guarantees. In this paper, we propose a new service, which is called loss guaranteed (LG) service for DiffServ architectures. This service can provide a quantitative QoS guarantee in terms of loss rate. A measurement-based admission control scheme and a signaling protocol are designed to implement the LG service. An extensive simulation model has been developed to study the performance and viability of the LG service model. 
We have tested a variety of traffic conditions and measurement parameter settings in our simulation. The results show that the LG service can achieve a high level of utilization while still reliably keeping the traffic within the maximum loss rate requirement. Indeed, we show that the DiffServ architecture can provide packet-loss guarantees without the need for explicit resource reservation",2002,0, 2149,Diagnosing quality of service faults in distributed applications,"QoS management refers to the allocation and scheduling of computing resources. Static QoS management techniques provide a guarantee that resources will be available when needed. These techniques allocate resources based on worst-case needs. This is especially important for applications with hard QoS requirements. However, this approach can waste resources. In contrast, a dynamic approach allocates and deallocates resources during the lifetime of an application. In the dynamic approach the application is started with an initial resource allocation. If the application does not meet its QoS requirements, a resource manager attempts to allocate more resources to the application until the application's QoS requirement is met. While this approach offers the opportunity to better manage resources and meet application QoS requirements, it also introduces a new set of problems. In particular, a key problem is detecting why a QoS requirement is not being satisfied and determining the cause and, consequently, which resource needs to be adjusted. This paper investigates a policy-based approach for addressing these problems. An architecture is presented and a prototype described. This is followed by a case study in which the prototype is used to diagnose QoS problems for a web application based on Apache",2002,0, 2150,Adapting extreme programming for a core software engineering course,"Over a decade ago, the manufacturing industry determined it needed to be more agile to thrive and prosper in a changing, nonlinear, uncertain and unpredictable business environment. The software engineering community has come to the same realization. A group of software methodologists has created a set of software development processes, termed agile methodologies, that have been specifically designed to respond to the demands of the turbulent software industry. Each of the processes in the set of agile processes comprises a set of practices. As educators, we must assess the emerging agile practices, integrate them into our courses (carefully), and share our experiences and results from doing so. The paper discusses the use of extreme programming, a popular agile methodology, in a senior software engineering course at North Carolina State University. It then provides recommendations for integrating agile principles into a core software engineering course",2002,0, 2151,Energizing software engineering education through real-world projects as experimental studies,"Our experience shows that a typical industrial project can enhance software engineering research and bring theories to life. The University of Kentucky (UK) is in the initial phase of developing a software engineering curriculum. The first course, a graduate-level survey of software engineering, strongly emphasized quality engineering. Assisted by the UK clinic, the students undertook a project to develop a phenylalanine milligram tracker. It helps phenylketonuria (PKU) sufferers to monitor their diet as well as assists PKU researchers to collect data.
The project was also used as an informal experimental study. The applied project approach to teaching software engineering appears to be successful thus far. The approach taught many important software and quality engineering principles to inexperienced graduate students in an accurately simulated industrial development environment. It resulted in the development of a framework for describing and evaluating such a real-world project, including evaluation of the notion of a user advocate. It also resulted in interesting experimental trends, though based on a very small sample. Specifically, estimation skills seem to improve over time and function point estimation may be more accurate than LOC estimation",2002,0, 2152,SUE inspection: an effective method for systematic usability evaluation of hypermedia,"SUE inspection is a novel usability evaluation method for hypermedia, which falls into the category of inspection methods. Its primary goal is to favor a systematic usability evaluation (SUE), for supplying hypermedia usability inspectors with a structured flow of activities, allowing them to obtain more reliable, comparable, and cost-effective evaluation results. This is obtained especially due to the use of evaluation patterns, called abstract tasks (AT), which describe in details the activities that evaluators must carry out during inspection. AT helps share and transfer evaluation know-how among different evaluators, thus making it easier to learn the SUE inspection method by newcomers. A further notable advantage provided by the SUE inspection over other existent approaches is that it focuses on navigation and information structures, making evident some problems that other ""surface-oriented"" approaches might not reveal. This paper describes the SUE inspection for hypermedia. It also reports on a validation experiment, involving 28 evaluators, that has shown the major effectiveness and efficiency of the SUE inspection for hypermedia, with respect to traditional heuristic evaluation techniques",2002,0, 2153,Architecture-centric software evolution by software metrics and design patterns,"It is shown how software metrics and architectural patterns can be used for the management of software evolution. In the presented architecture-centric software evolution method the quality of a software system is assured in the software design phase by computing various kinds of design metrics from the system architecture, by automatically exploring instances of design patterns and anti-patterns from the architecture, and by reporting potential quality problems to the designers. The same analysis is applied in the implementation phase to the software code, thus ensuring that it matches the quality and structure of the reference architecture. Finally, the quality of the ultimate system is predicted by studying the development history of previous projects with a similar composition of characteristic software metrics and patterns. The architecture-centric software evolution method is supported by two integrated software tools, the metrics and pattern-mining tool Maisa and the reverse-engineering tool Columbus",2002,0, 2154,On the predictability of program behavior using different input data sets,"Smaller input data sets such as the test and the train input sets are commonly used in simulation to estimate the impact of architecture/micro-architecture features on the performance of SPEC benchmarks. They are also used for profile feedback compiler optimizations. 
In this paper, we examine the reliability of reduced input sets for performance simulation and profile feedback optimizations. We study high-level metrics such as IPC and procedure-level profiles as well as lower-level measurements such as execution paths exercised by various input sets on the SPEC2000int benchmark. Our study indicates that the test input sets are not suitable to be used for simulation because they do not have an execution profile similar to the reference input runs. The train data set is better than the test data sets at maintaining similar profiles to the reference input set. However, the observed execution paths leading to cache misses are very different between using the smaller input sets and the reference input sets. For current profile based optimizations, the differences in quality of profiles may not have a significant impact on performance, as tested on the Itanium processor with an Intel compiler. However, we believe the impact of profile quality will be greater for more aggressive profile guided optimizations, such as cache prefetching",2002,0, 2155,Condition based maintenance of PE/XLPE-insulated medium voltage cable networks-verification of the IRC-analysis to determine the cable age,For preventive maintenance and more economical operation of polymer insulated cable networks a destruction-free estimation of the status of laid PE/XLPE-cables is a basic need. This contribution presents a condition based maintenance concept with the IRC-Analysis applied to a part of the 20 kV-cable network of a major German utility. With this concept the distributor was able to reduce the number of insulation failures - especially for cables manufactured before 1984. The general tendency of increasing faults was broken and a higher quality of energy supply was achieved.,2002,0, 2156,VC rating and quality metrics: why bother? [SoC],"System-on-a-chip (SoC) is the paramount challenge of the electronic industry for the next millennium. The semiconductor industry has delivered what we were expecting and what was predicted: silicon availability for over 10 million gates. The VSIA (Virtual Socket Initiative Alliance) has defined industry standards and data formats for SoC. The reuse methodology manual, the first 'how-to' book on creating reusable IPs (intellectual properties) for SoC designs, has been published. EDA tool providers understand the issues and are proposing new tools and solutions on a quarterly basis. The last stage needs to be run: consolidate the experience and know-how of the VSIA and the OpenMORE IP rating system into an industry-adopted set of VC (virtual component) quality metrics, and then tackle the next challenges: formal system specifications and VC transfer infrastructure. The objective of this paper is to set the stage for the final step towards a VC quality metrics effort that the industry needs to adopt, and define the next achievable goals.",2002,0, 2157,Improving the efficiency and quality of simulation-based behavioral model verification using dynamic Bayesian criteria,"In order to improve the effectiveness of simulation-based behavioral verification, it is important to determine when to stop the current test strategy and to switch to a test strategy that is expected to be more rewarding. The location of a stopping point is dependent on the statistical model one chooses to describe the coverage behavior during verification. In this paper, we present dynamic Bayesian (DB) and confidence-based dynamic Bayesian (CDB) stopping rules for behavioral VHDL model verification.
The statistical assumptions of the proposed stopping rules are based on experimental evaluation of probability distribution functions and correlation functions. Fourteen behavioral VHDL models were used in experiments to demonstrate the higher efficiency of the proposed stopping rules compared with the existing ones. Results show that the DB and the CDB stopping rules outperform all the existing stopping rules with an average improvement of at least 69% in coverage per testing pattern used.",2002,0, 2158,Integrated inductors modeling and tools for automatic selection and layout generation,"In this work we propose new equivalent circuit models for integrated inductors based on the conventional lumped element model. Automatic tools to assist the designers in selecting and automatically laying out integrated inductors are also reported. Model development is based on measurements taken from more than 100 integrated spiral inductors designed and fabricated in a standard silicon process. We demonstrate the capacity of the proposed models to accurately predict the integrated inductor behavior in a wider frequency range than the conventional model. Our equations are coded in a set of tools that requests the desired inductance value at a determined frequency and gives back the geometry of the best inductors available in a particular technology.",2002,0, 2159,A performance model of a PC based IP software router,"We can define a software router as a general-purpose computer that executes a computer program capable of forwarding IP datagrams among network interface cards attached to its I/O bus. This paper presents a parametric model of a PC based IP software router. Validation results clearly show that the model accurately estimates the performance of the modeled system at different levels of detail. In addition, the paper presents experimental results that provide insights about the detailed functioning of such a system and demonstrate that the model is valid not only for the characterized systems but for a reasonable range of CPU, memory and I/O bus operation speeds.",2002,0, 2160,FPGA realization of wavelet transform for detection of electric power system disturbances,"Realization of the wavelet transform on a field-programmable gate array (FPGA) device for the detection of power system disturbances is proposed in this paper. This approach provides an integral signal-processing paradigm, where its embedded wavelet basis serves as a window function to monitor the signal variations very efficiently. By using this technique, the time information and frequency information can be unified as a visualization scheme, facilitating the supervision of electric power signals. To improve its computation performance, the proposed method starts with the software simulation of the wavelet transform in order to formulate the mathematical model. This is followed by the circuit synthesis and timing analysis for the validation of the designated circuit. Then, the designated portfolio can be programmed into the FPGA chip through the download cable. The completed prototype is then tested with software-generated signals and utility-sampled signals, in which test scenarios covering several kinds of electric power quality disturbances are examined thoroughly. The test results support the practicality and advantages of the proposed method for these applications",2002,0,2130 2161,A novel software implementation concept for power quality study,"A novel concept for power quality study is proposed.
The concept integrates the power system modeling, classifying and characterizing of power quality events, studying equipment sensitivity to the event disturbance, and locating the point of event occurrence into one unified frame. Both Fourier and wavelet analyses are applied for extracting distinct features of various types of events as well as for characterizing the events. A new fuzzy expert system for classifying power quality events based on such features is presented with improved performance over previous neural network based methods. A novel simulation method is outlined for evaluating the operating characteristics of the equipment during specific events. A software prototype implementing the concept has been developed in MATLAB. The voltage sag event is taken as an example for illustrating the analysis methods and software implementation issues. It is concluded that the proposed approach is feasible and promising for real world applications",2002,0,1729 2162,Recording and analyzing eye movements during ocular fixation in schizophrenic subjects,"Previous studies have shown that schizophrenic patients, compared to healthy subjects, present abnormalities in eye fixation tasks. But in these studies the evaluations of the eye movement are not objective. They are based on visual inspection of the records. The quality of fixation is assessed in terms of the absence of saccades. By using a predefined scale the records are rated from best to worst. In this paper, we propose a new method to quantify eye fixation. In this method, our analysis examines the metric properties of each component of eye fixation movement (saccades, square wave jerks, and drift). A computer system is developed to record, stimulate and analyze the eye movement. A variety of software tools are developed to assist a clinician in the analysis of the data",2002,0, 2163,Application of task-specific metrics in JPEG2000 ROI compression,"In this paper we consider the problem of quantifying target distortion in lossy compression using the JPEG2000 Region Of Interest (ROI) option. This approach brings about new problems that need to be addressed, such as automated selection of target regions and ROI preservation parameters. By combining variations of traditional metrics and newly developed segmentation-based image quality metrics, we illustrate several ways to quantify target distortion using ROI preserving compression for various methods including JPEG2000",2002,0,2241 2164,AQuA: an adaptive architecture that provides dependable distributed objects,"Building dependable distributed systems from commercial off-the-shelf components is of growing practical importance. For both cost and production reasons, there is interest in approaches and architectures that facilitate building such systems. The AQuA architecture is one such approach; its goal is to provide adaptive fault tolerance to CORBA applications by replicating objects. The AQuA architecture allows application programmers to request desired levels of dependability during applications' runtimes. It provides fault tolerance mechanisms to ensure that a CORBA client can always obtain reliable services, even if the CORBA server object that provides the desired services suffers from crash failures and value faults. AQuA includes a replicated dependability manager that provides dependability management by configuring the system in response to applications' requests and changes in system resources due to faults. It uses Maestro/Ensemble to provide group communication services.
It contains a gateway to intercept standard CORBA IIOP messages to allow any standard CORBA application to use AQuA. It provides different types of replication schemes to forward messages reliably to the remote replicated objects. All of the replication schemes ensure strong, data consistency among replicas. This paper describes the AQuA architecture and presents, in detail, the active replication pass-first scheme. In addition, the interface to the dependability manager and the design of the dependability manager replication are also described. Finally, we describe performance measurements that were conducted for the active replication pass-first scheme, and we present results from our study of fault detection, recovery, and blocking times.",2003,0, 2165,Automatic QoS control,"User sessions, usually consisting of sequences of consecutive requests from customers, comprise most of an e-commerce site's workload. These requests execute e-business functions such as browse, search, register, login, add to shopping cart, and pay. Once we properly understand and characterize a workload, we must assess its effect on the site's quality of service (QoS), which is defined in terms of response time, throughput, the probability that requests will be rejected, and availability. We can assess an e-commerce site's QoS in many different ways. One approach is by measuring the site's performance, which we can determine from a production site using a real workload or from a test site using a synthetic workload (as in load testing). Another approach consists of using performance models. I look at the approach my colleagues at George Mason and I took that uses performance models in the design and implementation of automatic QoS controller for e-commerce sites.",2003,0, 2166,Multiparadigm scheduling for distributed real-time embedded computing,"Increasingly complex requirements, coupled with tighter economic and organizational constraints, are making it hard to build complex distributed real-time embedded (DRE) systems entirely from scratch. Therefore, the proportion of DRE systems made up of commercial-off-the-shelf (COTS) hardware and software is increasing significantly. There are relatively few systematic empirical studies, however, that illustrate how suitable COTS-based hardware and software have become for mission-critical DRE systems. This paper provides the following contributions to the study of real-time quality-of-service (QoS) assurance and performance in COTS-based DRE systems: it presents evidence that flexible configuration of COTS middleware mechanisms, and the operating system (OS) settings they use, allows DRE systems to meet critical QoS requirements over a wider range of load and jitter conditions than statically configured systems; it shows that in addition to making critical QoS assurances, noncritical QoS performance can be improved through flexible support for alternative scheduling strategies; and it presents an empirical study of three canonical scheduling strategies; specifically the conditions that predict success of a strategy for a production-quality DRE avionics mission computing system. 
Our results show that applying a flexible scheduling framework to COTS hardware, OSs, and middleware improves real-time QoS assurance and performance for mission-critical DRE systems.",2003,0, 2167,Assessing the quality of a cross-national e-government Web site: a study of the forum on strategic management knowledge exchange,"As organizations have begun increasingly to communicate and interact with consumers via the Web, so the appropriate design of offerings has become a central issue. Attracting and retaining consumers requires acute understanding of the requirements of users and appropriate tailoring of solutions. Recently, the development of Web offerings has moved beyond the commercial domain to government, both national and international. In this paper, we examine the results of a quality survey of a cross-national e-government Web site provided by the OECD. The site is examined before and after a major redesign process. The instrument, WebQual, draws on previous work in three areas: Web site usability, information quality, and service interaction quality to provide a rounded framework for assessing e-commerce and e-government offerings. The metrics and findings demonstrate not only the strengths and weaknesses of the sites before and after design, but the very different impressions of users in different member countries. These findings have implications for cross-national e-government Web site offerings.",2003,0, 2168,Perfect failure detection in timed asynchronous systems,"Perfect failure detectors can correctly decide whether a computer is crashed. However, it is impossible to implement a perfect failure detector in purely asynchronous systems. We show how to enforce perfect failure detection in timed asynchronous systems with hardware watchdogs. The two main system model assumptions are: 1) each computer can measure time intervals with a known maximum error and 2) each computer has a watchdog that crashes the computer unless the watchdog is periodically updated. We have implemented a system that satisfies both assumptions using a combination of off-the-shelf software and hardware. To implement a perfect failure detector for process crash failures, we show that, in some systems, a hardware watchdog is actually not necessary.",2003,0, 2169,Transparent recovery from intermittent faults in time-triggered distributed systems,"The time-triggered model, with tasks scheduled in static (off line) fashion, provides a high degree of timing predictability in safety-critical distributed systems. Such systems must also tolerate transient and intermittent failures which occur far more frequently than permanent ones. Software-based recovery methods using temporal redundancy, such as task reexecution and primary/backup, while incurring performance overhead, are cost-effective methods of handling these failures. We present a constructive approach to integrating runtime recovery policies in a time-triggered distributed system. Furthermore, the method provides transparent failure recovery in that a processor recovering from task failures does not disrupt the operation of other processors. Given a general task graph with precedence and timing constraints and a specific fault model, the proposed method constructs the corresponding fault-tolerant (FT) schedule with sufficient slack to accommodate recovery. We introduce the cluster-based failure recovery concept which determines the best placement of slack within the FT schedule so as to minimize the resulting time overhead. 
Contingency schedules, also generated offline, revise this FT schedule to mask task failures on individual processors while preserving precedence and timing constraints. We present simulation results which show that, for small-scale embedded systems having task graphs of moderate complexity, the proposed approach generates FT schedules which incur about 30-40 percent performance overhead when compared to corresponding non-fault-tolerant ones.",2003,0, 2170,Building survivable services using redundancy and adaptation,"Survivable systems-that is, systems that can continue to provide service despite failures, intrusions, and other threats-are increasingly needed in a wide variety of civilian and military application areas. As a step toward realizing such systems, this paper advocates the use of redundancy and adaptation to build survivable services that can provide core functionality for implementing survivability in networked environments. An approach to building such services using these techniques is described and a concrete example involving a survivable communication service is given. This service is based on Cactus, a system for building highly configurable network protocols that offers the flexibility needed to easily add redundant and adaptive components. Initial performance results for a prototype implementation of the communication service built using Cactus/C2.1 running on Linux are also given.",2003,0, 2171,Nonblocking k-fold multicast networks,"Multicast communication involves transmitting information from a single source to multiple destinations and is a requirement in high-performance networks. Current trends in networking applications indicate an increasing demand in future networks for multicast capability. Many multicast applications require not only multicast capability, but also predictable communication performance such as guaranteed multicast latency and bandwidth. In this paper, we present a design for a nonblocking k-fold multicast network, in which any destination node can be involved in up to k simultaneous multicast connections in a nonblocking manner. We also develop an efficient routing algorithm for the network. As can be seen, a k-fold multicast network has significantly lower network cost than that of k copies of ordinary 1-fold multicast networks and is a cost effective choice for supporting arbitrary multicast communication.",2003,0, 2172,Optimal cost-effective design of parallel systems subject to imperfect fault-coverage,"Computer-based systems intended for critical applications are usually designed with sufficient redundancy to be tolerant of errors that may occur. However, under imperfect fault-coverage conditions (such as the system cannot adequately detect, locate, and recover from faults and errors in the system), system failures can result even when adequate redundancy is in place. Because parallel architecture is a well-known and powerful architecture for improving the reliability of fault-tolerant systems, this paper presents the cost-effective design policies of parallel systems subject to imperfect fault-coverage. The policies are designed by considering (1) cost of components, (2) failure cost of the system, (3) common-cause failures, and (4) performance levels of the system. Three kinds of cost functions are formulated considering that the total average cost of the system is based on: (1) system unreliability, (2) failure-time of the system, and (3) total processor-hours. 
It is shown that the MTTF (mean time to failure) of the system decreases when spares are increased beyond a certain limit. Therefore, this paper also presents optimal design policies to maximize the MTTF of these systems. The results of this paper can also be applied to gracefully degradable systems.",2003,0, 2173,Robust reliability design of diagnostic systems,"Diagnostic systems are software-based built-in-test systems which detect, isolate and indicate the failures of the prime systems. The use of diagnostic systems reduces the losses due to the failures of the prime systems and facilitates subsequent repairs. Thus diagnostic systems have found extensive applications in industry. The algorithms performing operations for diagnosis are important parts of the diagnostic systems. If the algorithms are not adequately designed, the systems will be sensitive to noise sources, and commit type I error (α) and type II error (β). This paper aims to improve the robustness and reliability of the diagnostic systems through robust design of the algorithms by using reliability as an experimental response. To conduct the design, we define the reliability and robustness of the systems, and propose their metrics. The influences of α and β errors on reliability are evaluated and discussed. The effects of noise factors on robustness are assessed. The classical P-diagram is modified; a generic P-diagram containing both prime and diagnostic systems is created. Based on the proposed dynamic reliability metric, we describe the steps for robust reliability design and develop a method for experimental data analysis. The robustness and reliability of the diagnostic systems are maximized by choosing optimal levels of algorithm parameters. An automobile example is presented to illustrate how the proposed design method is used. The example shows that the method is efficient in defining, measuring and building robustness and reliability.",2003,0, 2174,"Comparing reliability predictions to field data for plastic parts in a military, airborne environment","This paper examines two popular prediction methods and compares the results to field data collected on plastic encapsulated microcircuits (PEMs) operating in a military, airborne environment. The comparison study focused on three digital circuit card assemblies (CCAs) designed primarily with plastic, surface mount parts. Predictions were completed using MIL-HDBK-217 models and PRISM®, the latest software tool developed by the Reliability Analysis Center (RAC). The MIL-HDBK-217 predictions which correlated best to the field data were based on quality levels (πQ) of 2 and 3, rather than the typical πQ values of 5 or higher, traditionally assigned per the handbook's screening classifications for commercial, plastic parts. The initial findings from the PRISM® tool revealed the predictions were optimistic in comparison to the observed field performance, meaning the predictions yielded higher mean time to failure (MTTF) values than demonstrated. Further evaluation of the PRISM® models showed how modifying default values could improve the prediction accuracy. The impact of the system level multiplier was also determined to be a major contributor to the difference between PRISM® predictions and field data. Finally, experience data proved valuable in refining the prediction results.
The findings from this study provide justification to modify specific modeling factors to improve the predictions for PEMs, and also serve as a baseline to evaluate future alternative prediction methods.",2003,0, 2175,Reliability Centered Maintenance Maturity Level Roadmap,"Numerous maintenance organizations are implementing various forms of reliability centered maintenance (RCM). Whether it is a classic or a streamlined RCM program, the challenge is to do it fast, but with predictable performance, quality, cost, and schedule. Hence, organizations need guidance to ensure their RCM programs are consistently implemented across the company and to improve their ability to manage key RCM process areas such as analysis, training, and metrics. The RCM Maturity Level Roadmap provides the structure for an organization to assess its RCM maturity and key process area capability. In addition, the Roadmap helps establish priorities for improvement and guide the implementation of these improvements.",2003,0, 2176,A unified scheme of some Nonhomogenous Poisson process models for software reliability estimation,"In this paper, we describe how several existing software reliability growth models based on Nonhomogeneous Poisson processes (NHPPs) can be comprehensively derived by applying the concept of weighted arithmetic, weighted geometric, or weighted harmonic mean. Furthermore, based on these three weighted means, we thus propose a more general NHPP model from the quasi arithmetic viewpoint. In addition to the above three means, we formulate a more general transformation that includes a parametric family of power transformations. Under this general framework, we verify the existing NHPP models and derive several new NHPP models. We show that these approaches cover a number of well-known models under different conditions.",2003,0, 2177,Parametric fault tree for the dependability analysis of redundant systems and its high-level Petri net semantics,"In order to cope efficiently with the dependability analysis of redundant systems with replicated units, a new, more compact fault-tree formalism, called Parametric Fault Tree (PFT), is defined. In a PFT formalism, replicated units are folded and indexed so that only one representative of the similar replicas is included in the model. From the PFT, a list of parametric cut sets can be derived, where only the relevant patterns leading to the system failure are evidenced regardless of the actual identity of the component in the cut set. The paper provides an algorithm to convert a PFT into a class of High-Level Petri Nets, called SWN. The purpose of this conversion is twofold: to exploit the modeling power and flexibility of the SWN formalism, allowing the analyst to include statistical dependencies that could not have been accommodated into the corresponding PFT and to exploit the capability of the SWN formalism to generate a lumped Markov chain, thus alleviating the state explosion problem. The search for the minimal cut sets (qualitative analysis) can be often performed by a structural T-invariant analysis on the generated SWN. The advantages that can be obtained from the translation of a PFT into a SWN are investigated considering a fault-tolerant multiprocessor system example.",2003,0, 2178,Analysis and FPGA implementation of image restoration under resource constraints,"Programmable logic is emerging as an attractive solution for many digital signal processing applications. 
In this work, we have investigated issues arising due to the resource constraints of FPGA-based systems. Using an iterative image restoration algorithm as an example we have shown how to manipulate the original algorithm to suit it to an FPGA implementation. Consequences of such manipulations have been estimated, such as loss of quality in the output image. We also present performance results from an actual implementation on a Xilinx FPGA. Our experiments demonstrate that, for different criteria, such as result quality or speed, the best implementation is different as well.",2003,0, 2179,Scientific management meets the personal software process,"There are several ways that the personal software process (PSP) can bolster the principles of scientific management to provide needed artifacts. PSP data can be gathered and then collated to make early time and budget estimates more accurate. Also, PSP data indicates that the oft-repeated belief that projects should spend more time on planning and design is correct. In both cases, calculating time across cyclic development is important. A century apart, two pioneers meet to improve the often-black art of management.",2003,0, 2180,Estimation of bus performance for a tuplespace in an embedded architecture [factory automation application],"This paper describes a design methodology for the estimation of bus performance of a tuplespace for factory automation. The need of a tuplespace is motivated by the characteristics of typical embedded architectures for factory automation. We describe the features of a bus for embedded applications and the problem of estimating its performance, and present a rapid prototyping design methodology developed for a qualitative and quantitative estimation. The methodology is based on a mix of different modeling languages such as Java, C++, SystemC and Network Simulator2 (NS2). Its application allows the estimation of the expected performance of the bus under design in relation to the developed tuplespace.",2003,0, 2181,Implementation and performance of cooperative control of shunt active filters for harmonic damping throughout a power distribution system,"This paper proposes cooperative control of multiple active filters based on voltage detection for harmonic damping throughout a power distribution system. The arrangement of a real distribution system would be changed according to system operation, and/or fault conditions. In addition, shunt capacitors and loads are individually connected to, or disconnected from, the distribution system. Independent control might make multiple active filters produce unbalanced compensating currents. This paper presents hardware and software implementations of cooperative control for two active filters. Experiment results verify the effectiveness of the cooperative control with the help of a communication system.",2003,0, 2182,Empirical evaluation of capacity estimation tools,"Bandwidth estimation is an important task because the knowledge of the bandwidth of a path is useful in a wide variety of contexts. Clients, applications and servers benefit greatly from knowing the bandwidth of a route. In this paper we describe and discuss experiments carried out to provide bandwidth estimates using five tools: bprobe, clink, nettimer pathrate, and pchar The tools are evaluated and compared according to the criteria accuracy, statistical robustness and time to estimation. We show several results for short and long network paths. 
The topologies tested include links whose capacities range from 2 Mb/s to 1 Gb/s.",2003,0, 2183,"Assessing attitude towards, knowledge of, and ability to apply, software development process","Software development is one of the most economically critical engineering activities. It is unsettling, therefore, that regularly published analyses reveal that the percentage of projects that fail, by coming in far over budget or far past schedule, or by being cancelled with significant financial loss, is considerably greater in software development than in any other branch of engineering. The reason is that successful software development requires expertise in both state of the art (software technology) and state of the practice (software development process). It is widely recognized that failure to follow best practice, rather than technological incompetence, is the cause of most failures. It is critically important, therefore, that (i) computer science departments be able to assess the quality of the software development process component of their curricula and (ii) that industry be able to assess the efficacy of SPI (software process improvement) efforts. While assessment instruments/tools exist for knowledge of software technology, none exist for attitude toward, knowledge of, or ability to use, software development process. We have developed instruments for measuring attitude and knowledge, and are working on an instrument to measure ability to use. The current version of ATSE, the instrument for measuring attitude toward software engineering, is the result of repeated administrations to both students and software development professionals, post-administration focus groups, rewrites, and statistical reliability analyses. In this paper we discuss the development of ATSE, results, both expected and unexpected, of recent administrations of ATSE to students and professionals, the various uses to which ATSE is currently being put and to which it could be put, and ATSE's continuing development and improvement.",2003,0, 2184,A frame-level measurement apparatus for performance testing of ATM equipment,"Performance testing of asynchronous transfer mode (ATM) equipment is dealt with here. The attention is principally paid to frame-level metrics, recently proposed by the ATM Forum because of their suitability to reflect user-perceived performance better than traditional cell-level metrics. Following the suggestions of the ATM Forum, more and more network engineers and production managers are interested today in these metrics, thus increasing the need for instruments and measurement solutions appropriate to their estimation. Trying to satisfy this exigency, a new VME extension for instrumentation (VXI) based measurement apparatus is proposed in the paper. The apparatus features suitable software, developed by the authors, which allows the evaluation of the aforementioned metrics by simply making use of common ATM analyzers; only two VXI line interfaces, capable of managing the physical and ATM layers, are, in fact, adopted. Some details concerning ATM technology and its hierarchical structure, as well as the main differences between frames, specific to the ATM adaptation layer, and cells, characterizing the underlying ATM layer, are first given. Both the hardware and software solutions of the measurement apparatus are then described in detail, paying particular attention to the measurement procedures implemented.
Finally, the performance of a new ATM device is assessed through the proposed apparatus.",2003,0,1798 2185,Code optimization for code compression,"With the emergence of software delivery platforms such as Microsoft's .NET, the reduced size of transmitted binaries has become a very important system parameter, strongly affecting system performance. We present two novel pre-processing steps for code compression that explore program binaries' syntax and semantics to achieve superior compression ratios. The first preprocessing step involves heuristic partitioning of a program binary into streams with high auto-correlation. The second preprocessing step uses code optimization via instruction rescheduling in order to improve prediction probabilities for a given compression engine. We have developed three heuristics for instruction rescheduling that explore tradeoffs of the solution quality versus algorithm run-time. The pre-processing steps are integrated with the generic paradigm of prediction by partial matching (PPM) which is the basis of our compression codec. The compression algorithm is implemented for x86 binaries and tested on several large Microsoft applications. Binaries compressed using our compression codec are 18-24% smaller than those compressed using the best available off-the-shelf compressor.",2003,0, 2186,Intelligent algorithms for QoS management in modern communication networks,"This paper discusses the application of fuzzy logic and neural network based algorithms to enhance and improve the quality of service (QoS) in modern communication networks. In the last decade, a rich variety of different networking technologies have been developed to meet the diverse and ever growing service demands for multimedia communications. Some inherently provide absolute and quantitative QoS guarantees while others provide relative and qualitative QoS. Essential measures for QoS provision and guarantee include buffer/queue control and management, bandwidth estimation and allocation, traffic flow and congestion control, etc. These are adopted across networking technologies in various forms and mechanisms. This paper intends to provide an overview of the benefits that intelligent neuro-fuzzy algorithms may bring to these commonly used QoS measures.",2003,0, 2187,A metric-based approach to enhance design quality through meta-pattern transformations,"During the evolution of object-oriented legacy systems, improving the design quality is most often a highly demanded objective. For such systems which have a large number of classes and are subject to frequent modifications, detection and correction of design defects is a complex task. The use of automatic detection and correction tools can be helpful for this task. Various research approaches have proposed transformations that improve the quality of an object-oriented system while preserving its behavior. This paper proposes a framework where a catalogue of object-oriented metrics can be used as indicators for automatically detecting situations where a particular transformation can be applied to improve the quality of an object-oriented legacy system.
The correction process is based on analyzing the impact of various meta-pattern transformations on these object-oriented metrics.",2003,0, 2188,Fast flow analysis to compute fuzzy estimates of risk levels,"In the context of software quality assessment, this paper proposes original flow analyses which propagate numerical estimates of blocking risks along an inter-procedural control flow graph (CFG) and which combine these estimates along the different CFG paths using fuzzy logic operations. Two specialized analyses can be further defined in terms of definite and possible flow analysis. The definite analysis computes the minimum blocking risk levels that statements may encounter on every path, while the possible analysis computes the highest blocking risk levels encountered by statements on at least one path. This paper presents original flow equations to compute the definite and possible blocking risk levels for statements in source code. The described fix-point algorithm has linear execution time and memory complexity and it is also fast in practice. The experimental context used to validate the presented approach is described and results are reported and discussed for eight publicly available systems written in C whose total size is about 300 KLOC. Results show that the analyses can be used to compute, identify, and compare definite and possible blocking risks in software systems. Furthermore, programs which are known to be synchronized, like ""samba"", show a relatively high level of blocking risks. On the other hand, the approach allows the identification of even low levels of blocking risks, such as those presented by programs like ""gawk"".",2003,0, 2189,A tamper-resistant framework for unambiguous detection of attacks in user space using process monitors,"Replication and redundancy techniques rely on the assumption that a majority of components are always safe and voting is used to resolve any ambiguities. This assumption may be unreasonable in the context of attacks and intrusions. An intruder could compromise any number of the available copies of a service resulting in a false sense of security. The kernel based approaches have proven to be quite effective but they cause performance impacts if any code changes are in the critical path. We provide an alternate user space mechanism consisting of process monitors by which such user space daemons can be unambiguously monitored without causing serious performance impacts. A framework that claims to provide such a feature must itself be tamper-resistant to attacks. We theoretically analyze and compare some relevant schemes and show their fallibility. We propose our own framework that is based on some simple principles of graph theory and well-founded concepts in topological fault tolerance, and show that it can not only unambiguously detect any such attacks on the services but is also very hard to subvert. We also present some preliminary results as a proof of concept.",2003,0, 2190,A simple system for detection of EEG artifacts in polysomnographic recordings,"We present an efficient parametric system for automatic detection of electroencephalogram (EEG) artifacts in polysomnographic recordings. For each of the selected types of artifacts, a relevant parameter was calculated for a given epoch. If any of these parameters exceeded a threshold, the epoch was marked as an artifact.
Performance of the system, evaluated on 18 overnight polysomnographic recordings, revealed concordance with decisions of human experts close to the interexpert agreement and the repeatability of expert's decisions, assessed via a double-blind test. Complete software (Matlab source code) for the presented system is freely available from the Internet at http://brain.fuw.edu.pl/artifacts.",2003,0, 2191,A pragmatic approach to managing APC FDC in high volume logic production,"At Infineon Technologies APC fault detection is now implemented in many process areas in its high volume fabs. With the APC Software ""APC-Trend"" process engineers and maintenance can detect and classify anomalies in machine and process parameters and supervise them on the basis of an automated alarming system. An overview of the current usage of APC FDC at Infineon is given.",2003,0, 2192,Advanced analysis of dynamic neural control advisories for process optimization and parts maintenance,"This paper details an advanced set of analyses designed to drive specific process variable setpoint adjustments or maintenance actions required for cost effective process control using the Dynamic Neural Controller (DNC) wafer-to-wafer advisories for semiconductor manufacturing advanced process control. The new analytic displays and metrics are illustrated using data obtained on a LAM 4520XL at STMicroelectronics as part of a SEMATECH SPIT beta test evaluation. The DNC represents a comprehensive modeling environment that uses as its input extensive process chamber information and history of the time since maintenance actions occurred. The DNC uses a neural network to predict multiple quality output metrics and a closed-loop risk-based optimization to maximize process quality performance while minimizing overall cost of tool operation and machine downtime. The software responds in an advisory mode on a wafer-to-wafer basis as to the optimal actions to be taken. In this paper, we present three specific instances of patterns arising during wafer processing over time that signal the process or equipment engineer to the need for corrective action: either a process setpoint adjustment or specific maintenance actions. Based on the controller's recommended corrective action set with the overall risk reduction predicted by such actions, a metric of corrective action ""urgency"" can be created. The tracking of this metric over time yields different pattern types that signify a quantified need for a specific type of corrective action. Three basic urgency patterns are found: 1. a pattern in a given maintenance action over time showing increasing urgency or ""risk reduction"" capability for the action; 2. a pattern in a process variable specific to a given recipe indicating a chronic request over time to only adjust the variable setpoint either above or below the current target; 3. a pattern in a process variable existing over all recipes processed through the chamber indicating chronic request to adjust the variable setpoint in either or both directions over time. This pattern is a pointer to the need for a maintenance action that is either corroborated by the urgency graph for that maintenance action, or if no such action has been previously taken, a guide to the source of the equipment malfunction.",2003,0, 2193,Design of Very Lightweight Agents for reactive embedded systems,"Large-scale real-time high-performance data acquisition computing systems often require to be fault-tolerant and adaptive to changes.
We consider a multi-agent system based approach to achieve these goals. This research is a part of ongoing research efforts to build a triggering and data acquisition system (known as BTeV) for particle-accelerator-based high energy physics experiments at Fermi National Laboratory. The envisioned hardware consists of pixel detectors and readout sensors embedded in the accelerator, which are connected to specialized FPGAs (field-programmable gate arrays). The FPGAs are connected to approximately 2,500 digital signal processors (DSPs). After initial filtering and processing of data by the DSPs, a farm of approximately 2,500 Linux computers are responsible for post-processing a large amount of high speed data input. To support the adaptive fault-tolerance feature, we introduce the notion of very lightweight agents (VLAs), which are designed to be adaptive but small in footprint and extremely efficient. Each digital signal processor will run a very lightweight agent along with a physics application program that collects and processes input data from the corresponding FPGA. Since VLAs can be proactive or reactive, Brooks' subsumption architecture is a good basis for the design. In this paper we present several necessary changes in the original subsumption architecture to better serve the BTeV architecture.",2003,0, 2194,A foundation for adaptive fault tolerance in software,"Software requirements often change during the operational lifetime of deployed systems. To accommodate requirements not conceived during design time, the system must be able to adapt its functionality and behavior. The paper examines a formal model for reconfigurable software processes that permits adaptive fault tolerance by adding or removing specific fault tolerance techniques during runtime. A distributed software-implemented fault tolerance (SIFT) environment for managing user applications has been implemented using ARMOR processes that conform to the formal model of reconfigurability. Because ARMOR processes are reconfigurable, they can tailor the fault tolerance services that they provide to themselves and to the user applications. We describe two fault tolerance techniques: microcheckpointing and assertion checking, that have been incorporated into ARMOR process via reconfigurations to the original ARMOR design. Experimental evaluations of the SIFT environment on a testbed cluster at the Jet Propulsion Laboratory demonstrate the effectiveness of these two fault tolerance techniques in limiting data error propagation among the ARMOR processes. These experiments validate the concept of using an underlying reconfigurable process architecture as the basis for implementing replaceable error detection and recovery services.",2003,0, 2195,Software-based erasure codes for scalable distributed storage,"This paper presents a new class of erasure codes, Lincoln Erasure codes (LEC), applicable to large-scale distributed storage that includes thousands of disks attached to multiple networks. A high-performance software implementation that demonstrates the capability to meet these anticipated requirements is described. A framework for evaluation of candidate codes was developed to support in-depth analysis. When compared with erasure codes based on the work of Reed-Solomon and Luby (2000), tests indicate LEC has a higher throughput for encoding and decoding and lower probability of failure across a range of test conditions. 
Strategies are described for integration with storage-related hardware and software.",2003,0, 2196,System health and intrusion monitoring (SHIM): project summary,"Computer systems and networks today are vulnerable to attacks. In addition to preventive strategies, intrusion detection has been used to further improve the security of computers and networks. Nevertheless, current intrusion detection and response system can detect only known attacks and provide primitive responses. The System Health and Intrusion Monitoring (SHIM) project aims at developing techniques to monitor and assess the health of a large distributed system. SHIM can accurately detect novel attacks and provide strategic information for further correlation, assessments, and response management.",2003,0, 2197,Statistical process control to improve coding and code review,"Software process comprises activities such as estimation, planning, requirements analysis, design, coding, reviews, and testing, undertaken when creating a software product. Effective software process management involves proactively managing each of these activities. Statistical process control tools enable proactive software process management. One such tool, the control chart, can be used for managing, controlling, and improving the code review process.",2003,0, 2198,Tests and tolerances for high-performance software-implemehted fault detection,"We describe and test a software approach to fault detection in common numerical algorithms. Such result checking or algorithm-based fault tolerance (ABFT) methods may be used, for example, to overcome single-event upsets in computational hardware or to detect errors in complex, high-efficiency implementations of the algorithms. Following earlier work, we use checksum methods to validate results returned by a numerical subroutine operating subject to unpredictable errors in data. We consider common matrix and Fourier algorithms which return results satisfying a necessary condition having a linear form; the checksum tests compliance with this condition. We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision floating-point calculations. We concentrate on comprehensively defining and evaluating tests having various accuracy/computational burden tradeoffs, and we emphasize average-case algorithm behavior rather than using worst-case upper, bounds on error.",2003,0, 2199,Auditory morphing based on an elastic perceptual distance metric in an interference-free time-frequency representation,"An elastic spectral distance measure based on a F0 adaptive pitch synchronous spectral estimation and selective elimination of periodicity interferences, that was developed for a high-quality speech modification procedure STRAIGHT [1], is introduced to provide a basis for auditory morphing. The proposed measure is implemented on a low dimensional piecewise bilinear time-frequency mapping between the target and the original speech representations. A preliminary test results of morphing emotional speech samples indicated that proposed procedure provides perceptually monotonic and high-quality interpolation and extrapolation of CD quality speech samples.",2003,0, 2200,Investigating the defect detection effectiveness and cost benefit of nominal inspection teams,"Inspection is an effective but also expensive quality assurance activity to find defects early during software development. 
The defect detection process, team size, and staff hours invested can have a considerable impact on the defect detection effectiveness and cost-benefit of an inspection. In this paper, we use empirical data and a probabilistic model to estimate this impact for nominal (noncommunicating) inspection teams in an experiment context. Further, the analysis investigates how cutting off the inspection after a certain time frame would influence inspection performance. Main findings of the investigation are: 1) Using combinations of different reading techniques in a team is considerably more effective than using the best single technique only (regardless of the observed level of effort). 2) For optimizing the inspection performance, determining the optimal process mix in a team is more important than adding an inspector (above a certain team size) in our model. 3) A high level of defect detection effectiveness is much more costly to achieve than a moderate level since the average cost for the defects found by the inspector last added to a team increases more than linearly with growing effort investment. The work provides an initial baseline of inspection performance with regard to process diversity and effort in inspection teams. We encourage further studies on the topic of time usage with defect detection techniques and its effect on inspection effectiveness in a variety of inspection contexts to support inspection planning with limited resources.",2003,0, 2201,Identifying comprehension bottlenecks using program slicing and cognitive complexity metrics,Achieving and maintaining high software quality is most dependent on how easily the software engineer least familiar with the system can understand the system's code. Understanding attributes of cognitive processes can lead to new software metrics that allow the prediction of human performance in software development and for assessing and improving the understandability of text and code. In this research we present novel metrics based on current understanding of short-term memory performance to predict the location of high frequencies of errors and to evaluate the quality of a software system. We further enhance these metrics by applying static and dynamic program slicing to provide programmers with additional guidance during software inspection and maintenance efforts.,2003,0, 2202,A lightweight middleware protocol for ad hoc distributed object computing in ubiquitous computing environments,"Devices in ubiquitous computing environments are usually embedded, wearable, and handheld, have resource constraints, and are all connected to each other through wireless connections and other computers possibly through fixed network infrastructures, such as the Internet. These devices may form numerous webs of short-range and often low-power mobile ad hoc networks to exchange information. Distributed object computing (DOC) middleware technologies have been successful in promoting high quality and reusable distributed software for enterprise-oriented environments. In order to reap the same benefit in ubiquitous computing environments, it is important to note that the natural interactions among distributed objects in ubiquitous computing environments are quite different due to various factors, such as bandwidth constraints, unpredictable device mobility, network topology change, and context-sensitivity (or situation-awareness) of application objects. 
Hence, the interactions among distributed objects tend to be more spontaneous and short-lived rather than predictable and long-term. In this paper, a middleware protocol, RKF, to facilitate distributed object-based application software to interact in an ad hoc fashion in ubiquitous computing environments is presented. RKF addresses both spontaneous object discovery and context-sensitive object data exchange. Our experimental results, based on RKF's implementation and evaluation inside the object request broker of our RCSM middleware test bed, indicate that it is lightweight, has good performance, and can be easily used in PDA-like devices.",2003,0, 2203,A performance oriented migration framework for the grid,"At least three factors in the existing migration frameworks make them less suitable in Grid systems especially when the goal is to improve the response times for individual applications. These factors are the separate policies for suspension and migration of executing applications employed by these migration frameworks, the use of pre-defined conditions for suspension and migration and the lack of knowledge of the remaining execution time of the applications. In this paper we describe a migration framework for performance oriented Grid systems that implements tightly coupled policies for both suspension and migration of executing applications and takes into account both system load and application characteristics. The main goal of our migration framework is to improve the response times for individual applications. We also present some results that demonstrate the usefulness of our migration framework.",2003,0, 2204,Performance evaluation of a perceptual ringing distortion metric for digital video,"This paper evaluates a perceptual impairment measure for ringing artifacts, which are common in hybrid MC/DPCM/DCT coded video, as a predictor of the mean opinion score (MOS) obtained in the standard subjective assessment. The perceptual ringing artifacts measure is based on a vision model and a ringing distortion region segmentation algorithm, which is converted into a new perceptual ringing distortion metric (PRDM) on a scale of 0 to 5. This scale corresponds to a modified double-stimulus impairment scale variant II (DSIS-II) method. The Pearson correlation, the Spearman rank order correlation and the average absolute error are used to evaluate the performance of the PRDM compared with the subjective test data. The results show a strong correlation between the PRDM and the MOS with respect to ringing artifacts.",2003,0,2241 2205,High performance generalized cylinders visualization,"This paper studies the problem of real-time rendering of highly deformable generalized cylinders. Some efficient schemes for high axis curvature detection are presented, as well as an incremental non-uniform sampling process. We also show how the recent 3D card ""skinning"" feature, classical in character animation, can be derived in order to allow for very high frame-rate when rendering such generalized cylinders. Finally, an algorithm is presented, that permits the object to dynamically adapt its display process, for guaranteed frame-rate purposes. 
This algorithm dynamically modifies the different sampling parameters in order to achieve optimal quality visualization for a given pre-imposed frame-rate.",2003,0, 2206,"Gate oxide degradation due to plasma damage related charging while ILD cap oxide deposition - detection, localization and resolution","The article reports on the flow of detection, localization and resolution of a plasma damage related problem in a logic chip production line. The problem was observed on standard 0.25 μm logic technology. The introduction and optimization of a voltage breakdown (VBD) test in ILT (in line test) routines led to the detection of an insufficient gate oxide quality. Using data-mining application software and taking into consideration the structure of the test routine, the root cause for the degradation of the gate-oxide was found to be ILD (inter-layer-dielectric) cap oxide deposition. A matrix design of experiment was used to optimize the plasma deposition process in order to minimize charging effects by paying attention to wafer uniformity and reproducibility. It is shown that the principal detractor for the quality of gate oxide was eliminated by introducing the new ILD cap oxide process.",2003,0, 2207,Fragment class analysis for testing of polymorphism in Java software,"Adequate testing of polymorphism in object-oriented software requires coverage of all possible bindings of receiver classes and target methods at call sites. Tools that measure this coverage need to use class analysis to compute the coverage requirements. However, traditional whole-program class analysis cannot be used when testing partial programs. To solve this problem, we present a general approach for adapting whole-program class analyses to operate on program fragments. Furthermore, since analysis precision is critical for coverage tools, we provide precision measurements for several analyses by determining which of the computed coverage requirements are actually feasible. Our work enables the use of whole-program class analyses for testing of polymorphism in partial programs, and identifies analyses that compute precise coverage requirements and therefore are good candidates for use in coverage tools.",2003,0,2458 2208,An analysis of the fault correction process in a large-scale SDL production model,"Improvements in the software development process depend on our ability to collect and analyze data drawn from various phases of the development life cycle. Our design metrics research team was presented with a large-scale SDL production model plus the accompanying problem reports that began in the requirements phase of development. The goal of this research was to identify and measure the occurrences of faults and the efficiency of their removal by development phase in order to target software development process improvement strategies. The number and severity of problem reports were tracked by development phase and fault class. The efficiency of the fault removal process using a variety of detection methods was measured. Through our analysis of the system data, the study confirms that catching faults in the phase of origin is an important goal. The faults that migrated to future phases are on average eight times more costly to repair.
The study also confirms that upstream faults are the most critical faults and, more importantly, it identifies detailed design as the major contributor of faults, including critical faults.",2003,0, 2209,"The impact of pair programming on student performance, perception and persistence","This study examined the effectiveness of pair programming in four lecture sections of a large introductory programming course. We were particularly interested in assessing how the use of pair programming affects student performance and decisions to pursue computer science related majors. We found that students who used pair programming produced better programs, were more confident in their solutions, and enjoyed completing the assignments more than students who programmed alone. Moreover, pairing students were significantly more likely than non-pairing students to complete the course, and consequently to pass it. Among those who completed the course, pairers performed as well on the final exam as non-pairers, were significantly more likely to be registered as computer science related majors one year later, and to have taken subsequent programming courses. Our findings suggest that not only does pairing not compromise students' learning, but that it may enhance the quality of their programs and encourage them to pursue computer science degrees.",2003,0, 2210,Beyond the Personal Software Process: Metrics collection and analysis for the differently disciplined,"Pedagogies such as the Personal Software Process (PSP) shift metrics definition, collection, and analysis from the organizational level to the individual level. While case study research indicates that the PSP can provide software engineering students with empirical support for improving estimation and quality assurance, there is little evidence that many students continue to use the PSP when no longer required to do so. Our research suggests that this ""PSP adoption problem"" may be due to two problems: the high overhead of PSP-style metrics collection and analysis, and the requirement that PSP users ""context switch"" between product development and process recording. This paper overviews our initial PSP experiences, our first attempt to solve the PSP adoption problem with the LEAP system, and our current approach called Hackystat. This approach fully automates both data collection and analysis, which eliminates overhead and context switching. However, Hackystat changes the kind of metrics data that is collected, and introduces new privacy-related adoption issues of its own.",2003,0, 2211,Architectural level risk assessment tool based on UML specifications,"Recent evidence indicates that most faults in software systems are found in only a few of a system's components [1]. The early identification of these components allows an organization to focus defect detection activities on high risk components, for example by optimally allocating testing resources [2], or redesigning components that are likely to cause field failures. This paper presents a prototype tool called Architecture-level Risk Assessment Tool (ARAT) based on the risk assessment methodology presented in [3]. The ARAT provides risk assessment based on measures obtained from Unified Modeling Language (UML) artifacts [4]. This tool can be used in the design phase of the software development process.
It estimates dynamic metrics [5] and automatically analyzes the quality of the architecture to produce architectural-level software risk assessment [3].",2003,0, 2212,Feedback control with queueing-theoretic prediction for relative delay guarantees in web servers,"The use of feedback control theory for performance guarantees in QoS-aware systems has gained much attention in recent years. In this paper, we investigate merging, within a single framework, the predictive power of queueing theory with the reactive power of feedback control to produce software systems with a superior ability to achieve QoS specifications in highly unpredictable environments. The approach is applied to the problem of achieving relative delay guarantees in high-performance servers. Experimental evaluation of this approach on an Apache web server shows that the combined schemes perform significantly better in terms of keeping the relative delay on target compared to feedback control or queueing prediction alone.",2003,0, 2213,Partial lookup services,"Lookup services are used in many Internet applications to translate a key (e.g., a file name) into an associated set of entries (e.g., the location of file copies). The key lookups can often be satisfied by returning just a few entries instead of the entire set. However, current implementations of lookup services do not take advantage of this usage pattern. In this paper, we formalize the notion of a partial lookup service that explicitly supports returning a subset of the entries per lookup. We present four schemes for building a partial lookup service, and propose various metrics for evaluating the schemes. We show that a partial lookup service may have significant advantages over conventional ones in terms of space usage, fairness, fault tolerance, and other factors.",2003,0, 2214,Voltage flicker calculations for single-phase AC railroad electrification systems,"Rapid load variations can cause abrupt changes in the utility voltage, so-called voltage flicker. The voltage flicker may result in light flickering and, in extreme cases, damage to electronic equipment. Electrified railroads are just one example where such rapid load variation occurs as trains accelerate, decelerate, and encounter and leave grades. For balanced loads, the voltage flicker is easily determined using per-phase analysis. AC electrification system substations operating at a commercial frequency, however, are supplied from only two phases of utility three-phase transmission system. In order to calculate the voltage flicker for such an unbalanced system, symmetrical component method needs to be used. In this paper, a procedure is developed for evaluating the effects of short-time traction load variation onto utility system. Applying the symmetrical component method, voltage flicker equations are developed for loads connected to A-B, B-C, and C-A phases of a three-phase utility system. Using a specially-developed software simulating the train and electrification system performance, loads at the traction power substation transformers are calculated in one-second intervals. Subsequently, voltages at the utility busbars are calculated for each interval, and the voltage variation from interval to interval is expressed in percent. The calculated voltage flicker is then compared to the utility accepted limits. 
Based on this comparison, the capability of the utility power system to support the traction loads can be assessed and the suitability of the proposed line taps for the traction power substations evaluated.",2003,0, 2215,Automatic document metadata extraction using support vector machines,"Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a support vector machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [S. Lawrence et al., (1999)] and EbizSearch [Y. Petinot et al., (2003)]. We believe it can be generalized to other digital libraries.",2003,0, 2216,Hardware architecture design for variable block size motion estimation in MPEG-4 AVC/JVT/ITU-T H.264,"Variable block size motion estimation is adopted in the new video coding standard, MPEG-4 AVC/JVT/ITU-T H.264, due to its superior performance compared to the advanced prediction mode in MPEG-4 and H.263+. In this paper, we modified the reference software in a hardware-friendly way. Our main idea is to convert the sequential processing of each 8×8 sub-partition of a macro-block into parallel processing without sacrifice of video quality. Based on our algorithm, we proposed a new hardware architecture for variable block size motion estimation with full search at integer-pixel accuracy. The features of our design are 2-D processing element array with 1-D data broadcasting and 1-D partial result reuse, parallel adder tree, memory interleaving scheme, and high utilization. Simulation shows that our chip can achieve real-time applications under the operating frequency of 64.11 MHz for 720×480 frame at 30 Hz with search range of [-24, +23] in horizontal direction and [-16, +15] in vertical direction, which requires the computation power of more than 50 GOPS.",2003,0, 2217,Considering fault removal efficiency in software reliability assessment,"Software reliability growth models (SRGMs) have been developed to estimate software reliability measures such as the number of remaining faults, software failure rate, and software reliability. Issues such as imperfect debugging and the learning phenomenon of developers have been considered in these models. However, most SRGMs assume that faults detected during tests will eventually be removed. Consideration of fault removal efficiency in the existing models is limited. In practice, fault removal efficiency is usually imperfect. This paper aims to incorporate fault removal efficiency into software reliability assessment. 
Fault removal efficiency is a useful metric in software development practice and it helps developers to evaluate the debugging effectiveness and estimate the additional workload. In this paper, imperfect debugging is considered in the sense that new faults can be introduced into the software during debugging and the detected faults may not be removed completely. A model is proposed to integrate fault removal efficiency, failure rate, and fault introduction rate into software reliability assessment. In addition to traditional reliability measures, the proposed model can provide some useful metrics to help the development team make better decisions. Software testing data collected from real applications are utilized to illustrate the proposed model for both the descriptive and predictive power. The expected number of residual faults and software failure rate are also presented.",2003,0, 2218,A 2-level call admission control scheme using priority queue for decreasing new call blocking & handoff call dropping,"In order to provide a fast moving mobile host (MH) supporting multimedia applications with a consistent quality of service (QoS), an efficient call admission mechanism is in need. This paper proposes the 2-level call admission (2LCAC) scheme based on a call admission scheme using the priority to guarantee the consistent QoS for mobile multimedia applications. The 2LCAC consists of the basic call admission and advanced call admission; the former determines call admission based on bandwidth available in each cell and the latter determines call admission by utilizing delay tolerance time (DTT) and priority queue (PQueue) algorithms. In order to evaluate the performance of our scheme, we measure the metrics such as the blocking probability of new calls, dropping probability of handoff calls and bandwidth utilization. The result shows that the performance of our scheme is superior to that of existing schemes such as complete sharing policy (CSP), guard channel policy (GCP) and adaptive guard channel policy (AGCP).",2003,0, 2219,Model-based fault-adaptive control of complex dynamic systems,"
",2003,0, 2220,Asymptotic insensitivity of least-recently-used caching to statistical dependency,"We investigate a widely popular least-recently-used (LRU) cache replacement algorithm with semi-Markov modulated requests. Semi-Markov processes provide the flexibility for modeling strong statistical correlation, including the broadly reported long-range dependence in the World Wide Web page request patterns. When the frequency of requesting a page n is equal to the generalized Zipf's law c/n^α, α > 1, our main result shows that the cache fault probability is asymptotically, for large cache sizes, the same as in the corresponding LRU system with i.i.d. requests. This appears to be the first explicit average case analysis of LRU caching with statistically dependent request sequences. The surprising insensitivity of LRU caching performance demonstrates its robustness to changes in document popularity. Furthermore, we show that the derived asymptotic result and simulation experiments are in excellent agreement, even for relatively small cache sizes. The potential of using our results in predicting the behavior of Web caches is tested using actual, strongly correlated, proxy server access traces.",2003,0, 2221,Quality-based auto-tuning of cell uplink load level targets in WCDMA,"The objective of this paper was to validate the feasibility of auto-tuning WCDMA cell uplink load level targets based on quality of service. The uplink cell load level was measured with received wideband total power. The quality indicators used were call blocking probability, packet queuing probability and degraded block error ratio probability. The objective was to improve performance and operability of the network with control software aiming for a specific quality of service. The load level targets in each cell were regularly adjusted with a control method in order to improve performance. The approach was validated using a dynamic WCDMA system simulator. The conducted simulations support the assumption that the uplink performance can be managed and improved by the proposed cell-based automated optimization.",2003,0, 2222,On end-to-end bandwidth analysis and measurement,"Many applications are interested in the end-to-end bandwidth of a path. Several tools have been implemented focusing on path capacity, but available bandwidth estimation is still an open problem to be investigated. We propose an end-to-end bandwidth measurement methodology with active probing based on packet pair technology. Our method can measure the path capacity on the condition that the competing traffic is light, and the available bandwidth is obtained if the competing traffic is heavy. We also present a novel algorithm for filtering the measurement data.",2003,0, 2223,Model checking for probability and time: from theory to practice,"Probability features increasingly often in software and hardware systems: it is used in distributed coordination and routing problems, to model fault-tolerance and performance, and to provide adaptive resource management strategies. Probabilistic model checking is an automatic procedure for establishing if a desired property holds in probabilistic specifications such as ""leader election is eventually resolved with probability 1"", ""the chance of shutdown occurring is at most 0.01%"", and ""the probability that a message will be delivered within 30ms is at least 0.75"". A probabilistic model checker calculates the probability of a given temporal logic property being satisfied, as opposed to validity.
In contrast to conventional model checkers, which rely on reachability analysis of the underlying transition system graph, probabilistic model checking additionally involves numerical solutions of linear equations and linear programming problems. This paper reports our experience with implementing PRISM (www.cs.bham.ac.uk/∼dxp/prism), a probabilistic symbolic model checker, demonstrates its usefulness in analyzing real-world probabilistic protocols, and outlines future challenges for this research direction.",2003,0, 2224,Accurate modeling and simulation of SAW RF filters,"The popularity of wireless services and the increasing demand for higher quality, new services, and the need for higher data rates will boost the cellular terminal market. Today, third generation (3G) systems exist in many metropolitan areas. In addition, wireless LAN systems, such as Bluetooth or IEEE 802.11-based systems, are emerging. The key components in the microwave section of the mobile terminals of these systems incorporate - apart from active radio frequency integrated circuits (RFICs) and RF modules - a multitude of passive components. The most unique passive components used in the microwave section are surface acoustic wave (SAW) filters. Due to the progress of integration in the active part of the systems, the component count in modern terminals is decreasing. On the other hand, the average number of SAW RF filters per cellular phone is increasing due to multi-band terminals. As a consequence, the passive components outnumber the RFICs by far in today's systems. The market is demanding smaller and smaller terminals and, thus, the size of all components has to be reduced. Further reduction of component count and required PCB area is obtained by integration of passive components and SAW devices using low-temperature co-fired ceramic (LTCC). The trend of reducing the size dramatically while keeping or even improving the performance of the RF filters requires accurate software tools for the simulation of all relevant effects and interactions. In the past it was sufficient to predict the acoustic behavior on the SAW chip, but higher operating frequencies, up to 2.5 GHz, and stringent specifications up to 6 GHz demand that electromagnetic (EM) effects be accounted for, too. The combination of accurate acoustic simulation tools together with 2.5/3D EM simulation software packages allows the performance of SAW filters and SAW-based front-end modules to be predicted and optimized.",2003,0, 2225,Availability requirement for a fault-management server in high-availability communication systems,"This paper investigates the availability requirement for the fault management server in high-availability communication systems. This study shows that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability, as long as the fail-safe ratio (the probability that the failure of the fault management server does not bring down the system) and the fault coverage ratio (probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio, and the fault coverage ratio to optimize system availability.
A cost-effective design for the fault management server is proposed.",2003,0, 2226,A fast method of characterizing HTSC bulk material,"HTSC bulk material which has been developed during the last ten years is now prepared in increasing quantities and used in demonstrators or prototypes of several applications, e.g., flywheels and motors. The quality control of large amounts of samples requires a method which is fast and provides essential information about the sample quality. In particular, the critical current density and its inhomogeneity should be determined. In this contribution we report about a trapped field measurement technique using pulsed field magnetization and field detection by a Hall-array. With this arrangement the required time of measurement is in the order of a few minutes per sample. The software which controls the devices contains also an analyzing procedure for the calculation of an effective critical current density as well as parameters describing the homogeneity of the samples.",2003,0, 2227,Modeling and measurements of novel high k monolithic transformers,"This paper presents modeling and measurements of a novel monolithic transformer with high coupling k and quality factor Q characteristics. The present transformer utilizes a Z-shaped multilayer metalization to increase k without sacrificing Q. The new transformer has been fabricated using Motorola 0.18 micron copper process. A simple 2-port lumped circuit model is used to model the new design. Experimental data shows a good agreement with predicted data obtained from an HFSS software simulator. An increase of about 10% in mutual coupling and 15% in Q has been achieved. For a modest increase in k of about 5%, Q can be increased by up to 20%.",2003,0, 2228,Concurrent fault detection in a hardware implementation of the RC5 encryption algorithm,"Recent research has shown that fault diagnosis and possibly fault tolerance are important features when implementing cryptographic algorithms by means of hardware devices. In fact, some security attack procedures are based on the injection of faults. At the same time, hardware implementations of cryptographic algorithms, i.e. crypto-processors, are becoming widespread. There is however, only very limited research on implementing fault diagnosis and tolerance in crypto-algorithms. Fault diagnosis is studied for the RC5 crypto-algorithm, a recently proposed block-cipher algorithm that is suited for both software and hardware implementations. RC5 is based on a mix of arithmetic and logic operations, and is therefore a challenge for fault diagnosis. We study fault propagation in RC5, and propose and evaluate the cost/performance tradeoffs of several error detecting codes for RC5. Costs are estimated in terms of hardware overhead, and performances in terms of fault coverage. Our most important conclusion is that, despite its nonuniform nature, RC5 can be efficiently protected by using low-cost error detecting codes.",2003,0, 2229,The efficient bus arbitration scheme in SoC environment,"This paper presents the dynamic bus arbiter architecture for a system on chip design. The conventional bus-distribution algorithms, such as the static fixed priority and the round robin, show several defects that are bus starvation, and low system performance because of bus distribution latency in a bus cycle time. The proposed dynamic bus architecture is based on a probability bus distribution algorithm and uses an adaptive ticket value method to solve the impartiality and starvation problems. 
The simulation results show that the proposed algorithm reduces the buffer size of a master by 11% and decreases the bus latency of a master by 50%.",2003,0, 2230,Transparent distributed threads for Java,"Remote method invocation in Java RMI allows the flow of control to pass across local Java threads and thereby span multiple virtual machines. However, the resulting distributed threads do not strictly follow the paradigm of their local Java counterparts for at least three reasons. Firstly, the absence of a global thread identity causes problems when reentering monitors. Secondly, blocks synchronized on remote objects do not work properly. Thirdly, the thread interruption mechanism for threads executing a remote call is broken. These problems make multi-threaded distributed programming complicated and error prone. We present a two-level solution: On the library level, we extend KaRMI (Philippsen et al. (2000)), a fast replacement for RMI, with global thread identities for eliminating problems with monitor reentry. Problems with synchronization on remote objects are solved with a facility for remote monitor acquisition. Our interrupt forwarding mechanism enables the application to get full control over its distributed threads. On the language level, we integrate these extensions with JavaParty's transparent remote objects (Philippsen et al. (1997)) to get transparent distributed threads. We finally evaluate our approach with benchmarks that show costs and benefits of our overall design.",2003,0, 2231,"Comments on ""The confounding effect of class size on the validity of object-oriented metrics""","It has been proposed by El Emam et al. (ibid. vol.27 (7), 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics. We take issue with this perspective since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs.",2003,0, 2232,Low-cost on-line fault detection using control flow assertions,"A control flow fault occurs when a processor fetches and executes an incorrect next instruction. Executable assertions, i.e., special instructions that check some invariant properties of a program, provide a powerful and low-cost method for on-line detection of hardware-induced control flow faults. We propose a technique called ACFC (Assertions for Control Flow Checking) that assigns an execution parity to a basic block, and uses the parity bit to detect faults. Using a graph model of a program, we classify control flow faults into skip, re-execute and multi-path faults. We derive some necessary conditions for these faults to manifest themselves as execution parity errors. To force a control flow fault to excite a parity error, the target program is instrumented with additional instructions. Special assertions are inserted to detect such parity errors. We have developed a preprocessor that takes a C program as input and inserts ACFC assertions automatically. We have implemented a software-based fault injection tool, SFIG, which takes advantage of the GNU debugger.
Fault injection experiments show that ACFC incurs less performance overhead (around 47%) and memory overhead (around 30%) than previous techniques, with no significant loss in fault coverage.",2003,0, 2233,"Low-cost, on-line software-based self-testing of embedded processor cores","A comprehensive online test strategy requires both concurrent and non-concurrent fault detection capabilities to guarantee SoCs's successful normal operation in-field at any level of its life cycle. While concurrent fault detection is mainly achieved by hardware or software redundancy, like duplication, non-concurrent fault detection, particularly useful for periodic testing, is usually achieved through hardware BIST. Software-based self-test has been recently proposed as an effective alternative to hardware-based self-test allowing at-speed testing while eliminating area, performance and power consumption overheads. In this paper we focus on the applicability of software-based self-test to non-concurrent on-line testing of embedded processor cores. Low-cost in-field testing requirements, particularly small test execution time and low power consumption guide the development of self-test routines. We show how self-test programs with a limited number of memory references and based on compact test routines provide an efficient low-cost on-line test strategy.",2003,0, 2234,Uncertainty in the output of artificial neural networks,"Analysis of the performance of artificial neural networks (ANNs) is usually based on aggregate results on a population of cases. In this paper, we analyze ANN output corresponding to the individual case. We show variability in the outputs of multiple ANNs that are trained and ""optimized"" from a common set of training cases. We predict this variability from a theoretical standpoint on the basis that multiple ANNs can be optimized to achieve similar overall performance on a population of cases, but produce different outputs for the same individual case because the ANNs use different weights. We use simulations to show that the average standard deviation in the ANN output can be two orders of magnitude higher than the standard deviation in the ANN overall performance measured by the Az value. We further show this variability using an example in mammography where the ANNs are used to classify clustered microcalcifications as malignant or benign based on image features extracted from mammograms. This variability in the ANN output is generally not recognized because a trained individual ANN becomes a deterministic model. Recognition of this variability and the deterministic view of the ANN present a fundamental contradiction. The implication of this variability to the classification task warrants additional study.",2003,0, 2235,Detection of errors in case hardening processes brought on by cooling lubricant residues,"Life cycle of case hardened steel work pieces depends on the quality of hardening. A large influencing factor on the quality of hardening is the cleanliness of the work pieces. In manufacturing a large amount of auxiliary materials such as cooling lubricants and drawing compounds are used to ensure correct execution of cutting and forming processes. Especially the residues of cooling lubricants are carried into following processes on the surfaces of the machined parts. 
Stable and controlled conditions cannot be guaranteed for these subsequent processes as the residues' influence on the process performance is insufficiently known, leading to high uncertainty and consequently a high expense factor. Therefore, information is needed about the type and amount of contamination. In practice, the influence of these cooling lubricants on case hardening steels is a well-known phenomenon, but the correlation between the residue volume and the resulting hardness is not known. A short overview of the techniques to detect cooling lubricant residues will be given in this paper and a method to detect the influence of the residues on the hardening process of case hardening steels will be shown. An example will be given for case hardening steel 16MnCr5 (1.7131). The medium of contamination is ARAL SAROL 470 EP.",2003,0, 2236,Combined wavelet domain and temporal video denoising,"We develop a new filter which combines spatially adaptive noise filtering in the wavelet domain and temporal filtering in the signal domain. For spatial filtering, we propose a new wavelet shrinkage method, which estimates how probable it is that a wavelet coefficient represents a ""signal of interest"" given its value, given the locally averaged coefficient magnitude and given the global subband statistics. The temporal filter combines a motion detector and recursive time-averaging. The results show that this combination outperforms single resolution spatio-temporal filters in terms of quantitative performance measures as well as in terms of visual quality. Even though our current implementation of the new filter does not allow real-time processing, we believe that its optimized software implementation could be used for real- or near real-time filtering.",2003,0, 2237,A novel and efficient control strategy for three-phase boost rectifiers,"To date, three-phase boost rectifiers, due to their high efficiency, good current quality and low EMI emissions, are widely used in industry as power factor correction (PFC) converters. Performance criteria of these converters significantly improve with increasing switching frequency, and highly depend on the control strategy used. This paper introduces a simple and fast control strategy for three-phase boost rectifiers. The proposed method maintains the advantages of the other schemes, including high efficiency, fast dynamic response, unity power factor, sinusoidal input currents and regulated output voltage. In particular, by the introduction of a modified vector classification algorithm, the nonlinear function generation as well as the abc-dq and dq-αβ transformations carried out in conventional SVM-based strategies are avoided. Therefore, there is no need for current regulators in the d-q frame, and associated control difficulties are removed. Simulation results from the PSCAD/EMTDC software program confirm the validity of the analytical work.",2003,0, 2238,Fault analysis of current-controlled PWM-inverter fed induction-motor drives,"In this paper, the fault-tolerance capability of IM-drive is studied. The discussion on the fault-tolerance of IM drives in the literature has mostly been on the conceptual level without any detailed analysis. Most studies are conducted only experimentally. This paper provides an analytical tool to quickly analyze and predict the performance under fault conditions. Also, most of the presented results were machine specific and not general enough to be applicable as an evaluation tool.
So, this paper will present a generalized method for predicting the post-fault performance of IM-drives after identifying the various faults that can occur. The fault analysis for IM in the motoring mode will be presented in this paper. The paper includes an analysis for different classifications of drive faults. The faults in an IM-drive that will be studied can be broadly classified as machine faults (i.e., one of the stator windings is open or short, multiple phases open or short, bearing faults, and a broken rotor bar) and inverter-converter faults (i.e., phase switch open or short, multiple phase fault, and DC-link voltage drop). Briefly, a general-purpose software package for a variety of IM-drive faults is introduced. This package is very important in IM-fault diagnosis and detection using artificial intelligence techniques, wavelets and signal processing.",2003,0, 2239,Cost driven six sigma analysis for CE software development,"Increased software demands from consumers lead CE (consumer electronics) companies towards a shift from a hardware-oriented to a more software-oriented industry. Many CE companies spend a tremendous amount of money on software development and post release defect fixing. We propose a modification of the traditional six sigma approach to fully analyze and measure software development cost.",2003,0, 2240,Wire length prediction based clustering and its application in placement,"In this paper, we introduce a metric to evaluate the proximity of connected elements in a netlist. Compared to connectivity by S. Hauck and G. Borriello (1997) and edge separability by J. Cong and S.K. Lim (2000), our metric is capable of predicting short connections more accurately. We show that the proposed metric can also predict relative wire length in multipin nets. We develop a fine-granularity clustering algorithm based on the new metric and embed it into the Fast Placer Implementation (FPI) framework by B. Hu and M. Marek-Sadowska (2003). Experimental results show that the new clustering algorithm produces better global placement results than the net absorption algorithm of Hu and M. Marek-Sadowska (2003), the connectivity-based algorithm of S. Hauck and G. Borriello (1997), and the edge-separability-based algorithm of J. Cong and S.K. Lim (2000). With the new clustering algorithm, FPI achieves up to 50% speedup compared to the latest version of Capo8.5 in http://vlsicad.ucsd.edu/Resources/SoftwareLinks/PDtools/, without placement quality losses.",2003,0, 2241,Performance evaluation of a perceptual ringing distortion metric for digital video,"This paper evaluates a perceptual impairment measure for ringing artifacts, which are common in hybrid MC/DPCM/DCT coded video, as a predictor of the mean opinion score (MOS) obtained in the standard subjective assessment. The perceptual ringing artifacts measure is based on a vision model and a ringing distortion region segmentation algorithm, which is converted into a new perceptual ringing distortion metric (PRDM) on a scale of 0 to 5. This scale corresponds to a modified double-stimulus impairment scale variant II (DSIS-II) method. The Pearson correlation, the Spearman rank order correlation and the average absolute error are used to evaluate the performance of the PRDM compared with the subjective test data.
The results show a strong correlation between the PRDM and the MOS with respect to ringing artifacts.",2003,0, 2242,An exploratory study of software review in practice,"The aim of this paper is to identify the key software review inputs that significantly affect review performance in practice. Five in-depth semi-structured interviews were conducted with different IT organizations. From the interviews' results, the typical issues for conducting software review include (1) selecting right reviewers to perform a defect detection task, (2) the limitation of time and resources for organizing and conducting software review (3) no standard and specific guideline to measure an effective review for different types of software artifacts (i.e. requirement, design, code and test cases). Thus the result shows that the experience (i.e. knowledge and skills) of reviewers is the most significant input influencing software review performance.",2003,0, 2243,Model-based segmentation and tracking of head-and-shoulder video objects for real time multimedia services,"A statistical model-based video segmentation algorithm is presented for head-and-shoulder type video. This algorithm uses domain knowledge by abstracting the head-and-shoulder object with a blob-based statistical region model and a shape model. The object segmentation problem is then converted into a model detection and tracking problem. At the system level, a hierarchical structure is designed and spatial and temporal filters are used to improve segmentation quality. This algorithm runs in real time over a QCIF size video, and segments it into background, head and shoulder three video objects on average Pentium PC platforms. Due to its real time feature, this algorithm is appropriate for real time multimedia services such as videophone and web chat. Simulation results are offered to compare MPEG-4 performance with H.263 on segmented video objects with respects to compression efficiency, bit rate adaptation and functionality.",2003,0, 2244,Nonideal battery properties and their impact on software design for wearable computers,"This paper describes nonideal properties of batteries and how these properties may impact power-performance trade offs in wearable computing. The first part of the paper details the characteristics of an ideal battery and how these characteristics are used in sizing batteries and estimating discharge times. Typical nonideal characteristics and the regions of operation where they occur are described. The paper then presents results from a first-principles, variable-load battery model, showing likely areas for exploiting battery behavior in mobile computing. The major result is that, when battery behavior is nonideal, lowering the average power or the energy per operation may not increase the amount of computation that can be completed in a battery life.",2003,0, 2245,The development and evaluation of three diverse techniques for object-oriented code inspection,"We describe the development and evaluation of a rigorous approach aimed at the effective and efficient inspection of object-oriented (OO) code. Since the time that inspections were developed they have been shown to be powerful defect detection strategies. However, little research has been done to investigate their application to OO systems, which have very different structural and execution models compared to procedural systems. This suggests that inspection techniques may not be currently being deployed to their best effect in the context of large-scale OO systems. 
Work to date has revealed three significant issues that need to be addressed - the identification of chunks of code to be inspected, the order in which the code is read, and the resolution of frequent nonlocal references. Three techniques are developed with the aim of addressing these issues: one based on a checklist, one focused on constructing abstract specifications, and the last centered on the route that a use case takes through a system. The three approaches are evaluated empirically and, in this instance, it is suggested that the checklist is the most effective approach, but that the other techniques also have potential strengths. For the best results in a practical situation, a combination of techniques is recommended, one of which should focus specifically on the characteristics of OO.",2003,0, 2246,A cognitive complexity metric based on category learning,"Software development is driven by software comprehension. Controlling a software development process is dependent on controlling software comprehension. Measures of factors that influence software comprehension are required in order to achieve control. The use of high-level languages results in many different kinds of lines of code that require different levels of comprehension effort. As the reader learns the set of arrangements of operators, attributes and labels particular to an application, comprehension is eased as familiar arrangements are repeated. Elements of cognition that describe the mechanics of comprehension serve as a guide to assessing comprehension demands in the understanding of programs written in high level languages. A new metric, kinds of lines of code identifier density is introduced and a case study demonstrates its application and importance. Related work is discussed.",2003,0, 2247,Discriminatory software metric selection via a grid of interconnected multilayer perceptrons,"Software metrics quantify source code characteristics for the purpose of software quality analysis. An initial approach to the difficulty in mapping source code to quality rankings is to multiply the number of features collected. However, as the number of metrics used for analysis increases, rules of thumb for robust classification are violated, ultimately reducing confidence in the quality assessment. Thus, a metric selection method is necessary. This paper examines the ability of a grid of interconnected multilayer perceptrons to select an appropriate subset of software metrics. Local interconnections between the multilayer perceptrons, in the form of feature evolution heuristics, allow publication of discriminatory features. The combination of competitive publication of discriminatory features with a limited number of inputs leads to classifiers that conform to robust classifier design rules. This paper examines the determination of discriminatory feature subsets by a grid of multilayer perceptrons in relation to a gold standard provided by a software architect.",2003,0, 2248,Identifying effective software metrics using genetic algorithms,"Various software metrics may be used to quantify object-oriented source code characteristics in order to assess the quality of the software. This type of software quality assessment may be viewed as a problem of classification: given a set of objects with known features (software metrics) and group labels (quality rankings), design a classifier that can predict the quality rankings of new objects using only the software metrics. 
We have obtained a variety of software measures for a Java application used for biomedical data analysis. A system architect has ranked the quality of the objects as low, medium-low, medium or high with respect to maintainability. A commercial program was used to parse the source code identifying 16 metrics. A genetic algorithm (GA) was implemented to determine which subset of the various software metrics gave the best match to the quality ranking specified by the expert. By selecting the optimum metrics for determining object quality, GA-based feature selection offers an insight into which software characteristics developers should try to optimize.",2003,0, 2249,Evaluating four white-box test coverage methodologies,"This paper presents an illustrative study aimed at evaluating the effectiveness of four white-box test coverage techniques for software programs. In the study, an experimental design was considered which was used to evaluate the chosen testing techniques. The evaluation criteria were determined both in terms of the ability to detect faults and the number of test cases required. Faults were seeded artificially into the program and several faulty-versions of programs (mutants) were generated taking help of mutation operators. Test case execution and coverage measurements were done with the help of two testing tools, Cantata and OCT. Separate regression models relating coverage and effectiveness (fault detection ability and number of test cases required) are developed. These models can be helpful for determining test effectiveness when the coverage levels are known in a problem domain.",2003,0, 2250,The life cycle of a fuzzy knowledge-based classifier,"We describe the life cycle of a fuzzy knowledge-based classifier with special emphasis on one of its most neglected steps: the maintenance of its knowledge base. First, we analyze the process of underwriting Insurance applications, which is the classification problem used to illustrate the life cycle of a classifier. After discussing some design tradeoffs that must be addressed for the on-line and off-line use of a classifier, we describe the design and implementation of a fuzzy rule-based (FRB) and a fuzzy case-based (FCB) classifier. We establish a standard reference dataset (SRD), consisting of 3,000 insurance applications with their corresponding decisions. The SRD exemplifies the results achieved by an ideal, optimal classifier, and represents the target for our design. We apply evolutionary algorithms to perform an off-line optimization of the design parameters of each classifier, modifying their behavior to approximate this target. The SRD is also used as a reference for testing and performing a five-fold cross-validation of the classifiers. Finally, we focus on the monitoring and maintenance of the FRB classifier. We describe a fusion architecture that supports an off-line quality assurance process of the on-line FRB classifier. The fusion module takes the outputs of multiple classifiers, determines their degree of consensus, and compares their overall agreement with that of the FRB classifier. From this analysis, we can identify the most suitable cases to update the SRD, to audit, or to be reviewed by senior underwriters.",2003,0, 2251,Quantifying architectural attributes,"Summary form only given. Traditional software metrics are inapplicable to software architectures, because they require information that is not available at the architectural level, and reflect attributes that are not meaningful at this level. 
We briefly present architecture-relevant quality attributes, then we introduce architecture-enabled quantitative functions, and run an experiment which shows how and to what extent the latter are correlated to (hence can be used to predict) the former.",2003,0, 2252,Improving Chinese/English OCR performance by using MCE-based character-pair modeling and negative training,"In the past several years, we've been developing a high performance OCR engine for machine printed Chinese/ English documents. We have reported previously (1) how to use character modeling techniques based on MCE (minimum classification error) training to achieve the high recognition accuracy, and (2) how to use confidence-guided progressive search and fast match techniques to achieve the high recognition efficiency. In this paper, we present two more techniques that help reduce search errors and improve the robustness of our character recognizer. They are (1) to use MCE-trained character-pair models to avoid error-prone character-level segmentation for some trouble cases, and (2) to perform a MCE-based negative training to improve the rejection capability of the recognition models on the hypothesized garbage images during recognition process. The efficacy of the proposed techniques is confirmed by experiments in a benchmark test.",2003,0, 2253,CVS release history data for detecting logical couplings,"The dependencies and interrelations between classes and modules affect the maintainability of object-oriented systems. It is therefore important to capture weaknesses of the software architecture to make necessary corrections. We describe a method for software evolution analysis. It consists of three complementary steps, which form an integrated approach for the reasoning about software structures based on historical data: 1) the quantitative analysis uses version information for the assessment of growth and change behavior; 2) the change sequence analysis identifies common change patterns across all system parts; and 3) the relation analysis compares classes based on CVS release history data and reveals the dependencies within the evolution of particular entities. We focus on the relation analysis and discuss its results; it has been validated based on empirical data collected from a concurrent versions system (CVS) covering 28 months of a picture archiving and communication system (PACS). Our software evolution analysis approach enabled us to detect shortcomings of PACS such as architectural weaknesses, poorly designed inheritance hierarchies, or blurred interfaces of modules.",2003,0, 2254,Stability and volatility in the Linux kernel,"Packages are the basic units of release and reuse in software development. The contents and boundaries of packages should therefore be chosen to minimize change propagation and maximize reusability. This suggests the need for a predictive measure of stability at the package level. We observed the rates of change of packages in Linux, a large open-source software system. We compared our empirical observations to a theoretical 'stability metric' proposed by Martin. In this case, we found that Martin's metric has no predictive value.",2003,0, 2255,Business rule evolution and measures of business rule evolution,"There is an urgent industrial need to enforce the changes of business rules (BRs) to software systems quickly, reliably and economically. Unfortunately, evolving BRs in most existing software systems is both time-consuming and error-prone. 
In order to manage, control and improve BR evolution, it is necessary that the software evolution community comes to an understanding of the ways in which BRs are implemented and how BR evolution can be facilitated or hampered by the design of software systems. We suggest that new software metrics are needed to allow us to measure the characteristics of BR evolution and to help us to explore possible improvements in a systematic way. A suitable set of BR-related metrics helps us to discover the root causes of the difficulties inherent in BR evolution, evaluate the success of proposed approaches to BR evolution and improve the BR evolution process as a whole.",2003,0, 2256,An investigation into formatting and layout errors produced by blind word-processor users and an evaluation of prototype error prevention and correction techniques,"This paper presents the results of an investigation into tools to support blind authors in the creation and checking of word processed documents. Eighty-nine documents produced by 14 blind authors are analyzed to determine and classify common types of layout and formatting errors. Based on the survey result, two prototype tools were developed to assist blind authors in the creation of documents: a letter creation wizard, which is used before the document is produced; and a format/layout checker that detects errors and presents them to the author after the document has been created. The results of a limited evaluation of the tools by 11 blind computer users are presented. A survey of word processor usage by these users is also presented and indicates that: authors have concerns about the appearance of the documents that they produce; many blind authors fail to use word processor tools such as spell checkers, grammar checkers and templates; and a significant number of blind people rely on sighted help for document creation or checking. The paper concludes that document formatting and layout is a problem for blind authors and that tools should be able to assist.",2003,0, 2257,E-MAGINE: the development of an evaluation method to assess groupware applications,"This paper describes the development of the evaluation method E-MAGINE. The aim of this evaluation method is to support groups in efficiently assessing the groupware applications that fit them best. The method focuses on knowledge sharing groups and follows a modular structure. E-MAGINE is based on the Contingency Perspective in order to structurally characterize groups in their context. Another main building block of the method is the new ISO norm for ICT tools, ""Quality in Use."" The overall method comprises two main phases. The initial phase leads to a first level profile and provides an indication of possible mismatches between group and application. The formulation of this initial profile has the benefit of providing a clear guide for further decisions on what instruments should be applied in the final phase of the evaluation process. It is argued that E-MAGINE fulfills the demand for a more practical and efficient groupware evaluation approach.",2003,0, 2258,Observations on balancing discipline and agility,"Agile development methodologies promise higher customer satisfaction, lower defect rates, faster development times and a solution to rapidly changing requirements. Plan-driven approaches promise predictability, stability, and high assurance. However, both approaches have shortcomings that, if left unaddressed, can lead to project failure.
The challenge is to balance the two approaches to take advantage of their strengths and compensate for their weaknesses. We believe this can be accomplished using a risk-based approach for structuring projects to incorporate both agile and disciplined approaches in proportion to a project's needs. We present six observations drawn from our efforts to develop such an approach. We follow those observations with some practical advice to organizations seeking to integrate agile and plan-driven methods in their development process.",2003,0, 2259,An empirical study of an ER-model inspection meeting,"A great benefit of software inspections is that they can be applied at almost any stage of the software development life cycle. We document a large-scale experiment conducted during an entity relationship (ER) model inspection meeting. The experiment was aimed at finding empirically validated answers to the question ""which reading technique has a more efficient detection rate when searching for defects in an ER model"". Secondly, the effect of the usage of roles in a team meeting was also explored. Finally, we investigate the reviewers' ability to find defects belonging to certain defect categories. The findings showed that the participants using a checklist had a significantly higher detection rate than the ad hoc groups. Overall, the groups using roles had a lower performance than those without roles. Furthermore, the findings showed that when comparing the groups using roles to those without roles, the proportion of syntactic and semantic defects found in the number of overall defects identified did not significantly differ.",2003,0, 2260,"High quality statecharts through tailored, perspective-based inspections","In the embedded systems domain, statecharts have become an important technique to describe the dynamic behavior of a software system. In addition, statecharts are an important element of object-oriented design documents and are thus widely used in practice. However, not much is known about how to inspect them. Since their invention by Fagan in 1976, inspections have proved to be an essential quality assurance technique in software engineering. Traditionally, inspections were used to detect defects in code documents, and later in requirements documents. We define a defect taxonomy for statecharts. Using this taxonomy, we present an inspection approach for inspecting statecharts, which combines existing inspection techniques with several new perspective-based scenarios. Moreover, we address the problems of inspecting large documents by using prioritized use cases in combination with perspective-based reading.",2003,0, 2261,Detection and recovery techniques for database corruption,"Increasingly, for extensibility and performance, special purpose application code is being integrated with database system code. Such application code has direct access to database system buffers, and as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect from corruption require system calls, and their performance depends on details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance, and impact on concurrency.
These techniques are implemented in the Dali main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write and gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption and may also prove useful when resolving problems caused by incorrect data entry and other logical errors.",2003,0, 2262,A ranking of software engineering measures based on expert opinion,"This research proposes a framework based on expert opinion elicitation, developed to select the software engineering measures which are the best software reliability indicators. The current research is based on the top 30 measures identified in an earlier study conducted by Lawrence Livermore National Laboratory. A set of ranking criteria and their levels were identified. The score of each measure for each ranking criterion was elicited through expert opinion and then aggregated into a single score using multiattribute utility theory. The basic aggregation scheme selected was a linear additive scheme. A comprehensive sensitivity analysis was carried out. The sensitivity analysis included: variation of the ranking criteria levels, variation of the weights, variation of the aggregation schemes. The top-ranked measures were identified. Use of these measures in each software development phase can lead to a more reliable quantitative prediction of software reliability.",2003,0, 2263,When can we test less?,"When it is impractical to rigorously assess all parts of complex systems, test engineers use defect detectors to focus their limited resources. We define some properties of an ideal defect detector and assess different methods of generating one. In the case study presented here, traditional methods of generating such detectors (e.g. reusing detectors from the literature, linear regression, model trees) were found to be inferior to those found via a PACE analysis.",2003,0, 2264,Definition and validation of design metrics for distributed applications,"As distributed technologies become more widely used, the need for assessing the quality of distributed applications correspondingly increases. Despite the rich body of research and practice in developing quality measures for centralised applications, there has been little emphasis on measures for distributed software. The need to understand the complex structure and behaviour of distributed applications suggests a shift in interest from traditional centralised measures to the distributed arena. We tackle the problem of evaluating quality attributes of distributed applications using software measures. Firstly, we present a suite of measures to quantify internal attributes of design at an early development phase, embracing structural and behavioural aspects. The proposed measures are obtained from formal models derived from intuitive models of the problem domain. Secondly, since theoretical validation of software measures provides supporting evidence as to whether a measure really captures the internal attributes it purports to measure, we consider this validation as a necessary step before empirical validation takes place.
Therefore, these measures are here theoretically validated following a framework proposed in the literature.",2003,0, 2265,Dealing with missing software project data,"Whilst there is a general consensus that quantitative approaches are an important part of successful software project management, there has been relatively little research into many of the obstacles to data collection and analysis in the real world. One feature that characterises many of the data sets we deal with is missing or highly questionable values. Naturally this problem is not unique to software engineering, so we explore the application of two existing data imputation techniques that have been used to good effect elsewhere. In order to assess the potential value of imputation we use two industrial data sets. Both are quite problematic from an effort modelling perspective because they contain few cases, have a significant number of missing values and the projects are quite heterogeneous. We compare the quality of fit of effort models derived by stepwise regression on the raw data and on data sets with values imputed by various techniques. In both data sets we find that k-nearest neighbour (k-NN) and sample mean imputation (SMI) significantly improve the model fit, with k-NN giving the best results. These results are consistent with other recently published results; consequently, we conclude that imputation can assist empirical software engineering.",2003,0, 2266,Analyzing the cost and benefit of pair programming,"We use a combination of metrics to understand, model, and evaluate the impact of pair programming on software development. Pair programming is a core technique in the hot process paradigm of extreme programming. At the expense of increased personnel cost, pair programming aims at increasing both the team productivity and the code quality as compared to conventional development. In order to evaluate pair programming, we use metrics from three different categories: process metrics such as the pair speed advantage of pair programming; product metrics such as the module breakdown structure of the software; and project context metrics such as the market pressure. The pair speed advantage is a metric tailored to pair programming and measures how much faster a pair of programmers completes programming tasks as compared to a single developer. We integrate the various metrics using an economic model for the business value of a development project. The model is based on the standard concept of net present value. If the market pressure is strong, the faster time to market of pair programming can balance the increased personnel cost. For a realistic sample project, we analyze the complex interplay between the various metrics integrated in our model. We study for which combinations of the market pressure and pair speed advantage the value of the pair programming project exceeds the value of the corresponding conventional project. When time to market is the decisive factor and programmer pairs are much faster than single developers, pair programming can increase the value of a project, but there also are realistic scenarios where the opposite is true. Such results clearly show that we must consider metrics from different categories in combination to assess the cost-benefit relation of pair programming.",2003,0, 2267,Defining and validating metrics for navigational models,"Nowadays, several approaches for developing Web applications have been proposed in the literature.
Most of them extend existing object-oriented conceptual modeling methods, incorporating new constructors in order to model the navigational structure and the content of Web applications. Such new constructors are commonly represented in a navigational model. Since navigational models constitute the backbone of Web application design, their quality has a great impact on the quality of the final product, which is actually implemented and delivered. We discuss a set of metrics for navigational models that has been proposed for analyzing the quality of Web applications in terms of size and structural complexity. These metrics were defined and validated using a formal framework (DISTANCE) for software measure construction that satisfies the measurement needs of empirical software engineering research. Some experimental studies have shown that complexity affects the ability to understand and maintain conceptual models. In order to confirm this, we also conducted a controlled experiment to observe how the proposed metrics can be used as early maintainability indicators.",2003,0, 2268,An industrial case study of the verification and validation activities,"The aim of verification and validation is to ensure the quality of a software product. The main fault detection techniques used are software inspections and testing. Software inspections aim at finding faults in the beginning of the software life-cycle and testing in the end. It is, however, difficult to know what kinds of faults are found, the severity of these, how much time it will take to correct them, etc. Hence, it is important for a software organization to know what faults can be found in inspections and testing, respectively. We report on a case study over 2 years in a large project. The purpose of the case study is to investigate the trade-off between inspections and testing. A measure of goodness is introduced, which measures whether faults could have been found earlier in the process. The measure was successfully used to illustrate the effect of a process change, a project decision, and extensions of developed test cases on when faults are found. An increased focus on the development of low level design specifications before the coding activity, and testing improvements in early phases were concluded to be important process improvements for the organization. The results also concern how many resources are used in the various development phases, from requirements development to system verification, and the severity of faults that are found in the various phases. The results will be used as input to a quality improvement program in the company.",2003,0, 2269,An analogy-based approach for predicting design stability of Java classes,"Predicting stability in object-oriented (OO) software, i.e., the ease with which a software item evolves while preserving its design, is a key feature for software maintenance. In fact, well-designed OO software must be able to evolve without violating the compatibility among versions, provided that no major requirement reshuffling occurs. Stability, like most quality factors, is a complex phenomenon and its prediction is a real challenge. We present an approach that relies on the case-based reasoning (CBR) paradigm and thus overcomes the handicap of insufficient theoretical knowledge on stability. The approach explores structural similarities between classes, expressed as software metrics, to guess their chances of becoming unstable.
In addition, our stability model binds its value to the impact of changing requirements, i.e., the degree to which class responsibilities increase between versions, quantified as the stress factor. As a result, the prediction mechanism favours the stability values for classes having strong structural analogies with a given test class as well as a similar stress impact. Our predictive model is applied to a testbed made up of the classes from four major versions of the Java API.",2003,0, 2270,Building UML class diagram maintainability prediction models based on early metrics,"The fact that the usage of metrics in the analysis and design of object-oriented (OO) software can help designers make better decisions is gaining relevance in the software measurement arena. Moreover, the necessity of having early indicators of external quality attributes, such as maintainability, based on early metrics is growing. In addition to this, our aim is to show how early metrics that measure internal attributes, such as the structural complexity and size of UML class diagrams, can be used as early class diagram maintainability indicators. For this purpose, we present a controlled experiment and its replication, which we carried out to gather the empirical data, which in turn is the basis of the current study. From the results obtained, it seems that there is a reasonable chance that useful class diagram maintainability models could be built based on early metrics. Despite this fact, more empirical studies, especially using data taken from real projects performed in industrial settings, are needed in order to obtain a comprehensive body of knowledge and experience.",2003,0, 2271,An experiment family to investigate the defect detection effect of tool-support for requirements inspection,"The inspection of software products can help to find defects early in the development process and to gather valuable information on product quality. An inspection is rather resource intensive and involves several tedious tasks like navigating, sorting, or checking. It is thus hoped that tool support will increase effectiveness and efficiency. However, little empirical work is available that directly compares paper-based (i.e., manual) and tool-based software inspections. Existing reports on tool support for inspection generally tend to focus on code inspections while little can be found on requirements or design inspection. We report on an experiment family: two experiments on paper-based inspection and a third experiment to empirically investigate the effect of tool support regarding defect detection effectiveness and inspection effort in an academic environment with 40 subjects. Main results of the experiment family are: (a) the effectiveness is similar for manual and tool-supported inspections; (b) the inspection effort and defect overlap decreased significantly with tool support, while (c) efficiency increased considerably with tool support.",2003,0, 2272,An evaluation of checklist-based reading for entity-relationship diagrams,"Software inspections have attracted much attention over the last 25 years. Inspection is a method for static analysis of artifacts during software development. However, few studies look at inspections of entity-relationship diagrams (ER diagrams), and in particular at evaluating the number of defects, the number of false positives and the types of defects that are found by most reviewers. ER-diagrams are commonly used in the design of databases, which makes them an important modelling concept for many large software systems.
We present a large empirical study (486 subjects) where checklists were used for an inspection. In the evaluation, goodness is judged based on the types of defects the reviewers identify and the relationship between the detection of real defects and false positives. It is concluded that checklist-based inspections of entity-relationship diagrams are worthwhile.",2003,0, 2273,Using service utilization metrics to assess the structure of product line architectures,"Metrics have long been used to measure and evaluate software products and processes. Many metrics have been developed that have led to different degrees of success. Software architecture is a discipline in which few metrics have been applied, a surprising fact given the critical role of software architecture in software development. Software product line architectures represent one area of software architecture in which we believe metrics can be of especially great use. The critical importance of the structure defined by a product line architecture requires that its properties be meaningfully assessed and that informed architectural decisions be made to guide its evolution. To begin addressing this issue, we have developed a class of closely related metrics that specifically target product line architectures. The metrics are based on the concept of service utilization and explicitly take into account the context in which individual architectural elements are placed. We define the metrics, illustrate their use, and evaluate their strengths and weaknesses through their application to three example product line architectures.",2003,0, 2274,Assessing the maintainability benefits of design restructuring using dependency analysis,"Software developers and project managers often have to assess the quality of software design. A commonly adopted hypothesis is that a good design should cost less to maintain than a poor design. We propose a model for quantifying the quality of a design from a maintainability perspective. Based on this model, we propose a novel strategy for predicting the ""return on investment"" (ROI) for possible design restructurings using procedure level dependency analysis. We demonstrate this approach with two exploratory Java case studies. Our results show that common low level source code transformations change the system dependency structure in a beneficial way, allowing recovery of the initial refactoring investment over a number of maintenance activities.",2003,0, 2275,Developing fault predictors for evolving software systems,"Over the past several years, we have been developing methods of predicting the fault content of software systems based on measured characteristics of their structural evolution. In previous work, we have shown there is a significant linear relationship between code churn, a synthesized metric, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. We have begun a new investigation of this relationship with a flight software technology development effort at the Jet Propulsion Laboratory (JPL) and have progressed in resolving the limitations of the earlier work in two distinct steps. First, we have developed a standard for the enumeration of faults. Second, we have developed a practical framework for automating the measurement of these faults. We analyze the measurements of structural evolution and fault counts obtained from the JPL flight software technology development effort.
Our results indicate that the measures of structural attributes of the evolving software system are suitable for forming predictors of the number of faults inserted into software modules during their development. The new fault standard also ensures that the model so developed has greater predictive validity.",2003,0, 2276,One approach to the metric baselining imperative for requirements processes,"The success of development projects in customer-oriented industries depends on reliable processes for the definition and maintenance of requirements. With the sustained, severe reduction in the rush to new technology, this widely accepted fact has become increasingly evident in the networking industry. Customers now focus on high product quality as they strive for economy of operation. Enhancing product quality necessitates enhancing processes, which in turn can necessitate applying more accurate (and precise) measures. Finding process deviations and identifying patterns of product deficiencies are critical steps to achieving high quality products. We describe the application of quantitative process control (QPC) during early development phases to establish and maintain baseline distributions characterizing RMCM&T processes, and to monitor their evolutions. Metric baselining as described includes key metric identification, and data normalization, filtering, and categorization. Empirical baselining provides the statistical sensitivity to detect requirements process problems, and to support targeted identification of particular requirements-related patterns in defects.",2003,0, 2277,A consumer report on BDD packages,"BDD packages have matured to a state where they are often considered a commodity. Does this mean that all (publicly and commercially) available packages are equally good? Does this preclude any new developments? In this paper, we present a consumer report on 13 BDD packages and thereby try to answer these questions. We argue that there is a substantial spectrum in quality as measured by various metrics and we found that even the better packages do not always deploy the latest technology. We show how various design decisions underlying the studied packages exhibit themselves at the programming interface level, and we claim that this allows us to predict performance to a certain extent.",2003,0, 2278,Statistical analysis on a case study of load effect on PSD technique for induction motor broken rotor bar fault detection,Broken rotor bars in an induction motor create asymmetries and result in abnormal amplitude of the sidebands around the fundamental supply frequency and its harmonics. Monitoring the power spectral density (PSD) amplitudes of the motor currents at these frequencies can be used to detect the existence of broken rotor bar faults. This paper presents a study on an actual three-phase induction motor using the PSD analysis as a broken rotor bar fault detection technique. The distributions of PSD amplitudes of experimental healthy and faulty motor data sets at these specific frequencies are analyzed statistically under different load conditions. Results indicate that statistically significant conclusions on broken rotor bar detection can vary significantly under different load conditions and under different inspected frequencies. 
Detection performance in terms of the variation of PSD amplitudes is also investigated as a case study.,2003,0, 2279,Understanding the nature of software evolution,"Over the past several years, we have been developing methods of measuring the change characteristics of evolving software systems. Not all changes to software systems are equal. Some changes to these systems are very small and have low impact on the system as a whole. Other changes are substantial and have a very large impact on the fault proneness of the complete system. In this study we will identify the sources of variation in the set of software metrics used to measure the system. We will then study the change characteristics of the system over a large number of builds. We have begun a new investigation in these areas in collaboration with a flight software technology development effort at the Jet Propulsion Laboratory (JPL) and have progressed in resolving the limitations of the earlier work in two distinct steps. First, we have developed a standard for the enumeration of faults. This new standard permits software faults to be measured precisely and accurately. Second, we have developed a practical framework for automating the measurement of these faults. This new standard and fault measurement process was then applied to a software system's structural evolution during its development. Every change to the software system was measured and every fault was identified and tracked to a specific code module. The measurement process was implemented in a network appliance, minimizing the impact of measurement activities on development efforts and enabling the comparison of measurements across multiple development efforts. In this paper, we analyze the measurements of structural evolution and fault counts obtained from the JPL flight software technology development effort. Our results indicate that the measures of structural attributes of the evolving software system are suitable for forming predictors of the number of faults inserted into software modules during their development, and that some types of change are more likely to result in the insertion of faults than others. The new fault standard also ensures that the model so developed has greater predictive validity.",2003,0, 2280,Application of neural networks for software quality prediction using object-oriented metrics,"The paper presents the application of neural networks in software quality estimation using object-oriented metrics. Quality estimation includes estimating reliability as well as maintainability of software. Reliability is typically measured as the number of defects. Maintenance effort can be measured as the number of lines changed per class. In this paper, two kinds of investigation are performed: predicting the number of defects in a class; and predicting the number of lines changed per class. Two neural network models are used: the Ward neural network and the general regression neural network (GRNN). Object-oriented design metrics concerning inheritance-related measures, complexity measures, cohesion measures, coupling measures and memory allocation measures are used as the independent variables. The GRNN model is found to predict more accurately than the Ward network model.",2003,0, 2281,Impact analysis and change management of UML models,"The use of Unified Modeling Language (UML) analysis/design models on large projects leads to a large number of interdependent UML diagrams.
As software systems evolve, those diagrams undergo changes to, for instance, correct errors or address changes in the requirements. Those changes can in turn lead to subsequent changes to other elements in the UML diagrams. Impact analysis is then defined as the process of identifying the potential consequences (side-effects) of a change, and estimating what needs to be modified to accomplish a change. In this article, we propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. We first verify that the UML diagrams are consistent (consistency check). Then changes between two different versions of a UML model are identified according to a change taxonomy, and model elements that are directly or indirectly impacted by those changes (i.e., may undergo changes) are determined using formally defined impact analysis rules (written with Object Constraint Language). A measure of distance between a changed element and potentially impacted elements is also proposed to prioritize the results of impact analysis according to their likelihood of occurrence. We also present a prototype tool that provides automated support for our impact analysis strategy, which we then apply to a case study to validate both the implementation and the methodology.",2003,0, 2282,A framework for understanding conceptual changes in evolving source code,"As systems evolve, they become harder to understand because the implementation of concepts (e.g. business rules) becomes less coherent. To preserve source code comprehensibility, we need to be able to predict how this property will change. This would allow the construction of a tool to suggest what information should be added or clarified (e.g. in comments) to maintain the code's comprehensibility. We propose a framework to characterize types of concept change during evolution. It is derived from an empirical investigation of concept changes in evolving commercial COBOL II files. The framework describes transformations in the geometry and interpretation of regions of source code. We conclude by relating our observations to the types of maintenance performed and suggest how this work could be developed to provide methods for preserving code quality based on comprehensibility.",2003,0, 2283,An industrialized restructuring service for legacy systems,"Summary form only given. The article presents RainCodeOnline (http://www.raincode.com/online), a Belgium-based initiative to provide COBOL restructuring services through the Web. The article details a number of issues: the underlying technology; the process; the pros and cons of automated restructuring (dialects, process, perception, pricing, security, cost, performance, quality, etc.); and the perception of quality and how it can be further improved. A number of results are presented as well.",2003,0, 2284,A guaranteed quality of service wireless access scheme for CDMA networks,"Current wireless multimedia applications may require different quality-of-service (QoS) measures such as throughput, packet loss rate, delay, and delay jitter. In this paper, we propose an access scheme for CDMA networks that can provide absolute QoS guarantees for different service classes. The access scheme uses several M/D/1 queues, each representing a different service class, and allocates a transmission rate to each queue so as to satisfy the different QoS requirements.
Operation in error-prone channels is enabled by a mechanism that compensates sessions that experience poor channels. Analysis and simulation results are used to illustrate the viability of the access scheme.",2003,0, 2285,Performance analysis of software rejuvenation,"Cluster-based systems, a combination of interconnected individual computers, have become a popular solution to build scalable and highly available Web servers. In order to reduce system outages due to the aging phenomenon, software rejuvenation, a proactive fault-tolerance strategy, has been introduced into cluster systems. Compared with clusters of a flat architecture, in which all the nodes share the same functions, we model and analyze dispatcher-worker based cluster systems, which employ prediction-based rejuvenation on both the dispatcher and the worker pool. To evaluate the effects of rejuvenation, stochastic reward net models are constructed and solved by SPNP (stochastic Petri net package). Numerical results show that prediction-based software rejuvenation can significantly increase system availability and reduce the expected job loss probability.",2003,0, 2286,"Experimental evaluation of the variation in effectiveness for DC, FPC and MC/DC test criteria","Given a test criterion, the number of test-sets satisfying the criterion may be very large, with varying fault detection effectiveness. This paper presents an experimental evaluation of the variation in fault detection effectiveness of all the test-sets for a given control-flow test criterion and a Boolean specification. The exhaustive experimental approach complements the earlier empirical studies that adopted analysis of some test-sets using random selection techniques. Three industrially used control-flow testing criteria, decision coverage (DC), full predicate coverage (FPC) and modified condition/decision coverage (MC/DC), have been analysed against four types of faults. The Boolean specifications used were taken from a past research paper and also generated randomly. To control for the effect of test-set size, a variation of DC, decision coverage/random (DC/R), has also been considered against the FPC and MC/DC criteria. In addition, a further analysis of variation in average effectiveness with respect to the number of conditions in the decision has been done. The empirical results show that the MC/DC criterion is more reliable and stable in comparison to DC, DC/R and FPC.",2003,0, 2287,An empirical comparison of two safe regression test selection techniques,"Regression test selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. Safe regression test selection techniques guarantee (under specific conditions) that the selected subset will not omit faults that could have been revealed by the entire suite. Many regression test selection techniques have been described in the literature. Empirical studies of some of these techniques have shown that they can be beneficial, but only a few studies have empirically compared different techniques, and fewer still have considered safe techniques. In this paper, we report the results of a comparative empirical study of implementations of two safe regression test selection techniques: DejaVu and Pythia.
Our results show that, despite differences in their approaches, and despite the theoretically greater ability of DejaVu to select smaller test suites than Pythia, the two techniques often selected equivalent test suites in practice, at comparable costs. These results suggest that factors such as ease of implementation, generality, and availability of supporting tools and data may play a greater role than cost-effectiveness for practitioners choosing between these techniques.",2003,0, 2288,An empirical analysis of fault persistence through software releases,"This work is based on the idea of analyzing, over the whole life-cycle, the behavior of source files having a high number of faults at their first release. In terms of predictability, our study helps to understand whether files that are faulty in their first release tend to remain faulty in later releases, and investigates ways to assure higher reliability for the faultiest programs, by testing them carefully or lowering the complexity of their structure. The purpose of this paper is to verify our hypothesis empirically, through an experimental analysis of two different projects, and to find causes by observing the structure of the faulty files. As a conclusion, we can say that the number of faults at the first release of source files is an early and significant index of their expected defect rate and reliability.",2003,0, 2289,The application of capture-recapture log-linear models to software inspections data,"Re-inspection has been deployed in industry to improve the quality of software inspections. The number of remaining defects after inspection is an important factor affecting whether to re-inspect the document or not. Models based on capture-recapture (CR) sampling techniques have been proposed to estimate the number of defects remaining in the document after inspection. Several publications have studied the robustness of some of these models using software engineering data. Unfortunately, most of the existing studies did not examine the log-linear models with respect to software inspection data. In order to explore the performance of the log-linear models, we evaluated their performance for three-person inspection teams. Furthermore, we evaluated the models using an inspection data set that was previously used to assess different CR models. Generally speaking, the study provided very promising results. According to our results, the log-linear models proved to be more robust than all CR-based models previously assessed for three-person inspections.",2003,0, 2290,Investigating the accuracy of defect estimation models for individuals and teams based on inspection data,"Defect content estimation approaches, based on data from inspection, estimate the total number of defects in a document to evaluate the quality of the product and the development process. Objective estimation approaches require a high-quality measurement process, potentially suffer from overfitting, and may underestimate the number of defects for inspections that yield few data points. Reading techniques for inspection, which focus the attention of the inspectors on particular parts of the inspected document, may influence their subjective estimates. In this paper we consider approaches to aggregate subjective estimates of individual inspectors in a team to alleviate individual bias. We evaluate these approaches with data from an experiment in a university environment where 177 inspectors in 30 teams inspected a software requirements document.
The main findings of the experiment were that reading techniques considerably influenced the accuracy of inspector estimates. Further, team estimates improved both estimation accuracy and variation compared to individual estimates and to one of the best empirically evaluated objective estimation approaches.",2003,0, 2291,An experimental evaluation of correlated network partitions in the Coda distributed file system,"Experimental evaluation is an important way to assess distributed systems, and fault injection is the dominant technique in this area for the evaluation of a system's dependability. For distributed systems, network failure is an important fault model. Physical network failures often have far-reaching effects, giving rise to multiple correlated failures as seen by higher-level protocols. This paper presents an experimental evaluation, using the Loki fault injector, which provides insight into the impact that correlated network partitions have on the Coda distributed file system. In this evaluation, Loki created a network partition between two Coda file servers, during which updates were made at each server to the same replicated data volume. Upon repair of the partition, a client requested directory resolution to converge the diverging replicas. At various stages of the resolution, Loki invoked a second correlated network partition, thus allowing us to evaluate its impact on the system's correctness, performance, and availability.",2003,0, 2292,Assessing the dependability of OGSA middleware by fault injection,"This paper presents our research on devising a dependability assessment method for the upcoming OGSA 3.0 middleware using network level fault injection. We compare existing DCE middleware dependability testing research with the requirements of testing OGSA middleware and derive a new method and fault model. From this we have implemented an extendable fault injector framework and undertaken some proof-of-concept experiments with a simulated OGSA middleware system based around Apache SOAP and Apache Tomcat. We also present results from our initial experiments, which uncovered a discrepancy with our simulated OGSA system. We finally detail future research, including plans to adapt this fault injector framework from the stateless environment of a standard Web service to the stateful environment of an OGSA service.",2003,0, 2293,Perf: an ongoing research project on performance evaluation,"The fast technological evolution and the convergence of fixed and mobile communication infrastructures have led to the realization of a large variety of innovative applications and services that have become an integral part of our society. Efficiency, reliability, availability and security, i.e., quality of service, are fundamental requirements for these applications. In this framework, performance evaluation and capacity planning play a key role. This paper presents an ongoing research project carried out under the FIRB Programme of the Italian Ministry of Education, Universities and Research.
The project addresses basic and foundational research in performance evaluation with the objective of developing new techniques, methodologies and tools for the analysis of the complex systems and applications that drive our society.",2003,0, 2294,Cost-effective graceful degradation in speculative processor subsystems: the branch prediction case,"We analyze the effect of errors in branch predictors, a representative example of speculative processor subsystems, to motivate the necessity for fault tolerance in such subsystems. We also describe the design of fault tolerant branch predictors using general fault tolerance techniques. We then propose a fault-tolerant implementation that utilizes the finite state machine (FSM) structure of the pattern history table (PHT) and the set of potential faulty states to predict the branch direction, yet without strictly identifying the correct state. The proposed solution provides virtually the same prediction accuracy as general fault tolerant techniques, while significantly reducing the incurred hardware overhead.",2003,0, 2295,Static test compaction for multiple full-scan circuits,"Current design methodologies and methodologies for reducing test data volume and test application time for full-scan circuits allow testing of multiple circuits (or subcircuits of the same circuit) simultaneously using the same test data. We describe a static compaction procedure that accepts test sets generated independently for multiple full-scan circuits, and produces a compact test set that detects all the faults detected by the individual test sets. The resulting test set can be used for testing the circuits simultaneously using the same test data. This procedure provides an alternative to test generation procedures that perform test generation for complex circuits made up of multiple circuits. Such procedures also reduce the amount of test data and test application time required for testing all the circuits by testing them simultaneously using the same test data. However, they require consideration of a more complex circuit.",2003,0, 2296,"Software reviews, the state of the practice","A 2002 survey found that many companies use software reviews unsystematically, creating a mismatch between expected outcomes and review implementations. This suggests that many software practitioners understand basic review concepts but often fail to exploit their full potential.",2003,0, 2297,Software architecture for diagnosing physical systems,"This paper discusses a general software architecture suitable for the design of diagnostic systems based on logical reasoning. This type of architecture should be able to support different kinds of consistency tests and different kinds of triggering strategies. Different steps, such as data acquisition from a physical system, batch or online consistency tests, diagnostic reasoning and management of consistency tests, have been isolated. Software dedicated to the diagnosis of a system of two cascading water tanks is also presented as an illustration.",2003,0, 2298,Accounting for false indication in a Bayesian diagnostics framework,"Accounting for the effects of test uncertainty is a significant problem in test and diagnosis. Specifically, assessment of the level of uncertainty and subsequent utilization of that assessment to improve diagnostics must be addressed. One approach, based on measurement science, is to treat the probability of a false indication (false alarm or missed detection) as the measure of uncertainty.
Given the ability to determine such probabilities, a Bayesian approach to diagnosis suggests itself. In the paper, we present a mathematical derivation for false indication and apply it to the specification of Bayesian diagnosis. We draw from measurement science, reliability theory, and the theory of Bayesian networks to provide an end-to-end probabilistic treatment of the fault diagnosis problem.",2003,0, 2299,Effect of tilt angle variations in a halo implant on Vth values for 0.14-μm CMOS devices,"Sensitivity of critical transistor parameters to halo implant tilt angle for 0.14-μm CMOS devices was investigated. Vth sensitivity was found to be 3% per tilt degree. A tilt angle mismatch between two serial ion implanters used in manufacturing was detected by tracking Vth performance for 0.14-μm production lots. Even though individual implanters may be within tool specifications for tilt angle control (±0.5° for our specific tool type), the relative mismatch could be as large as 1° and therefore result in a Vth mismatch of over 3% from nominal. The Vth mismatch results are in qualitative agreement with simulation results using SUPREM and MEDICI software.",2003,0, 2300,Online control design for QoS management,"In this paper we present an approach for QoS management that can be applied to a general class of real-time distributed computation systems. The QoS adaptation problem is formulated based on a utility function that measures the relative performance of the system. A limited-horizon online supervisory controller is used for this purpose. The online controller explores a limited region of the state-space of the system at each time step and decides the best action accordingly. The feasibility and accuracy of the online algorithm can be assessed at design time.",2003,0, 2301,A model to evaluate the economic benefits of software components development,"ABB is a multi-national corporation that is developing a new generation of products based on the concept of IndustrialIT. This concept provides a common integration platform for product interoperability. As IndustrialIT-enabled products are developed across ABB, software reuse must be considered. Component-based software development (CBSD) is an effective means to improve productivity and quality by developing reusable components. Measuring the economic benefits and performing sensitivity analyses of CBSD scenarios in the development of IndustrialIT products is important to improve efficiency. This paper presents a model that allows project leaders to evaluate a variety of software development scenarios. The model is based on a goal-question-metric (GQM) approach and was developed at the ABB corporate research laboratories.",2003,0, 2302,A Petri net based method for storage units estimation,"This work presents a structural methodology for computing the number of storage units (registers) in a hardware/software co-design context, considering timing constraints. The proposed method is based on an intermediate model, the data flow net, specified by our team, which takes into account timing precedence and data dependency. The considered hardware/software co-design framework uses Petri nets as a common formalism for performing quantitative and qualitative analysis.",2003,0, 2303,Tolerance of control-flow testing criteria,"Effectiveness of testing criteria is the ability to detect failure in a software program.
We consider not only the effectiveness of a testing criterion in itself but also the variance in effectiveness of different test sets that satisfy the same testing criterion. We name this property the ""tolerance"" of a testing criterion and show that, for the practical use of a criterion, high tolerance is as important as high effectiveness. The results of an empirical evaluation of tolerance for different criteria, types of faults and decisions are presented. In addition to quite simple and well-known control-flow criteria, we study more complicated criteria: full predicate coverage, modified condition/decision coverage and reinforced condition/decision coverage.",2003,0, 2304,Transforming quantities into qualities in assessment of software systems,"The assessment of software systems often requires qualitative and quantitative aspects to be considered together. Because of their different nature, measures belong to different domains. The main problem is to aggregate such information into a derived measure able to provide an overall estimation. This problem has traditionally been solved through the transformation of qualitative assessments into quantitative measures. Indeed, such a transformation implicitly assumes a conceptual equivalence between the terms quantitative and objective on one side, and qualitative and subjective on the other side. An alternative approach is to consider logical aggregation models, able to infer the overall evaluation based on the assessment of individual attributes. This approach requires an early transformation of quantitative measures into qualitative assessments. Such a transformation is possible through the use of judgment functions. The aim of this paper is to introduce the judgment functions and to study their properties.",2003,0, 2305,An efficient defect estimation method for software defect curves,Software defect curves describe the behavior of the estimate of the number of remaining software defects as software testing proceeds. They are of two possible patterns: single-trapezoidal-like curves or multiple-trapezoidal-like curves. In this paper we present some necessary and/or sufficient conditions for software defect curves of the Goel-Okumoto NHPP model. These conditions can be used to predict the effect of the detection and removal of a software defect on the variations of the estimates of the number of remaining defects. A field software reliability dataset is used to justify the trapezoidal shape of software defect curves and our theoretical analyses. The results presented in this paper may provide useful feedback information for assessing software testing progress and have potential in the emerging area of software cybernetics that explores the interplay between software and control.,2003,0, 2306,An embryonic approach to reliable digital instrumentation based on evolvable hardware,"Embryonics encompasses the capability of self-repair and self-replication in systems. This paper presents a technique based on reconfigurable hardware coupled with a novel backpropagation algorithm for reconfiguration, together referred to as evolvable hardware (EHW), for ensuring reliability in digital instrumentation. The backpropagation evolution is much faster than genetic learning techniques. It uses the dynamic restructuring capabilities of EHW to detect faults in digital systems and reconfigures the hardware to repair or adapt to the error in real-time. An example application is presented of a robust BCD-to-seven-segment decoder driving a digital display.
The results obtained are quite interesting and promise quick and low-cost embryonic schemes for reliability in digital instrumentation.",2003,0, 2307,The design of reliable devices for mission-critical applications,"Mission-critical applications require that any failure that may lead to erroneous behavior and computation is detected and signaled as soon as possible in order not to jeopardize the entire system. Totally self-checking (TSC) systems are designed to be able to autonomously detect faults when they occur during normal circuit operation. Based on the adopted TSC design strategy and the goal pursued during circuit realization (e.g., area minimization), the circuit, although TSC, may not promptly detect the fault depending on the actual number of input configurations that serve as test vectors for each fault in the network. If such a number is limited, even though the circuit is TSC, it may be improbable that the fault is detected once it occurs, causing detection and aliasing problems. The paper presents a design methodology, based on a circuit re-design approach and an evaluation function, for improving a TSC circuit's promptness in detecting fault occurrences, a property we will refer to as TSC quality.",2003,0, 2308,Improvement of sensor accuracy in the case of a variable surface reflectance gradient for active laser range finders,"In active laser range finders, the computation of the (x, y, z) coordinates of each point of a scene can be performed using the detected centroid p~ of the image spot on the sensor. When the reflectance of the scene under analysis is uniform, the intensity profile of the image spot is a Gaussian and its centroid is correctly detected assuming an accurate peak position detector. However, when a change of reflectance occurs on the scene, the intensity profile of the image spot is no longer Gaussian. This change introduces a deviation Δp on the detected centroid p~, which will lead to erroneous (x, y, z) coordinates. This paper presents two heuristic models to improve the sensor accuracy in the case of a variable surface reflectance gradient. Simulation results are presented to show the quality of the correction and the resulting accuracy.",2003,0, 2309,Helmet-mounted display image quality evaluation system,"Helmet-mounted displays (HMDs) provide essential pilotage and fire control imagery information for pilots. To maintain system integrity and readiness, there is a need to develop an image quality evaluation system for HMDs. In earlier work, a framework was proposed for an HMD system called the integrated helmet and display sighting system (IHADSS), used with the U.S. Army's Apache helicopter. This paper describes prototype development and interface design and summarizes bench test findings using three IHADSS helmet display units (HDUs). The prototype consists of hardware (cameras, sensors, image capture/data acquisition cards, battery pack, HDU holder, moveable rack and handle, and computer) and software algorithms for image capture and analysis. Two cameras with different-size apertures are mounted in parallel on a rack facing an HDU holder. A handle allows users to position the HDU in front of the two cameras. The HMD test pattern is then captured. Sensors detect the position of the holder and whether the HDU is angled correctly in relation to the camera. Algorithms detect HDU features captured by the two cameras, including focus, orientation, displacement, field-of-view, and number of grayshades.
Bench testing of three field-quality HDUs indicates that the image analysis algorithms are robust and able to detect the desired image features. Suggested future directions include development of a learning algorithm to automatically develop or revise feature specifications as the number of inspection samples increases.",2003,0, 2310,Co-histogram and its application in remote sensing image compression evaluation,"Peak signal-to-noise ratio (PSNR) has found its application as an evaluation metric for image coding, but in many instances it provides an inaccurate representation of the image quality. The new tool proposed in this paper is called co-histogram, which is a statistical graph generated by counting the corresponding pixel pairs of two images. For image coding evaluation, the two images are the original image and a compressed and recovered image. The graph is a two-dimensional joint probability distribution of the two images. A co-histogram shows how the pixels are distributed among combinations of two image pixel values. By means of the co-histogram, we can obtain a visual interpretation of PSNR, and the symmetry of a co-histogram is also significant for objective evaluation of remote sensing image compression. Our experiments with two SAR images and a TM image using DCT-based JPEG and wavelet-based SPIHT coding methods demonstrate the importance of the co-histogram symmetry.",2003,0, 2311,The design and performance of real-time Java middleware,"More than 90 percent of all microprocessors are now used for real-time and embedded applications. The behavior of these applications is often constrained by the physical world. It is therefore important to devise higher-level languages and middleware that meet conventional functional requirements, as well as dependably and productively enforce real-time constraints. We provide two contributions to the study of languages and middleware for real-time and embedded applications. We first describe the architecture of jRate, which is an open-source ahead-of-time-compiled implementation of the RTSJ middleware. We then show performance results obtained using RTJPerf, which is an open-source benchmarking suite that systematically compares the performance of RTSJ middleware implementations. We show that, while research remains to be done to make RTSJ a bullet-proof technology, the initial results are promising. The performance and predictability of jRate provide a baseline for what can be achieved by using ahead-of-time compilation. Likewise, RTJPerf enables researchers and practitioners to evaluate the pros and cons of RTSJ middleware systematically as implementations mature.",2003,0, 2312,Reliable heterogeneous applications,"This paper explores the notion of computational resiliency to provide reliability in heterogeneous distributed applications. This notion provides both software fault-tolerance and the ability to tolerate information-warfare attacks. This technology seeks to strengthen a military mission, rather than to protect its network infrastructure using static defense measures such as network security, intrusion sensors, and firewalls. Even if a failure or attack is successful and never detected, it should be possible to continue information operations and achieve mission objectives. Computational resiliency involves the dynamic use of replicated software structures, guided by mission policy, to achieve reliable operation.
However, it goes further by automatically regenerating replication in response to a failure or attack, allowing the level of system reliability to be restored and maintained. This paper examines a prototype concurrent programming technology to support computational resiliency in a heterogeneous distributed computing environment. The performance of the technology is explored through two example applications.",2003,0, 2313,Quality of service provision assessment for campus network,"The paper presents a methodology for assessing the quality of service (QoS) provision for a campus network. The author utilizes Staffordshire University's network communications infrastructure (SUNCI) as a testing platform and discusses a new approach to QoS provision, by adding a component of measurement to the existing model presented by J.L. Walker (see J. Services Marketing, vol.9, no.1, p.5-14, 1995). The QoS provision is assessed in light of users' perception compared with the network traffic measurements and online monitoring reports. The users' perception of the QoS provision of a telecommunications network infrastructure is critical to the successful business management operation of any organization. The computing environment in modern campus networks is complex, employing multiple heterogeneous hardware and software technologies. In support of highly interactive user applications, QoS provision is essential to the users' ever increasing level of expectations. The paper offers a cost effective approach to assessing the QoS provision within a campus network.",2003,0, 2314,Soft-error detection using control flow assertions,"Over the last few years, an increasing number of safety-critical tasks have been demanded of computer systems. In this paper, a software-based approach for developing safety-critical applications is analyzed. The technique is based on the introduction of additional executable assertions to check the correct execution of the program control flow. By applying the proposed technique, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the proposed technique in comparison with state-of-the-art alternative assertion-based methods. Experimental results show that the proposed approach is far more effective than the other considered techniques in terms of fault detection capability, at the cost of a limited increase in memory requirements and in performance overhead.",2003,0, 2315,An intelligent early warning system for software quality improvement and project management,"One of the main reasons behind unfruitful software development projects is that it is often too late to correct the problems by the time they are detected. This clearly indicates the need for early warning about the potential risks. In this paper, we discuss an intelligent software early warning system based on fuzzy logic using an integrated set of software metrics. It helps to assess risks associated with being behind schedule, over budget, and poor quality in software development and maintenance from multiple perspectives. It handles incomplete, inaccurate, and imprecise information, and resolves conflicts in an uncertain environment in its software risk assessment using fuzzy linguistic variables, fuzzy sets, and fuzzy inference rules. Process, product, and organizational metrics are collected or computed based on solid software models.
The intelligent risk assessment process consists of the following steps: fuzzification of software metrics, rule firing, derivation and aggregation of the resulting risk fuzzy sets, and defuzzification of linguistic risk variables.",2003,0, 2316,Application of an attribute selection method to CBR-based software quality classification,"This study investigates the attribute selection problem for reducing the number of software metrics (program attributes) used by a case-based reasoning (CBR) software quality classification model. The metrics are selected using the Kolmogorov-Smirnov (K-S) two-sample test. The ""modified expected cost of misclassification"" measure, recently proposed by our research team, is used as a performance measure to select, evaluate, and compare classification models. The attribute selection procedure presented in this paper can assist a software development organization in determining the software metrics that are better indicators of software quality. By reducing the number of software metrics to be collected during the development process, the metrics data collection task can be simplified. Moreover, reducing the number of metrics would result in reducing the computation time of a CBR model. Using an empirical case study of a real-world software system, it is shown that with a reduced number of metrics the CBR technique is capable of yielding useful software quality classification models. Moreover, their performances were better than or similar to CBR models calibrated without attribute selection.",2003,0, 2317,Genetic programming-based decision trees for software quality classification,"The knowledge of the likely problematic areas of a software system is very useful for improving its overall quality. Based on such information, a more focused software testing and inspection plan can be devised. Decision trees are attractive for a software quality classification problem which predicts the quality of program modules in terms of risk-based classes. They provide a comprehensible classification model which can be directly interpreted by observing the tree-structure. A simultaneous optimization of the classification accuracy and the size of the decision tree is a difficult problem, and very few studies have addressed the issue. This paper presents an automated and simplified genetic programming (GP) based decision tree modeling technique for the software quality classification problem. Genetic programming is ideally suited for problems that require optimization of multiple criteria. The proposed technique is based on multi-objective optimization using strongly typed GP. In the context of an industrial high-assurance software system, two fitness functions are used for the optimization problem: one for minimizing the average weighted cost of misclassification, and one for controlling the size of the decision tree. The classification performances of the GP-based decision trees are compared with those based on standard GP, i.e., S-expression tree. It is shown that the GP-based decision tree technique yielded better classification models. As compared to other decision tree-based methods, such as C4.5, GP-based decision trees are more flexible and can allow optimization of performance objectives other than accuracy.
Moreover, it provides a practical solution for building models in the presence of conflicting objectives, which is commonly observed in software development practice.",2003,0, 2318,An expression's single fault model and the testing methods,"This paper proposes a single fault model for expression faults, including operator faults (operator reference faults: an operator replaced by another, or an extra or missing operator for a single operand), incorrect variables or constants, and incorrect parentheses. These types of faults often exist in software, but some fault classes are hard to detect using traditional testing methods. A general testing method is proposed to detect these types of faults. Furthermore, a fault simulation method is presented which can accelerate the generation of test cases and greatly minimize the testing cost. Our empirical results indicate that our methods require a smaller number of test cases than random testing, while retaining fault-detection capabilities that are as good as, or better than the traditional testing methods.",2003,0, 2319,Briefing a new approach to improve the EMI immunity of DSP systems,"Hereafter, we present an approach intended to improve the reliability of digital signal processing (DSP) systems operating in real noisy (electromagnetic interference - EMI) environments. The approach is based on the coupling of two techniques: the ""DSP-oriented signal integrity improvement"" technique aims to increase the signal-to-noise ratio (SNR) and is essentially a modification of the classic Recovery Blocks Scheme. The second technique, named ""SW-based fault handling"", aims to detect data- and control-flow faults in real time through modifications of the processor C-code. When compared to conventional approaches using the Fast Fourier Transform (FFT) and Hamming Code, the primary benefit of such an approach is to improve system reliability by means of considerably lower complexity, reasonably low performance degradation and, when implemented in hardware, reduced area overhead. Aiming to illustrate the proposed approach, we present a case study for a speech recognition system, which was partially implemented in a PC microcomputer and in a COTS microcontroller. This system was tested under a home-tailored EMI environment according to the International Standard Normative IEC 61.0004-29. The obtained results indicate that the proposed approach can effectively improve the reliability of DSP systems operating in real noisy (EMI) environments.",2003,0, 2320,Detection or isolation of defects? An experimental comparison of unit testing and code inspection,"Code inspections and white-box testing have both been used for unit testing. One is a static analysis technique, the other, a dynamic one, since it is based on executing test cases. Naturally, the question arises whether one is superior to the other, or, whether either technique is better suited to detect or isolate certain types of defects. We investigated this question with an experiment with a focus on detection of the defects (failures) and isolation of the underlying sources of the defects (faults). The results indicate that there exist significant differences for some of the effects of using code inspection versus testing. White-box testing is more effective, i.e. detects significantly more defects while inspection isolates the underlying source of a larger share of the defects detected.
Testers spend significantly more time, hence the difference in efficiency is smaller, and is not statistically significant. The two techniques are also shown to detect and identify different defects, hence motivating the use of a combination of methods.",2003,0, 2321,A comprehensive and systematic methodology for client-server class integration testing,"This article is a first attempt towards a comprehensive, systematic methodology for class interface testing in the context of client/server relationships. The proposed approach builds on and combines existing techniques. It first consists in selecting a subset of the method sequences defined for the class testing of the client class, based on an analysis of the interactions between the client and the server methods. Coupling information is then used to determine the conditions, i.e., values for parameters and data members, under which the selected client method sequences are to be executed so as to exercise the interaction. The approach is illustrated by means of an abstract example and its cost-effectiveness is evaluated through a case study.",2003,0, 2322,Optimal resource allocation for the quality control process,"Software development project employs some quality control (QC) process to detect and remove defects. The final quality of the delivered software depends on the effort spent on all the QC stages. Given a quality goal, different combinations of efforts for the different QC stages may lead to the same goal. In this paper, we address the problem of allocating resources to the different QC stages, such that the optimal quality is obtained. We propose a model for the cost of QC process and then view the resource allocation among different QC stages as an optimization problem. We solve this optimization problem using non-linear optimization technique of sequential quadratic programming. We also give examples to show how a sub-optimal resource allocation may either increase the resource requirement significantly or lower the quality of the final software.",2003,0, 2323,Building a requirement fault taxonomy: experiences from a NASA verification and validation research project,"Fault-based analysis is an early lifecycle approach to improving software quality by preventing and/or detecting pre-specified classes of faults prior to implementation. It assists in the selection of verification and validation techniques that can be applied in order to reduce risk. This paper presents our methodology for requirements-based fault analysis and its application to National Aeronautics and Space Administration (NASA) projects. The ideas presented are general enough to be applied immediately to the development of any software system. We built a NASA-specific requirement fault taxonomy and processes for tailoring the taxonomy to a class of software projects or to a specific project. We examined requirement faults for six systems, including the International Space Station (ISS), and enhanced the taxonomy and processes. The developed processes, preliminary tailored taxonomies for critical/catastrophic high-risk (CCHR) systems, preliminary fault occurrence data for the ISS project, and lessons learned are presented and discussed.",2003,0, 2324,A Bayesian belief network for assessing the likelihood of fault content,"To predict software quality, we must consider various factors because software development consists of various activities, which the software reliability growth model (SRGM) does not consider. 
In this paper, we propose a model to predict the final quality of a software product by using the Bayesian belief network (BBN) model. By using the BBN, we can construct a prediction model that focuses on the structure of the software development process, explicitly representing complex relationships between metrics and handling uncertain metrics, such as residual faults in the software products. In order to evaluate the constructed model, we perform an empirical experiment based on the metrics data collected from development projects in a certain company. As a result of the empirical evaluation, we confirm that the proposed model can predict the number of residual faults that the SRGM cannot handle.",2003,0, 2325,RTTometer: measuring path minimum RTT with confidence,"Internet path delay is a substantial metric in determining path quality. Therefore, it is not surprising that round-trip time (RTT) plays a tangible role in several protocols and applications, such as overlay network construction protocols, peer-to-peer services, and proximity-based server redirection. Unfortunately, current RTT measurement tools report delay without any further insight into path condition. Therefore, applications usually estimate minimum RTT by sending a large number of probes to gain confidence in the measured RTT. Nevertheless, a large number of probes does not directly translate to better confidence. Usually, minimum RTT can be measured using few probes. Based on observations of path RTT presented by Z. Wang et al. (see Proc. Passive & Active Measurement Workshop - PAM'03, 2003), we develop a set of techniques, not only to measure minimum path RTT, but also to associate it with a confidence level that reveals the condition on the path during measurement. Our tool, called RTTometer, is able to provide a confidence measure associated with path RTT. In addition, given a required confidence level, RTTometer dynamically adjusts the number of probes based on a path's condition. We describe our techniques implemented in RTTometer and present our preliminary experiences using RTTometer to estimate RTT on various representative paths of the Internet.",2003,0, 2326,Reducing optical crosstalk in affordable systems of virtual environment,"We have implemented a scheme for elimination of depolarization artefacts in passive stereo-projection systems. These problems appear due to imperfections of stereo-projection equipment: depolarization of the light on reflection from the screen or on passage through the screen; non-ideal polarizing filters; and mixing of left- and right-eye images upon a change in the relative orientation of the filters. These effects lead to strong violations in stereo-perception (known as ""ghosts""). They can be eliminated by software methods, using linear filtering of the image before its projection to the screen, based on the formulas L' = L - αR, R' = R - αL, where L and R are the images destined for the left and right eye respectively, and α is a constant defining the intensity of the ghosts. This recipe, which in theory leads to exact compensation of the ghosts, in practice has certain limitations, allowing not complete elimination but only strong suppression of the ghosts, and only with very careful calibration of the system. We have implemented this algorithm in the VE system Avango by means of texture mappings. All necessary operations are performed by the graphics board, thus providing the real-time rendering rate.
The described method considerably improves stability of stereo-perception, making high-quality performances of virtual environment possible on affordable equipment.",2003,0, 2327,Exploring the relationship between experience and group performance in software review,"The aim is to examine the important relationships between experience, task training and software review performance. One hundred and ninety-two volunteer university students were randomly assigned into 48 four-member groups. Subjects were required to detect defects from a design document. The main findings include (1) role experience has a positive effect on software review performance; (2) working experience in the software industry has a positive effect on software review performance; (3) task training has no significant effect on software review performance; (4) role experience has no significant effect on task training; (5) working experience in the software industry has a significant effect on task training.",2003,0, 2328,An extension of the behavioral theory of group performance in software development technical reviews,"In the original theory of group performance in software development technical reviews there was no consideration given to the influence of the inspection process, inspection performance, or the inspected artifact, on the structure of the theoretical model. We present an extended theoretical model, along with discussion and justification for components of the new model. These extensions include consideration of the software product quality attributes, the prescriptive processes now developed for reviews, and a more detailed consideration of the technical and nontechnical characteristics of the inspectors. We present both the structure of the model and the nature of the interactions between elements in the model. Consideration is also given to the opportunity for increased formalism in the technical review process. We show that opportunity exists to both improve industrial practice and extend research-based knowledge of the technical review process and context.",2003,0, 2329,Estimating computation times in data intensive e-services,"A priori estimation of quality of service (QoS) levels is a significant issue in e-services since service level agreements (SLAs) need to specify and adhere to such estimates. Response time is an important metric for data intensive e-services such as data mining, data analysis and querying/information retrieval from large databases where the focus is on the time taken to present results to clients. A key component of response time in such data intensive services is the time taken to perform the computation, namely, the time taken to perform either data mining, analysis or retrieval. In this paper, we present an approach for accurately estimating the computation times of data intensive e-services.",2003,0, 2330,A model for battery lifetime analysis for organizing applications on a pocket computer,"A battery-powered portable electronic system shuts down once the battery is discharged; therefore, it is important to take the battery behavior into account. A system designer needs an adequate high-level battery model to make battery-aware decisions targeting the maximization of the system's online lifetime. We propose such a model that allows a designer to analytically predict the battery time-to-failure for a given load. Our model also allows for a tradeoff between the accuracy and the amount of computation performed. 
The quality of the proposed model is evaluated using typical pocket computer applications and a detailed low-level simulation of a lithium-ion electrochemical cell. In addition, we verify the proposed model against actual measurements taken on a real lithium-ion battery.",2003,0, 2331,Multi-million gate FPGA physical design challenges,"The recent past has seen a tremendous increase in the size of design circuits that can be implemented in a single FPGA. These large design sizes significantly impact cycle time due to design automation software runtimes and an increased number of performance-based iterations. New FPGA physical design approaches need to be utilized to alleviate some of these problems. Hierarchical approaches to divide and conquer the design, early estimation tools for design exploration, and physical optimizations are some of the key methodologies that have to be introduced into FPGA physical design tools. This paper will investigate the loss/benefit in quality of results due to hierarchical approaches and compare and contrast some of the design automation problem formulations and solutions needed for FPGAs versus known standard cell ASIC approaches.",2003,0, 2332,PER-prediction for PHY mode selection in OFDM communication systems,"Modern wireless communication standards like HiperLAN/2 or IEEE802.11a provide a high degree of flexibility on the physical layer (PHY) allowing the data link control (DLC) layer to choose transmission parameters with respect to the currently observed link quality. This possibility of so-called link adaptation (LA) is a key element for meeting quality of service (QoS) requirements and optimizing system performance. The decisions made by the LA algorithm are typically based on a prediction of the packet error rate (PER) implied by a certain transmit parameter setting. With most existing LA schemes, this prediction is based on the observed signal-to-noise ratio (SNR) or carrier-to-interference ratio (CIR), respectively. For frequency selective channels, however, the SNR alone does not adequately describe the channel quality. It is shown in this paper that LA schemes based on an accurate description of the momentary channel status can outperform conventional schemes with respect to throughput and reliability. Based on a novel ""indicator"" concept, schemes for obtaining an accurate prediction of the PER by evaluation of the channel transfer function are presented and evaluated by software simulation.",2003,0, 2333,An adaptive bandwidth reservation scheme in multimedia wireless networks,"Next-generation wireless networks aim to provide quality of service (QoS) for multimedia applications. In this paper, the system supports two QoS criteria, i.e., the system should keep the handoff dropping probability always less than a predefined QoS bound, while maintaining the relative priorities of different traffic classes in terms of blocking probability. To achieve this goal, a dynamic multiple-threshold bandwidth reservation scheme is proposed, which is capable of granting differential priorities to different traffic classes and to new and handoff traffic for each class by dynamically adjusting bandwidth reservation thresholds. Moreover, in times of network congestion, a preventive measure is taken by throttling the acceptance of new connections.
Another contribution of this paper is to generalize the concept of relative priority, hence giving the network operator more flexibility to adjust admission control policy by incorporating some dynamic factors such as offered load. Elaborate simulations are conducted to verify the performance of the scheme.",2003,0, 2334,Unbounded system model allows robust communication infrastructure for power quality measurement and control,"A robust information infrastructure is required to collect power quality measurements and to execute corrective actions. It is based on a software architecture, designed at middleware level, that makes use of Internet protocols for communication over different media. While the middleware detects the anomalies in the communication and computation system and reacts appropriately, the application functionality is maintained through system reconfiguration or graceful degradation. Such anomalies may come from dynamic changes in the topology of the underlying communication system, or the enabling/disabling of processing nodes on the network. The added value of this approach comes from the flexibility to deal with a dynamic environment based on an unbounded system model. The paper illustrates this approach in a power quality measurement and control system with compensation based on active filters.",2003,0, 2335,Measurement-based admission control in UMTS multiple cell case,"In this paper, we develop an efficient call admission control (CAC) algorithm for UMTS systems. We first introduce the expressions that we developed for the signal-to-interference ratio (SIR) for both uplink and downlink, to obtain a novel CAC algorithm that takes into account, in addition to SIR constraints, the effects of mobility, coverage as well as the wired capacity in the UMTS terrestrial radio access network (UTRAN), for the uplink, and the maximal transmission power of the base station, for the downlink. As for its implementation, we investigate the measurement-based approach as a means to predict future call arrivals, both handoff and new, and thus manage different priority levels. Compared to classical CAC algorithms, our CAC mechanism achieves better performance in terms of outage probability and QoS management.",2003,0, 2336,Addressing workload variability in architectural simulations,"The inherent variability of multithreaded commercial workloads can lead to incorrect results in architectural simulation studies. Although most architectural simulation studies ignore space variability's effects, our results demonstrate that space variability has serious implications for architectural simulation studies using multithreaded workloads. The standard solution - running long enough - does not easily apply to simulation because of its enormous slowdown. To address this problem, we propose a simulation methodology combining multiple simulations with standard statistical techniques, such as confidence intervals and hypothesis testing. This methodology greatly decreases the probability of drawing incorrect conclusions, and permits reasonable simulation times given sufficient simulation hosts.",2003,0, 2337,"Verification, validation, and certification of modeling and simulation applications","Certifying that a large-scale complex modeling and simulation (M&S) application can be used for a set of specific purposes is an onerous task, which involves complex evaluation processes throughout the entire M&S development life cycle.
The evaluation processes consist of verification and validation activities, quality assurance, assessment of qualitative and quantitative elements, assessments by subject matter experts, and integration of disparate measurements and assessments. Planning, managing, and conducting the evaluation processes require a disciplined life-cycle approach and should not be performed in an ad hoc manner. The purpose of this tutorial paper is to present structured evaluation processes throughout the entire M&S development life cycle. Engineers, analysts, and managers can execute the evaluation processes presented herein to be able to formulate a certification decision for a large-scale complex M&S application.",2003,0, 2338,Faults in grids: why are they so bad and what can be done about it?,"Computational grids have the potential to become the main execution platform for high performance and distributed applications. However, such systems are extremely complex and prone to failures. We present a survey of the grid community in which several people shared their actual experience regarding fault treatment. The survey reveals that, nowadays, users have to be highly involved in diagnosing failures, that most failures are due to configuration problems (a hint of the area's immaturity), and that solutions for dealing with failures are mainly application-dependent. Going further, we identify two main reasons for this state of affairs. First, grid components that provide high-level abstractions when working expose all the gory details when broken. Since there are no appropriate mechanisms to deal with the complexity exposed (configuration, middleware, hardware and software issues), users need to be deeply involved in the diagnosis and correction of failures. To address this problem, one needs a way to coordinate different support teams working at the grid's different levels of abstraction. Second, fault tolerance schemes today implemented on grids tolerate only crash failures. Since grids are prone to more complex failures, such as those caused by heisenbugs, one needs to tolerate tougher failures. Our hope is that the very heterogeneity that makes a grid a complex environment can help in the creation of diverse software replicas, a strategy that can tolerate more complex failures.",2003,0, 2339,Investigation of interfaces with analytical tools,"This paper focuses on advancements in three areas of analyzing interfaces, namely, acoustic microscopy for detecting damage to closely spaced interfaces, thermal imaging to detect damage and degradation of thermal interface materials and laser spallation, a relatively new concept to understand the strength of interfaces. Acoustic microscopy has been used widely in the semiconductor assembly and package area to detect delamination, cracks and voids in the package, but the resolution in the axial direction has always been a limitation of the technique. Recent advancements in acoustic waveform analysis have now allowed for detection and resolution of closely spaced interfaces such as layers within the die. Thermal imaging using infrared (IR) thermography has long been used for detection of hot spots in the die or package. With recent advancements in very high-speed IR cameras, improved pixel resolution, and sophisticated software programming, the kinetics of heat flow can now be imaged and analyzed to reveal damage or degradation of interfaces that are critical to heat transfer.
The technique has been demonstrated to be useful for understanding defects and degradation of thermal interface materials used to conduct heat away from the device. Laser spallation is a method that uses a short duration laser pulse to cause fracture at the weakest interface and has the ability to measure the adhesion strength of the interface. The advantage of this technique is that it can be used for fully processed die or wafers and even on packaged devices. The technique has been used to understand interfaces in devices with copper metallization and low-k dielectrics.",2003,0, 2340,Risk responsibility for supply in Latin America - the Argentinean case,"In the deregulation of electricity sectors in Latin America, two approaches have been used to allocate the responsibility for the electricity supply: (1) The government keeps the final responsibility for the supply. Suppliers (distribution companies or traders) do not have control over rationing when it becomes necessary to curtail load. In such a case they cannot manage the risks associated with the supply. This is the case in the markets of Brazil and Colombia. (2) The responsibility is fully transferred to suppliers. The regulatory entity supervises the quality of the supply and different types of penalties are applied when load is not supplied. This approach is currently used in Argentina, Chile and Peru. In Argentina, the bilateral contracts, which are normally financial, become physical when a rationing event happens. This approach permits suppliers to have great control over risks. Both approaches have defenders and detractors. In some cases, the same event leads to completely opposite interpretations and diagnoses. For instance, the crisis of supply in Brazil during 2002 was interpreted as a failure of the market by the defenders of final state responsibility, or attributed to excessive regulation and government interference by the advocates of decentralized schemes. This presentation will analyze the performance of both approaches in Latin America, assessing the diverse types of arguments used to criticize or defend each of these approaches, and finally presenting some conclusions on the current situation and future of the responsibility for supply and the associated risks.",2003,0, 2341,Low-cost power quality monitor based on a PC,"This paper presents the development of a low-cost digital system useful for power quality monitoring and power management. Voltage and current measurements are made through Hall-effect sensors connected to a standard data acquisition board, and the applications were programmed in LabVIEW™, running on Windows on a regular PC. The system acquires data continuously, and stores in files the events that result from anomalies detected in the monitored power system. Several parameters related to power quality and power management can be analyzed through 6 different applications, named: ""scope and THD"", ""strip chart"", ""wave shape"", ""sags and swells"", ""classical values"" and ""p-q theory"". The acquired information can be visualized in tables and/or in charts. It is also possible to generate reports in HTML format. These reports can be sent directly to a printer, embedded in other software applications, or accessed through the Internet, using a Web browser.
The potential of the developed system is shown, namely the advantages of virtual instrumentation regarding flexibility, cost and performance, in the scope of power quality monitoring and power management.",2003,0, 2342,Modelling and analysing fault propagation in safety-related systems,"A formal specification for analysing and implementing multiple fault diagnosis software is proposed in this paper. The specification computes all potential fault sources that correspond to a set of triggered alarms for a safety-related system, or part of a system. The detection of faults occurring in a safety-related system is a fundamental function that needs to be addressed efficiently. Safety monitors for fault diagnosis have been extensively studied in areas such as aircraft systems and chemical industries. With the introduction of intelligent sensors, diagnosis results are made available to monitoring systems and operators. For complex systems composed of thousands of components and sensors, the diagnosis of multiple faults and the computational burden of processing test results are substantial. This paper addresses the multiple fault diagnosis problem for zero-time propagation using a fault propagation graph. Components represented as nodes in a fault propagation graph are assigned alarms. When faults occur and are propagated, some of these alarms are triggered. The allocation of alarms to nodes is based on a severity analysis performed using a form of failure mode and effect analysis on components in the system.",2003,0, 2343,Generic signature generation tool for diagnosis and parametric estimation of multi-variable dynamical nonlinear systems,"In this paper, a graphical signature generation tool is proposed for diagnosis and parametric estimation for nonlinear systems. This tool is based on the definition of a two-dimensional graphical signature obtained by using the history of the output measurements on some moving-horizon window. The underlying idea is that if each parametric variation deforms this signature in a different way, the latter becomes a remarkable tool for diagnosis, in particular in the case of diagnosis by a human operator. The whole scheme is illustrated by a simple example.",2003,0, 2344,Stability and performance of the stochastic fault tolerant control systems,"In this paper, the stability and performance of the fault tolerant control system (FTCS) are studied. The analysis is based on a stochastic framework of integrated FTCS, in which the system component failure and the fault detection and isolation (FDI) scheme are characterized by two Markovian parameters. In addition, the model uncertainties and noise/disturbance are treated in the same framework. The sufficient conditions for stochastic stability and the system performance using a stochastic integral quadratic constraint are developed. A simulation study on an example system is performed with illustrative results obtained.",2003,0, 2345,A high-level pipelined FPGA based DCT for video coding applications,"Video coding functions, such as discrete cosine transform (DCT), variable length coding and motion estimation, require a significant amount of processing power to implement in software. For high quality video or in applications where a powerful processor is not available, a hardware implementation is the solution. We propose a flexible field programmable gate array (FPGA) model, based on a high-level pipelined processor core, that can improve the performance of video coding.
Furthermore, distributed arithmetic and exploitation of parallelism and bit-level pipelining are used to produce a DCT implementation on a single FPGA.",2003,0, 2346,First results of real-time time and frequency transfer using GPS code and carrier phase observations,"We have used code and carrier phase data from the global positioning system (GPS) satellites to estimate time differences between atomic clocks in near (<10 s) real-time. For some sites we have used data transmitted via Internet connections and TCP/IP, while for other sites data were collected in deferred time, but processed by a Kalman filter-based software as if they were available in real time. Satellite orbit and clock data of different quality have been used. The real-time estimates of time differences of the station clocks have been compared to those estimated from regular postprocessing using accurate satellite orbits and clocks from the international GPS service (IGS). First results show that the standard deviation of the differences between the real-time carrier phase-based and the postprocessing estimates of the clock time differences can be less than 100 ps for baselines of about 1000 km.",2003,0, 2347,RTOS scheduling in transaction level models,"Raising the level of abstraction in system design promises to enable faster exploration of the design space at early stages. While scheduling decision for embedded software has great impact on system performance, it's much desired that the designer can select the right scheduling algorithm at high abstraction levels so as to save him from the error-prone and time consuming task of tuning code delays or task priority assignments at the final stage of system design. In this paper we tackle this problem by introducing a RTOS model and an approach to refine any unscheduled transaction level model (TLM) to a TLM with RTOS scheduling support. The refinement process provides a useful tool to the system designer to quickly evaluate different dynamic scheduling algorithms and make the optimal choice at an early stage of system design.",2003,0, 2348,Early estimation of the size of VHDL projects,"The analysis of the amount of human resources required to complete a project is felt as a critical issue in any company of the electronics industry. In particular, early estimation of the effort involved in a development process is a key requirement for any cost-driven system-level design decision. In this paper, we present a methodology to predict the final size of a VHDL project on the basis of a high-level description, obtaining a significant indication about the development effort. The methodology is the composition of a number of specialized models, tailored to estimate the size of specific component types. Models were trained and tested on two disjoint and large sets of real VHDL projects. Quality-of-result indicators show that the methodology is both accurate and robust.",2003,0, 2349,Programmable low-cost ultrasound machine,"Most commercial ultrasound machines do not allow the researcher to acquire internal data, modify the signal processing algorithms or test a new clinical application. A software-based ultrasound machine could offer the needed flexibility to modify the ultrasound processing algorithms and evaluate new ultrasound applications for clinical efficacy. We have designed an ultrasound machine in which all of the back-end processing is performed in software rather than in hardware. 
This programmable machine supports all the conventional ultrasound modes (e.g., B mode, color flow, M mode and spectral Doppler) without any performance degradation. The frame rates and image quality of this programmable machine were evaluated and found to be comparable to commercial ultrasound machines. Due to its programmability, we were able to improve the accuracy of flow velocity estimation by integrating a new clutter rejection filtering method into the system. The machine also supports advanced features such as 3D volume rendering in software. Our results demonstrate that a software-based ultrasound machine can provide the same functionality for clinical use as conventional ultrasound systems while offering increased flexibility at low cost.",2003,0, 2350,Towards statistical inferences of successful prostate surgery,"Prostate cancer continues to be the leading cancer in the United States male population. The options for local therapy have proliferated and include various forms of radiation delivery, cryo-destruction, and novel forms of energy delivery as in high-intensity focused ultrasound. Surgical removal, however, remains the standard procedure for cure. Currently there are few objective parameters that can be used to compare the efficiency of each form of surgical removal. As surgeons apply these different surgical approaches, a quality assessment would be most useful, not only with regard to overall comparison of one approach vs. another but also for surgeons' evaluation of their personal surgical performance as it relates to a standard. To this end, we discuss the development of a process employing image reconstruction and analysis techniques to assess the volume and extent of extracapsular soft tissue removed with the prostate. Parameters such as the percentage of the capsule covered by soft tissue and, where present, the average depth of soft tissue coverage are assessed. A final goal is to develop software for the purpose of a quality assurance assessment for pathologists and surgeons to evaluate the adequacy/appropriateness of each surgical procedure; laparoscopic versus open perineal or retropubic prostatectomy.",2003,0, 2351,Effect of feature extraction and feature selection on expression data from epithelial ovarian cancer,"Classifying the gene expression levels of normal and cancerous cells and identifying the genes that contribute most to this distinction offer an alternative means of diagnosis. We have investigated the effect of feature extraction and feature selection on clustering of the expression data on two different data sets for ovarian cancer. One data set consisted of 2176 transcripts from 30 samples, nine from normal ovarian epithelial cells and 21 from cancerous ones. The other data set had 7129 transcripts coming from 27 tumor and four normal ovarian tissues. Hierarchical clustering algorithms employing complete-link, average-link and Ward's method were implemented for comparative evaluation. Principal component analysis was applied for feature extraction and resulted in 100% segregation. Feature selection was performed to identify the most distinguishing genes using CART® software. Selected features were able to cluster the data with 100% success. The results suggest that adoption of feature extraction and selection enhances the quality of clustering of gene expression data for ovarian cancer.
Identification of distinguishing genes is a more complex problem that requires incorporating pathway knowledge with statistical and machine learning methods.",2003,0, 2352,Introduction of BSS into VR-based test and simulation-with application in NDE,"The VR-based test and simulation system (VTSS) is a concept put forward in the authors' previous works. In a VTSS, the test processes are interactively planned, optimized and simulated in a virtual test environment generated by computer, aiming at eventually performing tests completely in virtual test environments. To make VTSS intelligent and practical, the technique of blind source separation (BSS) is introduced into VTSS. With ultrasonic non-destructive evaluation (NDE) as the technical background, a prototype of the system is described in this paper. BSS is adopted in VTSS to serve three purposes: defect classification, system modeling and noise reduction. The conclusion is that BSS can play a very important role in VTSS.",2003,0, 2353,The WindSat calibration/validation plan and early results,"Summary form only given. The WindSat radiometer was launched as part of the Coriolis mission in January 2003. WindSat was designed to provide fully polarimetric passive microwave measurements globally, and in particular over the oceans for ocean surface wind vector retrieval. Due to the prohibitive risk and cost associated with an end-to-end pre-launch absolute radiometer calibration (i.e. from the energy incident on the main reflector through the receiver digitized output), it was important to develop an on-orbit calibration plan that verifies instrument conformance to specification and, if necessary, derives suitable calibration coefficients or sensor algorithms that bring the instrument into specification. This is especially true for the WindSat Cal/Val, in view of the fact that it is the first fully polarimetric spaceborne radiometer. This paper will provide an overview of the WindSat Cal/Val Program. The Cal/Val plan, patterned after the very successful SSM/I Cal/Val, is designed to ensure the highest quality data products and maximize the return on the available Cal/Val resources. The Cal/Val is progressive and will take place in well-defined stages: Early Orbit Evaluation, Initial Assessment, Detailed Calibration, and EDR Validation. The approach allows us to focus efforts more quickly on issues as they are identified and ensures a logical progression of activities. At each level of the Cal/Val the examination of the instrument and algorithm errors becomes more detailed. Along with the WindSat Cal/Val structure overview, we will present the independent data sources to be used and the analysis techniques to be employed. During the Early Orbit phase, special instrument operating modes have been developed to monitor the sensor health from the time of instrument turn-on to a short time after it reaches stable operating conditions. This mode is uniquely important for the WindSat since it affords the only opportunity to examine all of the data in the full 360 degree scan and to directly assess potential field-of-view intrusion effects from the spacecraft or other sensors on-board the satellite and evaluate the spin control system by observing the statistical distribution of data as the horns scan through the calibration loads. The next phase of the WindSat Cal/Val consists of an initial assessment of all sensor and environmental data products generated by the Ground Processing Software.
The primary focus of the assessment is to conduct an end-to-end review of the output files (TDR, SDR and EDR) to verify the following: the on-line GPS modules function properly, the instrument calibration (including Antenna Pattern Correction and Stokes Coupling) does not contain large errors, the EDR algorithms provide reasonable products, and there are no major geo-location errors. This paper will provide a summary of the results of the Early Orbit and Initial Assessment phases of the WindSat Cal/Val Program.",2003,0, 2354,Plane-based calibration for multibeam sonar mounting angles,"Multibeam sonar systems are much more efficient than conventional single-beam echo sounders for seafloor-mapping in hydrographic surveying. On the other hand, the operation of multibeam sonar systems needs to integrate more auxiliary sensor units, because the world coordinates of each footprint are calculated based on the geometry of the sonar head relative to the GPS of the ship. Therefore, the resulting survey quality highly depends on the accuracy of the estimated mounting configuration of the sonar head, and other sensor units. Basically, the configuration parameters include the three Euler angles, three linear translations and the asynchronous latency of signals between the transducer and other sensors. These parameters cannot be measured directly. They can only be estimated from a post-processing of the bathymetry data called the patch test. Generally, the patch test requires the survey ship to follow several designated paths which are parallel, reciprocal or perpendicular to each other. Furthermore, the choice of seabed slope is also an important factor for the quality of the result. The contour plots of the seabed for the different paths are used to estimate the mounting configuration of the sonar head. In this work, we propose best-fitting a small flat patch to represent the seabed right beneath a segment of the path. A pair of patches from the two adjacent segments of reciprocal or perpendicular paths is selected for comparison. The difference between the two patches gives us an idea of what the mounting parameters, i.e. the rolling, pitching and yawing angles, might be. If the parameters are accurately estimated, the two patches should be coplanar. We design several semi-positive definite functions and feedback control algorithms to steer the mounting angles to search for the solutions. One more advantage of this approach is that the variation of each mounting angle as the survey proceeds can be monitored. Our preliminary study shows that our results are only 1% different from other commercial software. Further comparisons will be reported in the final paper.",2003,0, 2355,Performance of common and dedicated traffic channels for point-to-multipoint transmissions in W-CDMA,"We present a performance evaluation of two strategies for transmitting packet data to a group of multicast users over the W-CDMA air interface. In the first scheme, a single common or broadcast channel is used to transmit multicast data over the entire cell, whereas in the second scheme multiple dedicated channels are employed for transmitting to the group. We evaluate the performance of both schemes in terms of the number of users that can be supported by either scheme at a given quality of service defined in terms of a target outage probability.",2003,0, 2356,Predicting maintainability with object-oriented metrics -an empirical comparison,"
First Page of the Article
",2003,0, 2357,Particle swarm optimization for worst case tolerance design,"Worst case tolerance analysis is a major subtask in modern industrial electronics. Recently, the demands on industrial products like production costs or probability of failure have become more and more important in order to be competitive in business. The main key to improve the quality of electronic products is the challenge to reduce the effects of parameter variations, which can be done by robust parameter design. This paper addresses the applicability of particle swarm optimization combined with pattern search for worst case circuit design. The main advantages of this approach are the efficiency and robustness of the particle swarm optimization strategy. The method is also well suited for higher order problems, i.e. for problems with a high number of design parameters, because of the linear complexity of the pattern search algorithm.",2003,0, 2358,H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms,"Over the past years, several endmember extraction algorithms have been developed for spectral mixture analysis of hyperspectral data. Due to a lack of quantitative approaches to substantiate new algorithms, available methods have not been rigorously compared using a unified scheme. In this paper, we describe H-COMP, an IDL (Interactive Data Language)-based software toolkit for visualization and interactive analysis of results provided by endmember selection methods. The suitability of using H-COMP for assessment and comparison of endmember extraction algorithms is demonstrated in this work by a comparative analysis of three standard algorithms: Pixel Purity Index (PPI), N-FINDR, and Automated Morphological Endmember Extraction (AMEE). Simulated and real hyperspectral datasets, collected by the NASA/JPL Airborne Visible-Infrared Imaging Spectrometer (AVIRIS), are used to carry out a comparative effort, focused on the definition of reliable endmember quality metrics.",2003,0, 2359,Higher education level laboratory for real time pollution monitoring,"The present paper presents a higher education level laboratory for real time pollution monitoring used in environmental courses of the engineering universities. The main objective of this laboratory is to simulate usual applications in the field of real time measuring and monitoring of air and water pollutants. The laboratory was designed and realized by ECOSEN Ltd. Company. Real time simulators were obtained, using modules with solid state (SnO2) gas sensors. A separate module is for pH measurements in water. For data acquisition a microcontroller or a parallel interface were used. An IBM compatible PC is used for data processing and management. Dedicated software was developed in Turbo Pascal 6.0 and Visual Basic 6.0 for data acquisition, processing and management. This laboratory is included, starting from year 2000, in the university course ""Techniques of the environmental quality measurement"" held at the Environmental Engineering Faculty of the Ecological University in Bucharest. An earlier simpler version is now used by University of Civil Engineering in Bucharest.",2003,0, 2360,Reduced complexity motion estimation techniques: review and comparative study,"Because of its high computational complexity, the process of motion estimation has become a bottleneck problem in many video applications, such as mobile video terminals and software-based video codecs. 
This has motivated the development of a number of fast motion estimation techniques. This paper briefly reviews reduced complexity motion estimation techniques and classifies them into five categories. It then presents the results of a comparative study across all categories. The aim of this study is to provide the reader with a feel of the relative performance of the categories, with particular attention to the important trade-off between computational complexity and prediction quality.",2003,0, 2361,A knowledge based tool to support industrial customers in PQ evaluations,"The work illustrates a knowledge-based software tool, aimed to support industrial customers in evaluating power quality (PQ) disturbances impact on their processes. For this evaluation the customer is provided with data and information, collected from both national and international experiences in PQ field, that allows to estimate: a) the susceptibility of the industrial process to the several typical PQ events; b) the typical costs associated to common PQ disturbances. The tool implements several procedures, each dedicated to focused evaluations of either susceptibility (SENS) or cost (COST) of PQ events.",2003,0, 2362,Empirical case studies of combining software quality classification models,"The increased reliance on computer systems in the modern world has created a need for engineering reliability control of computer systems to the highest possible standards. This is especially crucial in high-assurance and mission critical systems. Software quality classification models are one of the important tools in achieving high reliability. They can be used to calibrate software metrics-based models to detect fault-prone software modules. Timely use of such models can greatly aid in detecting faults early in the life cycle of the software product. Individual classifiers (models) may be improved by using the combined decision from multiple classifiers. Several algorithms implement this concept and have been investigated. These combined learners provide the software quality modeling community with accurate, robust, and goal oriented models. This paper presents a comprehensive comparative evaluation of three combined learners, Bagging, Boosting, and Logit-Boost. We evaluated these methods with a strong and a weak learner, i.e., C4.5 and Decision Stumps, respectively. Two large-scale case studies of industrial software systems are used in our empirical investigations.",2003,0, 2363,A neuro-fuzzy model for software cost estimation,"A novel neuro-fuzzy constructive cost model (COCOMO) for software estimation is proposed. The model carries some of the desirable features of the neuro-fuzzy approach, such as learning ability and good interpretability, while maintaining the merits of the COCOMO model. Unlike the standard neural network approach, this model is easily validated by experts and capable of generalization. In addition, it allows inputs to be continuous-rating values and linguistic values, therefore avoiding the problem of similar projects having different estimated costs. Also presented in this paper is a detailed learning algorithm. The validation, using industry project data, shows that the model greatly improves the estimation accuracy in comparison with the well-known COCOMO model.",2003,0, 2364,Character string predicate based automatic software test data generation,"A character string is an important element in programming. 
A problem that needs further research is how to automatically generate software test data for character strings. This paper presents a novel approach for automatic test data generation of program paths including character string predicates, and the effectiveness of this approach is examined on a number of programs. Each element of input variable of a character string is determined by using the gradient descent technique to perform function minimization so that the test data of character string can be dynamically generated. The experimental results illustrate that this approach is effective.",2003,0, 2365,Experiences in the inspection process characterization techniques,"Implementation of a disciplined engineering approach to software development requires the existence of an adequate supporting measurement & analysis system. Due to demands for increased efficiency and effectiveness of software processes, measurement models need to be created to characterize and describe the various processes usefully. The data derived from these models should then be analyzed quantitatively to assess the effects of new techniques and methodologies. In recent times, statistical and process thinking principles have led software organizations to appreciate the value of applying statistical process control techniques. As part of the journey towards SW-CMM® Level 5 at the Motorola Malaysia Software Center, which the center achieved in October 2001, considerable effort was spent on exploring SPC techniques to establish process control while focusing on the quantitative process management KPA of the SW-CMM®. This paper discusses the evolutionary learning experiences, results and lessons learnt by the center in establishing appropriate analysis techniques using statistical and other derivative techniques. The paper discusses the history of analysis techniques that were explored with specific focus on characterizing the inspection process. Future plans to enhance existing techniques and to broaden the scope to cover analysis of other software processes are also discussed.",2003,0, 2366,The CMS high level trigger,"The High Level Trigger (HLT) system of the CMS experiment will consist of a series of reconstruction and selection algorithms designed to reduce the Level-1 trigger accept rate of 100 kHz to 100 Hz forwarded to permanent storage. The HLT operates on events assembled by an event builder collecting detector data from the CMS front-end system at full granularity and resolution. The HLT algorithms will run on a farm of commodity PCs, the filter farm, with a total expected computational power of 106 SpecInt95. The farm software, responsible for collecting, analyzing, and storing event data, consists of components from the data acquisition and the offline reconstruction domains, extended with the necessary glue components and implementation of interfaces between them. The farm is operated and monitored by the DAQ control system and must provide near-real-time feedback on the performance of the detector and the physics quality of data. In this paper, the architecture of the HLT farm is described, and the design of various software components reviewed. The status of software development is presented, with a focus on the integration issues. The physics and CPU performance of current reconstruction and selection algorithm prototypes is summarized in relation with projected parameters of the farm and taking into account the requirements of the CMS physics program. 
Results from a prototype test stand and plans for the deployment of the final system are finally discussed.",2003,0, 2367,Real-time embedded system support for the BTeV level 1 Muon Trigger,"The Level 1 Muon Trigger subsystem for BTeV will be implemented using the same architectural building blocks as the BTeV Level 1 Pixel Trigger: pipelined field programmable gate arrays feeding a farm of dedicated processing elements. The muon trigger algorithm identifies candidate tracks, and is sensitive to the muon charge (sign); candidate dimuon events are identified by complementary charge track-pairs. To ensure that the trigger is operating effectively, the trigger development team is actively collaborating in an independent multi-university research program for reliable, self-aware, fault adaptive behavior in real-time embedded systems (RTES). Key elements of the architecture, algorithm, performance, and engineered reliability are presented.",2003,0, 2368,The effect of counting statistics on the integrity of deconvolved gamma-ray spectra,"Symetrica's spectrum-processing software improves both the spectral resolution and sensitivity of scintillation spectrometers. This paper demonstrates the robustness of this algorithm when applied to spectra having poor statistical quality. The error in the location of the deconvolved 609keV full-energy peak within a 226Ra spectrum was less than 2keV for a peak area containing fewer than 150 counts. For the raw data, this accuracy was only achieved for peak areas greater than 1300 counts. Processing the spectra in this way also improves the accuracy, by a factor of between 2 and 2.6, with which the full-energy peak areas could be measured. This paper has also shown that, by using the processed data, a hidden 137Cs source having a count rate at the level of just 2% that of a 60Co masking source was always detectable in the limited number of spectra recorded.",2003,0, 2369,Simulation and modeling of stator flux estimator for induction motor using artificial neural network technique,"Accurate stator flux estimation for high performance induction motor drives is very important to ensure proper drive operation and stability. Unfortunately, some problems occur when estimating stator flux, especially at zero speed and at low frequency. Hence a simple open loop controller of a pulse width modulation voltage source inverter (PWM-VSI) fed induction motor configuration is presented. By selecting voltage-model-based stator flux estimation, a simple method using the artificial neural network (ANN) technique is proposed to estimate stator flux by means of a feed-forward back-propagation algorithm. In motor drive applications, artificial neural networks have several advantages such as faster execution speed, harmonic ripple immunity and fault tolerance characteristics that will result in a significant improvement in the steady-state performance. Thus, to simulate and model the stator flux estimator, the Matlab/Simulink software package, particularly the power system block set and neural network toolbox, is implemented. A structure of a three-layered artificial neural network has been applied to the proposed stator flux estimator. 
As a result, this technique gives a good improvement in estimating stator flux, in which the estimated stator flux is very similar in terms of magnitude and phase angle when compared to the real stator flux.",2003,0, 2370,Component based development for transient stability power system simulation software,"A component-based development (CBD) approach has become increasingly important in the software industry. Any software that applies CBD will not only save time and cost through reusability of components, but will also have the capability to handle complex problems. Since CBD design is based on object-oriented programming (OOP), components with good quality and reusability can be created, classified and managed for future reuse. The methodology of OOP is based on the real object. The mechanisms of OOP such as encapsulation, inheritance, and polymorphism are the advantages that could be used to define real objects associated with the program. This paper focuses on the application of CBD to power system transient stability simulation (TSS). There are many methods to solve the transient stability problem, but in this paper two methods are applied to solve TSS problems, namely the trapezoidal method and the modified Euler method. The performance of the two approaches, CBD and non-CBD applications of power system transient stability simulation, is assessed through tests carried out using IEEE data test systems.",2003,0, 2371,A survey and measurement-based comparison of bandwidth management techniques,"This article gives a brief tutorial for bandwidth management systems; a survey of the techniques used by eight real-world systems, such as class-based queuing (CBQ), per-flow queuing (PFQ), random early detection (RED), and TCP rate control (TCR); a compact testbed with a set of methodologies to differentiate the employed techniques; and a detailed black-box evaluation. The tutorial describes the needs for the three types of policy rules: class-based bandwidth limitation, session-bandwidth guarantee, and inter/intra-class bandwidth borrowing. The survey portion investigates how the eight chosen commercial/open-source real systems enforce the three policy types. To evaluate the techniques, the designed testbed emulates real-life Internet conditions, such as many simultaneous sessions from different IPs/ports, controllable wide area network (WAN) delay and packet loss rate for each session, and different TCP source implementations. The performance metrics include accuracy of bandwidth management, fairness among sessions, robustness under Internet packet losses and different operating systems, inter/intra-class bandwidth borrowing, and voice over IP (VoIP) quality. The black-box test results demonstrate that (1) only the combination of CBQ+PFQ+TCR can solve the most difficult scenario (multiple sessions competing for the narrow 20kb/s class); (2) the TCR approach may degrade the goodput, fairness, and compatibility even under slight packet loss rates (0.5 percent); (3) without PFQ, TCR and RED have limited ability to isolate the sessions (especially for RED); (4) the G.729 VoIP quality over a 125kb/s access link becomes good only after exercising MSS-clamping to shrink the packet size of the background traffic down to 256 bytes.",2003,0, 2372,Neuronal prediction system of meteorological parameters for quality assurance of the traffic,"This work tries to be an answer to the problem of ice formation on road surfaces and on takeoff and landing runway surfaces. 
As we all know, the unpredicted appearance of ice on roads and in airports is the main cause of the majority of human and material losses in those two places. The neuronal prediction system is intended to be an alternative to the existing forecasting systems. This system will combine up-to-date techniques such as networks, software (Labview, Matlab) and the most modern programming approaches: neuronal networks, neuro-fuzzy systems and statistical signal processing; the result is intended to be a prediction system for ice formation which is very precise and less expensive than the existing systems.",2003,0, 2373,User-level QoS and traffic engineering for 3G wireless 1×EV-DO systems,"Third-generation (3G) wireless systems such as 3G1X, 1×EV-DO, and 1xEV-DV provide support for a variety of high-speed data applications. The success of these services critically relies on the capability to ensure an adequate quality of service (QoS) experience to users at an affordable price. With wireless bandwidth at a premium, traffic engineering and network planning play a vital role in addressing these challenges. We present models and techniques that we have developed for quantifying the QoS perception of 1×EV-DO users generating file transfer protocol (FTP) or Web browsing sessions. We show how user-level QoS measures may be evaluated by means of a Processor-Sharing model that explicitly accounts for the throughput gains from multi-user scheduling. The model provides simple analytical formulas for key performance metrics such as response times, blocking probabilities, and throughput. Analytical models are especially useful for network deployment and in-service tuning purposes due to the intrinsic difficulties associated with simulation-based optimization approaches. © 2003 Lucent Technologies Inc.",2003,0, 2374,How good is your blind spot sampling policy,"Assessing software costs money and better assessment costs exponentially more money. Given finite budgets, assessment resources are typically skewed towards areas that are believed to be mission critical. This leaves blind spots: portions of the system that may contain defects which may be missed. Therefore, in addition to rigorously assessing mission critical areas, a parallel activity should sample the blind spots. This paper assesses defect detectors based on static code measures as a blind spot sampling method. In contrast to previous results, we find that such defect detectors yield results that are stable across many applications. Further, these detectors are inexpensive to use and can be tuned to the specifics of the current business situations.",2004,1, 2375,Towards survivable intrusion detection system,"Intrusion detection systems (IDS) are increasingly a key part of system defense, often operating under a high level of privilege to achieve their purposes. Therefore, the ability of an IDS to withstand attack is important in a production system. In this paper, we address the issue of survivable IDS. We begin by categorizing potential vulnerabilities in a generic IDS and classifying methods used to enhance IDS survivability. We then propose an efficient fault tolerance based Survivable IDS (SIDS) along with a systematic way to transform an original IDS architecture into this survivable architecture. Key components of SIDS include: a dual-functionality forward-ahead (DFFA) structure, backup communication paths, component recycling, system reconfiguration, and an anomaly detector. 
Use of the SIDS transformation should result in an improvement in IDS survivability at low cost.",2004,0, 2376,A cost-benefit stopping criterion for statistical testing,"Determining when to stop a statistical test is an important management decision. Several stopping criteria have been proposed, including criteria based on statistical similarity, the probability that the system has a desired reliability, and the expected cost of remaining faults. This paper proposes a new stopping criterion based on a cost-benefit analysis using the expected reliability of the system (as opposed to an estimate of the remaining faults). The expected reliability is used, along with other factors such as units deployed and expected use, to anticipate the number of failures in the field and the resulting anticipated cost of failures. Reductions in this number generated by increasing the reliability are balanced against the cost of further testing to determine when testing should be stopped.",2004,0, 2377,Time-energy design space exploration for multi-layer memory architectures,"This paper presents an exploration algorithm which examines execution time and energy consumption of a given application, while considering a parameterized memory architecture. The input to our algorithm is an application given as an annotated task graph and a specification of a multi-layer memory architecture. The algorithm produces Pareto trade-off points representing different multi-objective execution options for the whole application. Different metrics are used to estimate parameters for application-level Pareto points obtained by merging all Pareto diagrams of the tasks composing the application. We estimate application execution time although the final scheduling is not yet known. The algorithm makes it possible to trade off the quality of the results and its runtime depending on the used metrics and the number of levels in the hierarchical composition of the tasks' Pareto points. We have evaluated our algorithm on a medical image processing application and randomly generated task graphs. We have shown that our algorithm can explore a huge design space and obtain (near) optimal results in terms of Pareto diagram quality.",2004,0, 2378,Efficient implementations of mobile video computations on domain-specific reconfigurable arrays,"Mobile video processing as defined in standards like MPEG-4 and H.263 contains a number of time-consuming computations that cannot be efficiently executed on current hardware architectures. The authors recently introduced a reconfigurable SoC platform that permits a low-power, high-throughput and flexible implementation of the motion estimation and DCT algorithms. The computations are done using domain-specific reconfigurable arrays that have demonstrated up to 75% reduction in power consumption when compared to a generic FPGA architecture, which makes them suitable for portable devices. This paper presents and compares different configurations of the arrays for efficiently implementing DCT and motion estimation algorithms. 
A number of algorithms are mapped into the various reconfigurable fabrics demonstrating the flexibility of the new reconfigurable SoC architecture and its ability to support a number of implementations having different performance characteristics.",2004,0, 2379,A generic RTOS model for real-time systems simulation with systemC,"The main difficulties in designing real-time systems are related to time constraints: if an action is performed too late, it is considered as a fault (with different levels of criticism). Designers need to use a solution that fully supports timing constraints and enables them to simulate early on the design process a real-time system. One of the main difficulties in designing HW/SW systems resides in studying the effect of serializing tasks on processors running a real-time operating system (RTOS). In this paper, we present a generic model of RTOS based on systemC. It allows assessing real-time performances and the influence of scheduling according to RTOS properties such as scheduling policy, context-switch time and scheduling latency.",2004,0, 2380,Empirical analysis of safety-critical anomalies during operations,"Analysis of anomalies that occur during operations is an important means of improving the quality of current and future software. Although the benefits of anomaly analysis of operational software are widely recognized, there has been relatively little research on anomaly analysis of safety-critical systems. In particular, patterns of software anomaly data for operational, safety-critical systems are not well understood. We present the results of a pilot study using orthogonal defect classification (ODC) to analyze nearly two hundred such anomalies on seven spacecraft systems. These data show several unexpected classification patterns such as the causal role of difficulties accessing or delivering data, of hardware degradation, and of rare events. The anomalies often revealed latent software requirements that were essential for robust, correct operation of the system. The anomalies also caused changes to documentation and to operational procedures to prevent the same anomalous situations from recurring. Feedback from operational anomaly reports helped measure the accuracy of assumptions about operational profiles, identified unexpected dependencies among embedded software and their systems and environment, and indicated needed improvements to the software, the development process, and the operational procedures. The results indicate that, for long-lived, critical systems, analysis of the most severe anomalies can be a useful mechanism both for maintaining safer, deployed systems and for building safer, similar systems in the future.",2004,0, 2381,Speed-accuracy tradeoff during performance of a tracking task without visual feedback,"To help people with visual impairment, especially people with severely impaired vision, access graphic information on a computer screen, we have carried out fundamental research on the effect of increasing the number of detection fields. In general, application of the parallelism concept enables information to be accessed more precisely and easily when the number of sensors is high. We have developed a ""Braille Box"" by modifying Braille cells to form a tactile stimulator array which is compatible with the fingertip. Each pin can be controlled independently so that we can change the size and type of array in order to study the tactile perception of both simple and complex graphical forms. 
Our results show that by applying the parallelism concept to the detection field, people with visual impairment can increase the speed of exploration of geometric forms without decreasing the level of accuracy: thus, avoiding a speed-accuracy tradeoff. Further experiments need to be done with this Braille Box in order to improve the device and help people with visual impairment access graphic information.",2004,0, 2382,The Effects of an ARMOR-based SIFT environment on the performance and dependability of user applications,"Few, distributed software-implemented fault tolerance (SIFT) environments have been experimentally evaluated using substantial applications to show that they protect both themselves and the applications from errors. We present an experimental evaluation of a SIFT environment used to oversee spaceborne applications as part of the Remote Exploration and Experimentation (REE) program at the Jet Propulsion Laboratory. The SIFT environment is built around a set of self-checking ARMOR processes running on different machines that provide error detection and recovery services to themselves and to the REE applications. An evaluation methodology is presented in which over 28,000 errors were injected into both the SIFT processes and two representative REE applications. The experiments were split into three groups of error injections, with each group successively stressing the SIFT error detection and recovery more than the previous group. The results show that the SIFT environment added negligible overhead to the application's execution time during failure-free runs. Correlated failures affecting a SIFT process and application process are possible, but the division of detection and recovery responsibilities in the SIFT environment allows it to recover from these multiple failure scenarios. Only 28 cases were observed in which either the application failed to start or the SIFT environment failed to recognize that the application had completed. Further investigations showed that assertions within the SIFT processes-coupled with object-based incremental checkpointing-were effective in preventing system failures by protecting dynamic data within the SIFT processes.",2004,0, 2383,Analyzing software measurement data with clustering techniques,"For software quality estimation, software development practitioners typically construct quality-classification or fault prediction models using software metrics and fault data from a previous system release or a similar software project. Engineers then use these models to predict the fault proneness of software modules in development. Software quality estimation using supervised-learning approaches is difficult without software fault measurement data from similar projects or earlier system releases. Cluster analysis with expert input is a viable unsupervised-learning solution for predicting software modules' fault proneness and potential noisy modules. Data analysts and software engineering experts can collaborate more closely to construct and collect more informative software metrics.",2004,0, 2384,EPIC: profiling the propagation and effect of data errors in software,"We present an approach for analyzing the propagation and effect of data errors in modular software enabling the profiling of the vulnerabilities of software to find 1) the modules and signals most likely exposed to propagating errors and 2) the modules and signals which, when subjected to error, tend to cause more damage than others from a systems operation point-of-view. 
We discuss how to use the obtained profiles to identify where dependability structures and mechanisms will likely be the most effective, i.e., how to perform a cost-benefit analysis for dependability. A fault-injection-based method for estimation of the various measures is described and the software of a real embedded control system is profiled to show the type of results obtainable by the analysis framework.",2004,0, 2385,Towards dependable Web services,"Web services are the key technology for implementing distributed enterprise level applications such as B2B and grid computing. An important goal is to provide dependable quality guarantees for client-server interactions. Therefore, service level management (SLM) is gaining more and more significance for clients and providers of Web services. The first step to control service level agreements is a proper instrumentation of the application code in order to monitor the service performance. However, manual instrumentation of Web services is very costly and error-prone and thus not very efficient. Our goal was to develop a systematic and automated, tool-supported approach for Web services instrumentation. We present a dual approach for efficiently instrumenting Web services. It consists of instrumenting the frontend Web services platform as well as the backend services. Although the instrumentation of the Web services platform is necessarily platform-specific, we have found a general, reusable approach. On the backend side, aspect-oriented programming techniques are successfully applied to instrument backend services. We present experimental studies of performance instrumentation using the application response measurement (ARM) API and evaluate the efficiency of the monitoring enhancements. Our results point the way to systematically gain better insights into the behaviour of Web services and thus how to build more dependable Web-based applications.",2004,0, 2386,Application-level fault tolerance in the orbital thermal imaging spectrometer,"Systems that operate in extremely volatile environments, such as orbiting satellites, must be designed with a strong emphasis on fault tolerance. Rather than rely solely on the system hardware, it may be beneficial to entrust some of the fault handling to software at the application level, which can utilize semantic information and software communication channels to achieve fault tolerance with considerably less power and performance overhead. We show the implementation and evaluation of such a software-level approach, application-level fault tolerance and detection (ALFTD), into the orbital thermal imaging spectrometer (OTIS).",2004,0, 2387,Error detection enhancement in COTS superscalar processors with event monitoring features,"Increasing use of commercial off-the-shelf (COTS) superscalar processors in industrial, embedded, and real-time systems necessitates the development of error detection mechanisms for such systems. This paper shows an error detection scheme called committed instructions counting (CIC) to increase error detection in such systems. The scheme uses internal performance monitoring features and an external watchdog processor (WDP). The performance monitoring features enable counting the number of committed instructions in a program. The scheme is experimentally evaluated on a 32-bit Pentium® processor using software implemented fault injection (SWIFI). A total of 8181 errors were injected into the Pentium® processor. 
The results show that the error detection coverage varies between 90.92% and 98.41% for different workloads.",2004,0, 2388,Safety testing of safety critical software based on critical mission duration,"To assess the safety of software based safety critical systems, we first analyzed the differences between reliability and safety, and then introduced a safety model based on a three-state Markov model and some safety-related metrics. For safety critical software it is common to demand that all known faults are removed. Thus an operational test for safety critical software takes the form of a specified number of test cases (or a specified critical mission duration) that must be executed unsafe-failure-free. When the previous test has been terminated early as a result of an unsafe failure, it has been proposed that the further test needs to be more stringent (i.e. the number of tests that must be executed unsafe-failure-free should increase). In order to solve the problem, a safety testing method based on critical mission duration and Bayesian testing stopping rules is proposed.",2004,0, 2389,Analysis and evaluation of topological and application characteristics of unreliable mobile wireless ad-hoc network,"We present a study of topological characteristics of mobile wireless ad-hoc networks. The characteristics studied are connectivity, coverage, and diameter. Knowledge of topological characteristics of a network aids in the design and performance prediction of network protocols. We introduce intelligent goal-directed mobility algorithms for achieving desired topological characteristics. A simulation-based study shows that to achieve low, medium and high network QoS defined in terms of combined requirements of the three metrics, the network needs respectively 8, 16, and 40 nodes. If nodes can fail, the requirements increase to 8, 36 and 60 nodes respectively. We present a theoretical derivation of the improvement due to the mobility models and the sufficient condition for 100% connectivity and coverage. Next, we show the effect of improved topological characteristics in enhancing QoS of an application level protocol, namely, a location determination protocol called Hop-Terrain. The study shows that the error in location estimation is reduced by up to 68% with goal-directed mobility.",2004,0, 2390,The system recovery benchmark,"We describe a benchmark for measuring system recovery on a nonclustered standalone system. A system's ability to recover from an outage quickly is a critical factor in overall system availability. General purpose computer systems, such as UNIX based systems, tend to execute the same sequence or series of steps during system startup and outage recovery. Our experience has shown that these steps are consistent, reproducible and measurable, and can thus be benchmarked. Additionally, the factors that create variability in restart/recovery can be bound and represented in a meaningful way. A defined set of measurements, coupled with a specification for representing the results and system variables, provide the foundation for system recovery benchmarking.",2004,0, 2391,Impact of difference of feeder impedances on the performance of a static transfer switch,"Conventionally, the control system of a static transfer switch (STS) is designed based on the assumption that the STS terminal voltages are in-phase. This paper investigates the impact of phase difference of the STS terminal voltages on the STS transfer time and cross current. 
The phase difference is assumed to be a result of the difference of the feeder impedances. The paper shows that phase differences of even up to 4.5°, which are accompanied by up to 5% voltage drop at the STS terminal, can noticeably increase the transfer time and the cross current magnitude. The studies are conducted on the IEEE STS-1 benchmark system using the PSCAD/EMTDC software.",2004,0, 2392,SRAT-distribution voltage sags and reliability assessment tool,"Interruptions to supply and sags of distribution system voltage are the main aspects causing customer complaints. There is a need for analysis of supply reliability and voltage sag to relate system performance with network structure and equipment design parameters. This analysis can also give prediction of voltage dips, as well as relating traditional reliability and momentary outage measures to the properties of protection systems and to network impedances. Existing reliability analysis software often requires substantial training, lacks automated facilities, and suffers from data availability. Thus it requires time-consuming manual intervention for the study of large networks. A user-friendly sag and reliability assessment tool (SRAT) has been developed based on existing impedance data, protection characteristics, and a model of failure probability. The new features included in SRAT are a) efficient reliability and sag assessments for a radial network with limited loops, b) reliability evaluation associated with realistic protection and restoration schemes, c) inclusion of momentary outages in the same model as permanent outage evaluation, d) evaluation of the sag transfer through a meshed subtransmission network, and e) a simplified probability distribution model determined from available fault records. Examples of the application of the tools to an Australian distribution network are used to illustrate the application of this model.",2004,0, 2393,Legacy software evaluation model for outsourced maintainer,"Outsourcing has become common practice in the software industry. Organizations routinely subcontract the maintenance of their software assets to specialized companies. A great challenge for these companies is to rapidly evaluate the quality of the systems they will have to maintain so as to accurately estimate the amount of work they will require. To answer these concerns, we developed a framework of metrics to evaluate the complexity of a legacy software system and help an outsourcing maintainer define its contracts. This framework was defined using a well known approach in software quality, called ""goal-question-metric"". We present the goal-question-metric approach, its results, and the initial experimentation of the metrics on five real life systems in Cobol.",2004,0, 2394,Using history information to improve design flaws detection,"As systems evolve and their structure decays, maintainers need accurate and automatic identification of the design problems. Current approaches for automatic detection of design problems are not accurate enough because they analyze only a single version of a system and consequently they miss essential information as design problems appear and evolve over time. Our approach is to use the historical information of the suspected flawed structure to increase the accuracy of the automatic problem detection. Our means is to define measurements which summarize how persistent the problem was and how much maintenance effort was spent on the suspected structure. 
We apply our approach on a large scale case study and show how it improves the accuracy of the detection of god classes and data classes, and additionally how it adds valuable semantic information about the evolution of flawed design structures.",2004,0, 2395,The process of and the lessons learned from performance tuning of a product family software architecture for mobile phones,"Performance is an important nonfunctional quality attribute of a software system but is not always considered when software is designed. Furthermore, software evolves and changes can negatively affect the performance. New requirements could introduce performance problems and the need for a different architecture design. Even if the architecture has been designed to be easy to extend and flexible enough to be modified to perform its function, a software component designed to be too general and flexible can slow the execution of the application. Performance tuning is a way to assess the characteristics of existing software and highlight design flaws or inefficiencies. Periodical performance tuning inspections and architecture assessments can help to discover potential bottlenecks before it is too late, especially when changes and requirements are added to the architecture design. In this paper a performance tuning experience of one Nokia product family architecture will be described. Assessing a product family architecture also means taking into account the performance of the entire line of products, and optimizations must include or at least not penalize its members.",2004,0, 2396,Towards the definition of a maintainability model for Web applications,"The growing diffusion of Web-based services in many and different business domains has triggered the need for new Web applications (WAs). The pressing market demand imposes a very short time for the development of new WAs, and frequent modifications for existing ones. Well-defined software processes and methodologies are rarely adopted both in the development and maintenance phases. As a consequence, WAs' quality usually degrades in terms of architecture, documentation, and maintainability. Major concerns regard the difficulties in estimating costs of maintenance interventions. Thus, a strong need for methods and models to assess the maintainability of existing WAs is growing more and more. In this paper we introduce a first proposal for a WA maintainability model; the model considers those peculiarities that make a WA different from a traditional software system, and a set of metrics allowing an estimate of the maintainability is identified. Results from some initial case studies to verify the effectiveness of the proposed model are presented in the paper.",2004,0, 2397,Reducing overfitting in genetic programming models for software quality classification,"A high-assurance system is largely dependent on the quality of its underlying software. Software quality models can provide timely estimations of software quality, allowing the detection and correction of faults prior to operations. A software metrics-based quality prediction model may depict overfitting, which occurs when a prediction model has good accuracy on the training data but relatively poor accuracy on the test data. We present an approach to address the overfitting problem in the context of software quality classification models based on genetic programming (GP). The problem has not been addressed in depth for GP-based models. 
The presence of overfitting in a software quality classification model affects its practical usefulness, because management is interested in good performance of the model when applied to unseen software modules, i.e., generalization performance. In the process of building GP-based software quality classification models for a high-assurance telecommunications system, we observed that the GP models were prone to overfitting. We utilize a random sampling technique to reduce overfitting in our GP models. The approach has been found by many researchers to be an effective method for reducing the time of a GP run. However, in our study we utilize random sampling to reduce overfitting with the aim of improving the generalization capability of our GP models.",2004,0, 2398,An approach for designing and assessing detectors for dependable component-based systems,"In this paper, we present an approach that helps in the design and assessment of detectors. A detector is a program component that asserts the validity of a predicate in a given program state. We first develop a theory of error detection, and identify two main properties of detectors, namely completeness and accuracy. Given the complexity of designing efficient detectors, we introduce two metrics, namely completeness (C) and inaccuracy (I), that capture the operational effectiveness of detector operations, and each metric captures one efficiency aspect of the detector. Subsequently, we present an approach for experimentally evaluating these metrics, which is based on fault injection. The metrics developed in our approach also allow a system designer to perform a cost-benefit analysis for resource allocation when designing efficient detectors for fault-tolerant systems. Our approach is suited for the design of reliable component-based systems.",2004,0, 2399,Assessing reliability risk using fault correction profiles,"Building on the concept of the fault correction profile - a set of functions that predict fault correction events as a function of failure detection events - introduced in previous research, we define and apply reliability risk metrics that are derived from the fault correction profile. These metrics assess the threat to reliability of an unstable fault correction process. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Applying these metrics to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that reliability risk can be measured and used to identify the need for process improvement.",2004,0, 2400,Unsupervised learning for expert-based software quality estimation,"Current software quality estimation models often involve using supervised learning methods to train a software quality classifier or a software fault prediction model. In such models, the dependent variable is a software quality measurement indicating the quality of a software module by either a risk-based class membership (e.g., whether it is fault-prone or not fault-prone) or the number of faults. In reality, such a measurement may be inaccurate, or even unavailable. In such situations, this paper advocates the use of unsupervised learning (i.e., clustering) techniques to build a software quality estimation system, with the help of a software engineering human expert. 
The system first clusters hundreds of software modules into a small number of coherent groups and presents the representative of each group to a software quality expert, who labels each cluster as either fault-prone or not fault-prone based on his domain knowledge as well as some data statistics (without any knowledge of the dependent variable, i.e., the software quality measurement). Our preliminary empirical results show promising potentials of this methodology in both predicting software quality and detecting potential noise in a software measurement and quality dataset.",2004,0, 2401,Adding assurance to automatically generated code,"Code to estimate position and attitude of a spacecraft or aircraft belongs to the most safety-critical parts of flight software. The complex underlying mathematics and abundance of design details make it error-prone and reliable implementations costly. AutoFilter is a program synthesis tool for the automatic generation of state estimation code from compact specifications. It can automatically produce additional safety certificates which formally guarantee that each generated program individually satisfies a set of important safety policies. These safety policies (e.g., array-bounds, variable initialization) form a core of properties which are essential for high-assurance software. Here we describe the AutoFilter system and its certificate generator and compare our approach to the static analysis tool PolySpace.",2004,0, 2402,Open design of networked power quality monitoring systems,Permanent continuous power quality monitoring is beginning to be recognized as an important aid for managing power quality. Preventive maintenance can only be initiated if such monitoring is available to detect the minor disturbances that may precede major disruptions. This paper establishes the need to encourage interoperability between power quality instruments from different vendors. It discusses the frequent problem of incompatibility between equipment that results from the inherent inflexibilities in existing designs. A new approach has been proposed to enhance interoperability through the use of open systems in their design. It is demonstrated that it is possible to achieve such open design using existing software and networking technologies. The benefits and disadvantages to both the end-users and the equipment manufacturers are also being discussed.,2004,0, 2403,Analysis of end-to-end QoS for networked virtual reality services in UMTS,"Virtual reality services may be considered a good representative of advanced services in the new-generation network. The focus of this article is to address quality of service support for VR services in the context of the UMTS QoS framework specified by the 3G standardization forum, the Third Generation Partnership Project. We propose a classification of VR services based on delivery requirements (real-time or non-real-time) and degree of interactivity that maps to existing UMTS QoS classes and service attributes. The mapping is based on matching VR service requirements to performance parameters and target values defined for UMTS applications. Test cases involving heterogeneous VR applications are defined, using as a reference a general model for VR service design and delivery. 
Measurements of network parameters serve to determine the end-to-end QoS requirements of the considered applications, which are in turn mapped to proposed VR service classes.",2004,0, 2404,Reliability and robustness assessment of diagnostic systems from warranty data,"Diagnostic systems are software-intensive built-in-test systems, which detect, isolate and indicate the failures of prime systems. The use of diagnostic systems reduces the losses due to the failures of prime systems and facilitates the subsequent correct repairs. Therefore, they have found extensive applications in industry. Without loss of generality, this paper utilizes the on-board diagnostic systems of automobiles as an illustrative example. A failed diagnostic system generates α or β errors. An α error incurs unnecessary warranty costs to manufacturers, while a β error causes potential losses to customers. Therefore, the reliability and robustness of diagnostic systems are important to both manufacturers and customers. This paper presents a method for assessing the reliability and robustness of the diagnostic systems by using warranty data. We present the definitions of robustness and reliability of the diagnostic systems, and the formulae for estimating α, β and reliability. To utilize warranty data for assessment, we describe the two-dimensional (time-in-service and mileage) warranty censoring mechanism, model the reliability function of the prime systems, and devise warranty data mining strategies. The impact of α error on warranty cost is evaluated. Fault tree analyses for α and β errors are performed to identify the ways for reliability and robustness improvement. The method is applied to assess the reliability and robustness of an automobile on-board diagnostic system.",2004,0, 2405,A design tool for large scale fault-tolerant software systems,"In order to assist software designers in the application of fault-tolerance techniques to large scale software systems, a computer-aided software design tool has been proposed and implemented that assesses the criticality of the software modules contained in the system. This information assists designers in identifying weaknesses in large systems that can lead to system failures. Through analysis and modeling techniques based on graph theory, modules are assessed and rated as to the criticality of their position in the software system. Graphical representation at two levels facilitates the use of cut set analysis, which is our main focus. While the task of finding all cut sets in any graph is NP-complete, the tool intelligently applies cut set analysis by limiting the problem to provide only the information needed for meaningful analysis. In this paper, we examine the methodology and algorithms used in the implementation of this tool and consider future refinements. Although further testing is needed to assess performance on increasingly complex systems, preliminary results look promising. Given the growing demand for reliable software and the complexities involved in the design of these systems, further research in this area is indicated.",2004,0, 2406,A comparison of two safety-critical architectures using the safety related metrics,"In this paper, we introduce two safety-related metrics to evaluate a safety-critical computer-based system, and the derivations of these metrics are reviewed using Markov models. We describe two Markov architectural configurations of the system, and the comparison based on the proposed safety-related metrics is demonstrated. 
The comparison results confirm and conclude that one of the two architectures performs more safely than the other. After the analysis, we state that the set of safety-related metrics we have derived in this paper is a good set of measurements to evaluate the safety attribute of safety-critical systems.",2004,0, 2407,ISP-operated protection of home networks with FIDRAN,"In order to fight against the increasing number of network security incidents due to mal-protected home networks permanently connected to the Internet via DSL, TV cable or similar technologies, we propose that Internet service providers (ISP) operate and manage intrusion prevention systems (IPS) which are to a large extent executed on the consumer's gateway to the Internet (e.g., DSL router). The paper analyses the requirements of ISP-operated intrusion prevention systems and presents our approach for an IPS that runs on top of an active networking environment and is automatically configured by a vulnerability scanner. We call the system FIDRAN (Flexible Intrusion Detection and Response framework for Active Networks). The system autonomously analyses the home network and correspondingly configures the IPS. Furthermore, our system detects and adjusts itself to changes in the home network (new service, new host, etc.). First performance comparisons show that our approach - while offering more flexibility and being able to support continuous updating by active networking principles - competes well with the performance of conventional intrusion prevention systems like Snort-Inline.",2004,0, 2408,Processing of abdominal ultrasound images using seed based region growing method,"There are many diseases related to the abdomen. Patients suffering from abdominal diseases may experience chronic or acute abdominal pain or a suspected abdominal mass. The abdomen has two major parts: the liver and the gallbladder. Gallbladder and liver diseases are very common not only in Malaysia but also all over the globe. Hundreds of patients die from such diseases every year. Doctors face difficulty in diagnosing the types of diseases and sometimes unnecessary measures like surgery have to be performed. An abdominal ultrasound image is a useful way of examining internal organs, including the liver, gallbladder, spleen and kidneys. Ultrasound is safe, radiation free, faster and cheaper. Ultrasound images themselves will not give a clear view of an affected region. In general, raw ultrasound images contain a lot of embedded noise. So digital processing can improve the quality of raw ultrasound images. In this work a software tool called ultrasound processing tool (UPT) has been developed by employing the histogram equalization and region growing approach to give a clearer view of the affected regions in the abdomen. The system was tested on more than 20 cases. Here, the results of two cases are presented, one on gallbladder mass and another on liver cancer. The radiologists have reported that the original ultrasound images were not at all clear enough to detect the shape and area of the affected regions and that the ultrasound processing tool has provided them a clearer and better view of the internal details of the diseases.",2004,0, 2409,Tolerating late memory traps in dynamically scheduled processors,"In the past few years, exception support for memory functions such as virtual memory, informing memory operations, software assist for shared memory protocols, or interactions with processors in memory has been advocated in various research papers. 
These memory traps may occur on a miss in the cache hierarchy or on a local or remote memory access. However, contemporary, dynamically scheduled processors only support memory exceptions detected in the TLB associated with the first-level cache. They do not support memory exceptions taken deep in the memory hierarchy. In this case, memory traps may be late, in the sense that the exception condition may still be undecided when a long-latency memory instruction reaches the retirement stage. In this paper we evaluate through simulation the overhead of memory traps in dynamically scheduled processors, focusing on the added overhead incurred when a memory trap is late. We also propose some simple mechanisms to reduce this added overhead while preserving the memory consistency model. With more aggressive memory access mechanisms in the processor we observe that the overhead of all memory traps - either early or late - is increased while the lateness of a trap becomes largely tolerated so that the performance gap between early and late memory traps is greatly reduced. Additionally, because of caching effects in the memory hierarchy, the frequency of memory traps usually decreases as they are taken deeper in the memory hierarchy and their overall impact on execution times becomes negligible. We conclude that support for memory traps taken throughout the memory hierarchy could be added to dynamically scheduled processors at low hardware cost and little performance degradation.",2004,0, 2410,Teaching the process of code review,"Behavioural theory predicts that interventions that improve individual reviewers' expertise also improve the performance of the group in Software Development Technical Reviews (SDTR) [C. Sauer et al.,(2000)]. This includes improvements both in individual's expertise in the review process, as well as their ability to find defects and distinguish true defects from false positives. We present findings from University training in these skills using authentic problems. The first year the course was run it was designed around actual code review sessions, the second year this was expanded to enable students to develop and trial their own generic process for document reviews. This report considers the values and shortcomings of the teaching program from an extensive analysis of the defect detection in the first year, when students were involved in a review process that was set up for them, and student feedback from the second year when students developed and analysed their own process.",2004,0, 2411,The impact of training-by-examples on inspection performance using two laboratory experiments,"Software inspection is often seen as a technique to produce quality software. It has been claimed that expertise is a key determinant in inspection performance particularly in individual detection and group meetings [Sauer, C., et al.,(2000)]. Uncertainty among reviewers during group meetings due to lack of expertise is seen as a weakness in inspection performance. One aspect of achieving expertise is through education or formal training. Recent theoretical frameworks in software inspection also support the idea of possible effects of training on inspection performance [Sauer, C., et al.,(2000)]. To investigate this further, two laboratory experiments were conducted to test the effects of training using defect examples. Our findings show conflicting results between the two experiments, indirectly highlighting the importance of an effective inspection process. 
The results have implications for the use of a repository of defect examples for training reviewers.",2004,0, 2412,The framework of a web-enabled defect tracking system,"This paper presents an evaluation and investigation of issues to implement a defect management system; a tool used to understand and predict software product quality and software process efficiency. The scope is to simplify the process of defect tracking through a web-enabled application. The system will enable project management, development, quality assurance and software engineers to track and manage problems, specifically defects, in the context of a software project. A collaborative function is essential as this will enable users to communicate in real-time mode. This system makes key defect tracking coordination and information available regardless of geographical and time factors.",2004,0, 2413,Quality-evaluation models and measurements,"Quality can determine a software product's success or failure in today's competitive market. Among the many characteristics of quality, some aspects deal directly with the functional correctness or the conformance to specifications, while others deal with usability, portability, and so on. Correctness - that is, how well software conforms to requirements and specifications - is typically the most important aspect of quality, particularly when crucial operations depend on the software. Even for market segments in which new features and usability take priority, such as software for personal use in the mass market, correctness is still a fundamental part of the users' expectations. We compare and classify quality-evaluation models, particularly those evaluating the correctness aspect of quality, and examine their data requirements to provide practical guidance for selecting appropriate models and measurements.",2004,0, 2414,Low-latency mobile IP handoff for infrastructure-mode wireless LANs,"The increasing popularity of IEEE 802.11-based wireless local area networks (LANs) lends them credibility as a viable alternative to third-generation (3G) wireless technologies. Even though wireless LANs support much higher channel bandwidth than 3G networks, their network-layer handoff latency is still too high to be usable for interactive multimedia applications such as voice over IP or video streaming. Specifically, the peculiarities of commercially available IEEE 802.11b wireless LAN hardware prevent existing mobile Internet protocol (IP) implementations from achieving subsecond Mobile IP handoff latency when the wireless LANs are operating in the infrastructure mode, which is also the prevailing operating mode used in most deployed IEEE 802.11b LANs. In this paper, we propose a low-latency mobile IP handoff scheme that can reduce the handoff latency of infrastructure-mode wireless LANs to less than 100 ms, the fastest known handoff performance for such networks. The proposed scheme overcomes the inability of mobility software to sense the signal strengths of multiple access points when operating in an infrastructure-mode wireless LAN. It expedites link-layer handoff detection and speeds up network-layer handoff by replaying cached foreign agent advertisements. The proposed scheme strictly adheres to the mobile IP standard specification, and does not require any modifications to existing mobile IP implementations. That is, the proposed mechanism is completely transparent to the existing mobile IP software installed on mobile nodes and wired nodes.
As a demonstration of this technology, we show how this low-latency handoff scheme together with a wireless LAN bandwidth guarantee mechanism supports undisrupted playback of remote video streams on mobile stations that are traveling across wireless LAN segments.",2004,0, 2415,Evaluation of JPEG 2000 encoder options: human and model observer detection of variable signals in X-ray coronary angiograms,"Previous studies have evaluated the effect of the new still image compression standard JPEG 2000 using nontask based image quality metrics, i.e., peak-signal-to-noise-ratio (PSNR) for nonmedical images. In this paper, the effect of JPEG 2000 encoder options was investigated using the performance of human and model observers (nonprewhitening matched filter with an eye filter, square-window Hotelling, Laguerre-Gauss Hotelling and channelized Hotelling model observer) for clinically relevant visual tasks. Two tasks were investigated: the signal known exactly but variable task (SKEV) and the signal known statistically task (SKS). Test images consisted of real X-ray coronary angiograms with simulated filling defects (signals) inserted in one of the four simulated arteries. The signals varied in size and shape. Experimental results indicated that the dependence of task performance on the JPEG 2000 encoder options was similar for all model and human observers. Model observer performance in the more tractable and computationally economic SKEV task can be used to reliably estimate performance in the complex but clinically more realistic SKS task. JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.",2004,0, 2416,Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres,"Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of ""curse of dimensionality"" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produced high-quality and well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all the three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. 
Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques.",2004,0, 2417,Motion estimation for frame-rate reduction in H.264 transcoding,"This paper proposes a transcoding method for frame rate reduction in the H.264 video coding standard. H.264 adopts various block types and multiple reference frames for motion compensation. When frames are skipped to reduce frame rates in the transcoder, it is not easy to estimate optimum motion vectors and block types in H.264. A simple and effective block-adaptive motion vector resampling (BAMVR) method is proposed to estimate motion vectors for motion compensation. In order to improve coding efficiency and visual quality, the rate-distortion optimization (RDO) algorithm is also combined with the BAMVR method in the transcoder. In experimental results, rate-distortion performance and computational complexity of the proposed transcoder are analyzed for various video sequences. The proposed method achieves remarkable improvement in computational complexity compared to the full motion estimation (ME) with RDO method.",2004,0, 2418,A model of scalable distributed network performance management,"Quality of service in IP networks necessitates the use of performance management. As the Internet continues to grow exponentially, a management system should be scalable in terms of network size, speed and number of customers subscribed to value-added services. This article proposes a flexible, scalable, self-adapting model for managing large-scale distributed networks. In this model, the Web services framework is used to build the software architecture and XML is used to build the data exchange interface. The policy-based hierarchical event-processing mechanism presented by this paper can efficiently balance the loads and improve the flexibility of the system. The prediction algorithm adopted by this model can predict the network performance more effectively and accurately.",2004,0, 2419,Hardware - software structure for on-line power quality assessment: part I,"The main objective of the proposed work is to introduce a new concept of advanced power quality assessment. The introduced system is implemented using applications of a set of powerful software algorithms and a digital signal processor based hardware data acquisition system. The suggested scheme is mainly to construct a system for real time detection and identification of different types of power quality disturbances that produce a sudden change in the power quality levels. Moreover, a new mitigation technique through generating feedback correction signals for disturbance compensation is addressed. The performance of the suggested system is tested and verified through real test examples. The obtained results reveal that the introduced system detects most of the power quality disturbance events quickly and accurately and introduces new indicative factors estimating the performance of any supply system subjected to a set number of disturbance events.",2004,0, 2420,Assessing the robustness of self-managing computer systems under highly variable workloads,"Computer systems are becoming extremely complex due to the large number and heterogeneity of their hardware and software components, the multilayered architecture used in their design, and the unpredictable nature of their workloads. Thus, performance management becomes difficult and expensive when carried out by human beings.
An approach, called self-managing computer systems, is to build into the systems the mechanisms required to self-adjust configuration parameters so that the quality of service requirements of the system are constantly met. In this paper, we evaluate the robustness of such methods when the workload exhibits high variability in terms of the interarrival time and service times of requests. Another contribution of this paper is the assessment of the use of workload forecasting techniques in the design of QoS controllers.",2004,0, 2421,On identifying stable ways to configure systems,We consider the often error-prone process of initially building and/or reconfiguring a computer system. We formulate an optimization framework for capturing certain aspects of this system (re)configuration process. We describe offline and online algorithms that could aid operators in making decisions for how best to take actions on their computers so as to maintain the health of their systems.,2004,0, 2422,Fault-aware job scheduling for BlueGene/L systems,"Summary form only given. Large-scale systems like BlueGene/L are susceptible to a number of software and hardware failures that can affect system performance. We evaluate the effectiveness of a previously developed job scheduling algorithm for BlueGene/L in the presence of faults. We have developed two new job-scheduling algorithms considering failures while scheduling the jobs. We have also evaluated the impact of these algorithms on average bounded slowdown, average response time and system utilization, considering different levels of proactive failure prediction and prevention techniques reported in the literature. Our simulation studies show that the use of these new algorithms with even trivial fault prediction confidence or accuracy levels (as low as 10%) can significantly improve the performance of the BlueGene/L system.",2004,0, 2423,Automatic deployment for hierarchical network enabled servers,"Summary form only given. This article focuses on the deployment of grid infrastructures, more specifically problem solving environments (PSE) for numerical applications on the grid. Although the deployment of such an architecture may be constrained e.g., firewall, right access or security, its efficiency heavily depends on the quality of the mapping between its different components and the grid resources. This article proposes a new model based on linear programming to estimate the performance of a deployment of a hierarchical PSE. The advantages of our modeling approach are: evaluate a virtual deployment before a real deployment, provide a decision builder tool (i.e., designed to compare different architectures or add new resources) and take into account the platform scalability. Using our model, it is possible to determine the bottleneck of the platform and thus to know whether a given deployment can be improved or not. We illustrate the model by applying the results to improve performance of an existing hierarchical PSE called DIET.",2004,0, 2424,On static WCET analysis vs. run-time monitoring of execution time,"Summary form only given. Dynamic, distributed, real-time control systems control a widely varying environment, are made up of application programs that are dispersed among loosely-coupled computers, and must control the environment in a timely manner. The environment determines the number of threats; thus, it is difficult to determine the range of the workload at design time using static worst-case execution time analysis. 
While a system is lightly loaded, it is wasteful to reserve resources for the heaviest load. Likewise, it is also possible that the load will increase higher than the assumed worst case. A system that has a preset number of resources reserved to it is no longer guaranteed to meet its deadlines under such conditions. In order to ensure that such applications meet their real-time requirements, a mechanism is required to monitor and maintain the real-time quality of service (QoS): a QoS manager, which monitors the processing timing (latency) and resource usage of a distributed real-time system, forecasts, detects and diagnoses violations of the timing constraints, and requests more or fewer resources to maintain the desired timing characteristics. To enable better control over the system, the goals are as follows: 1) Gather detailed information about antiair warfare and air-traffic control application domains and employ it in the creation of a distributed real-time sensing and visualization testbed for air-traffic control. 2) Identify mathematical relationships among independent and dependent variables, such as performance and fault tolerance vs. resource usage, and security vs. performance. 3) Uncover new techniques for ensuring performance, fault tolerance, and security by optimizing the variables under the constraints of resource availability and user requirements.",2004,0, 2425,An investigation into the application of different performance prediction techniques to e-Commerce applications,Summary form only given. Predictive performance models of e-Commerce applications allow grid workload managers to provide e-Commerce clients with qualities of service (QoS) whilst making efficient use of resources. We demonstrate the use of two 'coarse-grained' modelling approaches (based on layered queuing modelling and historical performance data analysis) for predicting the performance of dynamic e-Commerce systems on heterogeneous servers. Results for a popular e-Commerce benchmark show how request response times and server throughputs can be predicted on servers with heterogeneous CPUs at different background loads. The two approaches are compared and their usefulness to grid workload management is considered.,2004,0, 2426,A heuristic for multi-constrained multicast routing,"In contrast to the constrained minimum Steiner tree (CMST) problem, which has attracted much attention in the quality of service (QoS) routing area, little work has been done on multicast routing subject to multiple additive constraints, even though the corresponding applications are obvious. We propose a heuristic, HMCMC, to solve this problem. The basic idea of HMCMC is to construct the multicast tree step by step, which is done essentially based on the latest research results on multi-constrained unicast routing. Computer simulations demonstrate that, if one exists, the proposed heuristic can find a feasible multicast tree with a fairly high probability.",2004,0, 2427,Thinking about thinking aloud: a comparison of two verbal protocols for usability testing,"We report on an exploratory experimental comparison of two different thinking aloud approaches in a usability test that focused on navigation problems in a highly nonstandard Web site. One approach is a rigid application of Ericsson and Simon's (for original paper see Protocol Analysis: Verbal Reports as Data, MIT Press (1993)) procedure. The other is derived from Boren and Ramey's (for original paper see ibid., vol. 43, no. 3, p.
261-278 (2000)) proposal based on speech communication. The latter approach differs from the former in that the experimenter has more room for acknowledging (mm-hmm) contributions from subjects and has the possibility of asking for clarifications and offering encouragement. Comparing the verbal reports obtained with these two methods, we find that the process of thinking aloud while carrying out tasks is not affected by the type of approach that was used. The task performance does differ. More tasks were completed in the B and R condition, and subjects were less lost. Nevertheless, subjects' evaluations of the Web site quality did not differ, nor did the number of different navigation problems that were detected.",2004,0, 2428,Semidefinite programming for ad hoc wireless sensor network localization,"We describe an SDP relaxation based method for the position estimation problem in wireless sensor networks. The optimization problem is set up so as to minimize the error in sensor positions to fit distance measures. Observable gauges are developed to check the quality of the point estimation of sensors or to detect erroneous sensors. The performance of this technique is highly satisfactory compared to other techniques. Very few anchor nodes are required to accurately estimate the position of all the unknown nodes in a network. Also the estimation errors are minimal even when the anchor nodes are not suitably placed within the network or the distance measurements are noisy.",2004,0, 2429,Measuring the complexity of component-based system architecture,"This paper proposes a graph-based metric to evaluate the complexity of the architecture by analyzing the dependencies between components of the system. Measuring the complexity is helpful when analyzing, testing, and maintaining the system. This measurement could direct the process of improvement and reengineering work. A complexity measure could also be used as a predictor of the effort that is needed to maintain the system. In component-based systems, functionalities are not performed within one component. Components communicate and share information in order to provide system functionalities. In our approach, to capture system architecture, we borrow Li's model, component dependency graph (CDG).",2004,0, 2430,Predicting C++ program quality by using Bayesian belief networks,"There have been many attempts to build models for predicting software quality. Such models are used to measure the quality of software systems. The key variables in these models are either size or complexity metrics. There are, however, serious statistical and theoretical difficulties with these approaches. By using a Bayesian belief network, we can overcome some of the more serious problems by taking into account more quality factors, which have a direct or indirect impact on software quality. In this paper, we have suggested a model for predicting computer program quality by using a Bayesian belief network. We found that the implementation of all quality factors was not feasible. Therefore, we have selected 14 quality factors to be implemented on two C++ programs of average size. The selection criteria were based on the reviewer's opinions. Each node on the given Bayesian belief network represents one quality factor. We have drawn the BBN for the two C++ programs considering 14 nodes. The BBN has been constructed.
The model has been executed and the results have been discussed.",2004,0, 2431,Classification of power quality events using optimal time-frequency representations-Part 1: theory,"Better software and hardware for automatic classification of power quality (PQ) disturbances are desired for both utilities and commercial customers. Existing automatic recognition methods need improvement in terms of their capability, reliability, and accuracy. This paper presents the theoretical foundation of a new method for classifying voltage and current waveform events that are related to a variety of PQ problems. The method is composed of two sequential processes: feature extraction and classification. The proposed feature extraction tool, time-frequency ambiguity plane with kernel techniques, is new to the power engineering field. The essence of the feature extraction is to project a PQ signal onto a low-dimension time-frequency representation (TFR), which is deliberately designed for maximizing the separability between classes. The technique of designing an optimized TFR from the time-frequency ambiguity plane is for the first time applied to the PQ classification problem. A distinct TFR is designed for each class. The classifiers include a Heaviside-function linear classifier and neural networks with feedforward structures. The flexibility of this method allows classification of a very broad range of power quality events. The performance validation and hardware implementation of the proposed method are presented in the second part of this two-paper series.",2004,0, 2432,Exact analysis of a class of GI/G/1-type performability models,"We present an exact decomposition algorithm for the analysis of Markov chains with a GI/G/1-type repetitive structure. Such processes exhibit both M/G/1-type & GI/M/1-type patterns, and cannot be solved using existing techniques. Markov chains with a GI/G/1 pattern result when modeling open systems which accept jobs from multiple exogenous sources, and are subject to failures & repairs; a single failure can empty the system of jobs, while a single batch arrival can add many jobs to the system. Our method provides exact computation of the stationary probabilities, which can then be used to obtain performance measures such as the average queue length or any of its higher moments, as well as the probability of the system being in various failure states, thus performability measures. We formulate the conditions under which our approach is applicable, and illustrate it via the performability analysis of a parallel computer system.",2004,0, 2433,Triage: performance isolation and differentiation for storage systems,"Ensuring performance isolation and differentiation among workloads that share a storage infrastructure is a basic requirement in consolidated data centers. Existing management tools rely on resource provisioning to meet performance goals; they require detailed knowledge of the system characteristics and the workloads. Provisioning is inherently slow to react to system and workload dynamics, and in the general case, it is impossible to provision for the worst case. We propose a software-only solution that ensures predictable performance for storage access. It is applicable to a wide range of storage systems and makes no assumptions about workload characteristics.
We use an on-line feedback loop with an adaptive controller that throttles storage access requests to ensure that the available system throughput is shared among workloads according to their performance goals and their relative importance. The controller considers the system as a ""black box"" and adapts automatically to system and workload changes. The controller is distributed to ensure high availability under overload conditions, and it can be used for both block and file access protocols. The evaluation of Triage, our experimental prototype, demonstrates workload isolation and differentiation, in an overloaded cluster file-system where workloads and system components are changing.",2004,0, 2434,The use of unified APC/FD in the control of a metal etch area,"An adaptive neural network-based advanced process control software, the Dynamic Neural Controller (DNC), was employed at National Semiconductor's 200 mm fabrication facility, South Portland, Maine, to enhance the performance of metal etch tools. The installation was performed on 5 identical LAM 9600 TCP Metal etchers running production material. The DNC produced a single predictive model on critical outputs and metrology for each tool based on process variables, maintenance, input metrology and output metrology. Although process metrology is usually measured on only one wafer per lot, the process can be closely monitored on a wafer-by-wafer basis with the DNC models. The DNC was able to provide recommendations for maintenance (replacing components in advance of predicted failure) and process variable adjustments (e.g. gas flow) to maximize tool up time and to reduce scrap. This enabled the equipment engineers to both debug problems more quickly on the tool and to make adjustments to tool parameters before out-of-spec wafers were produced. After a comparison of the performance of all 5 tools for a 2-month period prior to DNC installation vs. a 2-month post-DNC period, we concluded that the software was able to predict when maintenance actions were required, when process changes were required, and when maintenance actions were being taken but were not required. We observed a significant improvement in process Cpks for the metal etchers in this study.",2004,0, 2435,Model-driven reverse engineering,"Reverse engineering is the process of comprehending software and producing a model of it at a high abstraction level, suitable for documentation, maintenance, or reengineering. But from a manager's viewpoint, there are two painful problems: 1) It's difficult or impossible to predict how much time reverse engineering will require. 2) There are no standards to evaluate the quality of the reverse engineering that the maintenance staff performs. Model-driven reverse engineering can overcome these difficulties. A model is a high-level representation of some aspect of a software system. MDRE uses the features of modeling technology but applies them differently to address the maintenance manager's problems. Our approach to MDRE uses formal specification and automatic code generation to reverse the reverse-engineering process. Models written in a formal specification language called SLANG describe both the application domain and the program being reverse engineered, and interpretations annotate the connections between the two. The ability to generate a similar version of a program gives managers a fixed target for reverse engineering.
This, in turn, enables better effort prediction and quality evaluation, reducing development risk.",2004,0, 2436,"Requirements triage: what can we learn from a ""medical"" approach?","New-product development is commonly risky, judging by the number of high-profile failures that continue to occur-especially in software engineering. We can trace many of these failures back to requirements-related issues. Triage is a technique that the medical profession uses to prioritize treatment to patients on the basis of their symptoms' severity. Trauma triage provides some tantalizing insights into how we might measure risk of failure early, quickly, and accurately. For projects at significant risk, we could activate a ""requirements trauma system"" to include specialists, processes, and tools designed to correct the issues and improve the probability that the project ends successfully. We explain these techniques and suggest how we can adapt them to help identify and quantify requirements-related risks.",2004,0, 2437,Impact of process variation phenomena on performance and quality assessment,"Summary form only given. Logic product density and performance trends have continued to follow the course predicted by Moore's Law. To support the trends in the future and build logic products approaching one billion or more transistors before the end of the decade, several challenges must be met. These challenges include: 1) maintaining transistor/interconnect feature scaling, 2) the increasing power density dilemma, 3) increasing relative difficulty of 2-D feature resolution and general critical dimension control, 4) identifying cost effective solutions to increasing process and design database complexity, and 5), improving general performance and quality predictability in the face of the growing control, complexity and predictability issues. The trend in transistor scaling can be maintained while addressing the power density issue with new transistor structures, design approaches, and product architectures (e.g. high-k, metal gate, etc.). Items 3 to 5 are the focus of this work and are also strongly inter-related. The general 2-D patterning and resolution control problems will require several solution approaches both through design and technology e.g. reduce design degrees of freedom, use of simpler arrayed structures, improved uniformity, improved tools, etc. The data base complexity/cost problem will require solutions likely to involve use of improved data structure, improved use of hierarchy, and improved software and hardware solutions. Performance assessment, predictability and quality assessment will benefit from solutions to the control and complexity issues noted above. In addition, new design techniques/tools as well as improved process characterization models and methods can address the general performance/quality assessment challenge.",2004,0, 2438,Requirements driven software evolution,"Software evolution is an integral part of the software life cycle. Furthermore in the recent years the issue of keeping legacy systems operational in new platforms has become critical and one of the top priorities in IT departments worldwide. The research community and the industry have responded to these challenges by investigating and proposing techniques for analyzing, transforming, integrating, and porting software systems to new platforms, languages, and operating environments. 
However, measuring and ensuring the compliance of the migrant system with specific target requirements has not been formally and thoroughly addressed. We believe that issues such as the identification, measurement, and evaluation of specific re-engineering and transformation strategies and their impact on the quality of the migrant system pose major challenges in the software re-engineering community. Other related problems include the verification, validation, and testing of migrant systems, and the design of techniques for keeping various models (architecture, design, source code) synchronized during evolution. In this working session, we plan to assess the state of the art in these areas, discuss on-going work, and identify further research issues.",2004,0, 2439,An intelligent admission control scheme for next generation wireless systems using distributed genetic algorithms,"A different variety of services requiring different levels of quality of service (QoS) need to be addressed for mobile users of the next generation wireless system (NGWS). An efficient handoff technique with intelligent admission control can accomplish this aim. In this paper, a new, intelligent handoff scheme using distributed genetic algorithms (DGA) is proposed for NGWS. This scheme uses DGA to achieve high network utilization, minimum cost and handoff latency. A performance analysis is provided to assess the efficiency of the proposed DGA scheme. Simulation results show a significant improvement in handoff latencies and costs over traditional genetic algorithms and other admission control schemes.",2004,0, 2440,Does your result checker really check?,"A result checker is a program that checks the output of the computation of the observed program for correctness. Introduced originally by Blum, the result-checking paradigm has provided a powerful platform assuring the reliability of software. However, constructing result checkers for most problems requires not only significant domain knowledge but also ingenuity and can be error prone. In this paper we present our experience in validating result checkers using formal methods. We have conducted several case studies in validating result checkers from the commercial LEDA system for combinatorial and geometric computing. In one of our case studies, we detected a logical error in a result checker for a program computing max flow of a graph.",2004,0, 2441,Availability measurement and modeling for an application server,"The application server is a standard middleware platform for deploying Web-based business applications which typically require the underlying platform to deliver high system availability and to minimize loss of transactions. This paper presents a measurement-based availability modeling and analysis for a fault tolerant application server system - Sun Java System Application Server, Enterprise Edition 7. The study applies hierarchical Markov reward modeling techniques on the target software system. The model parameters are conservatively estimated from lab or field measurements. The uncertainty analysis method is used on the model to obtain average system availability and confidence intervals by randomly sampling from possible ranges of parameters that cannot be accurately measured in limited time frames or may vary widely in customer sites.
As demonstrated in this paper, the combined use of lab measurement, analytical modeling, and uncertainty analysis is a useful evaluation approach which can provide a conservative availability assessment at stated confidence levels for a new software product.",2004,0, 2442,QoS of timeout-based self-tuned failure detectors: the effects of the communication delay predictor and the safety margin,"Unreliable failure detectors have been an important abstraction to build dependable distributed applications over asynchronous distributed systems subject to faults. Their implementations are commonly based on timeouts to ensure algorithm termination. However, for systems built on the Internet, it is hard to estimate this time value due to traffic variations. Thus, different types of predictors have been used to model this behavior and make predictions of delays. In order to increase the quality of service (QoS), self-tuned failure detectors dynamically adapt their timeouts to the communication delay behavior added of a safety margin. In this paper, we evaluate the QoS of a failure detector for different combinations of communication delay predictors and safety margins. As the results show, to improve the QoS, one must consider the relation between the pair predictor/margin, instead of each one separately. Furthermore, performance and accuracy requirements should be considered for a suitable relationship.",2004,0, 2443,A scalable distributed QoS multicast routing protocol,"Many Internet multicast applications such as teleconferencing and remote diagnosis have quality-of-service (QoS) requirements. It is a challenging task to build QoS constrained multicast trees with high performance, high success ratio, low overhead, and low system requirements. This paper presents a new scalable QoS multicast routing protocol (SoMR) that has very small communication overhead and requires no state outside the multicast tree. SoMR achieves the favorable tradeoff between routing performance and overhead by carefully selecting the network sub-graph in which it conducts the search for a path that can support the QoS requirement, and by auto-tuning the selection according to the current network conditions. Its early-warning mechanism helps to detect and route around the real bottlenecks in the network, which increases the chance of finding feasible paths for additive QoS requirements. SoMR minimizes the system requirements; it relies only on the local state stored at each router. The routing operations are completely decentralized.",2004,0, 2444,Scalable network assessment for IP telephony,Multimedia applications such as IP telephony are among the applications that demand strict quality of service (QoS) guarantees from the underlying data network. At the predeployment stage it is critical to assess whether the network can handle the QoS requirements of IP telephony and fix problems that may prevent a successful deployment. In this paper we describe a technique for efficiently assessing network readiness for IP telephony. Our technique relies on understanding link level QoS behavior in a network from an IP telephony perspective. We use network topology and end-to-end measurements collected from the network in locating the sources of performance problems that may prevent a successful IP telephony deployment. We present an empirical study conducted on a real network spanning three geographically separated sites of an enterprise network. 
The empirical results indicate that our approach efficiently and accurately pinpoints links in the network incurring the most significant delay.",2004,0, 2445,Code generation for WSLAs using AXpect,"WSLAs can be viewed as describing the service aspect of Web services. By their nature, Web services are distributed. Therefore, integrating support code into a Web service application is potentially costly and error prone. Viewed from this AOP perspective, then, we present a method for integrating WSLAs into code generation using the AXpect weaver, the AOP technology for Infopipes. This helps to localize the code physically and therefore increase the eventual maintainability and enhance the reuse of the WSLA code. We then illustrate the weaver's capability by using a WSLA document to codify constraints and metrics for a streaming image application that requires CPU resource monitoring.",2004,0, 2446,ANN application in electronic diagnosis-preliminary results,"In this paper artificial neural networks (ANNs) are applied to diagnosis of catastrophic defects in a linear analog circuit. In fact, today technical diagnosis is a great challenge for design engineers because the diagnostic problem is generally underdetermined. It is also a deductive process with one set of data creating, in general, an unlimited number of hypotheses among which one should try to find the solution. So, the diagnosis methods are mostly based on proprietary knowledge and personal experience, although they were built into integrated diagnostic equipment. The ANN approach is proposed here as an alternative to existing solutions, based on the fact that ANNs are expected to encompass all phases of the diagnostic process: symptom detection, hypothesis generation, and hypothesis discrimination. The approach is demonstrated on the example of a simple resistive electrical circuit, and the generalization property is shown by supplying noisy data to ANN inputs during diagnosis.",2004,0, 2447,Probability models for high dynamic range imaging,"Methods for expanding the dynamic range of digital photographs by combining images taken at different exposures have recently received a lot of attention. Current techniques assume that the photometric transfer function of a given camera is the same (modulo an overall exposure change) for all the input images. Unfortunately, this is rarely the case with today's cameras, which may perform complex nonlinear color and intensity transforms on each picture. In this paper, we show how the use of probability models for the imaging system and weak prior models for the response functions enable us to estimate a different function for each image using only pixel intensity values. Our approach also allows us to characterize the uncertainty inherent in each pixel measurement. We can therefore produce statistically optimal estimates for the hidden variables in our model representing scene irradiance. We present results using this method to statistically characterize camera imaging functions and construct high-quality high dynamic range (HDR) images using only image pixel information.",2004,0, 2448,Computing depth under ambient illumination using multi-shuttered light,"Range imaging has become a critical component of many computer vision applications. The quality of the depth data is of critical importance, but so is the need for speed. Shuttered light-pulse (SLP) imaging uses active illumination hardware to provide high quality depth maps at video frame rates.
Unfortunately, current analytical models for deriving depth from SLP imagers are specific to the number of shutters and have a number of deficiencies. As a result, depth estimation often suffers from bias due to object reflectivity, incorrect shutter settings, or strong ambient illumination such as that encountered outdoors. These limitations make SLP imaging unsuitable for many applications requiring stable depth readings. This paper introduces a method that is general to any number of shutters. Using three shutters, the new method produces invariant estimates under changes in ambient illumination, producing high quality depth maps in a wider range of situations.",2004,0, 2449,Dynamic coupling measurement for object-oriented software,"The relationships between coupling and external quality factors of object-oriented software have been studied extensively for the past few years. For example, several studies have identified clear empirical relationships between class-level coupling and class fault-proneness. A common way to define and measure coupling is through structural properties and static code analysis. However, because of polymorphism, dynamic binding, and the common presence of unused (""dead"") code in commercial software, the resulting coupling measures are imprecise as they do not perfectly reflect the actual coupling taking place among classes at runtime. For example, when using static analysis to measure coupling, it is difficult and sometimes impossible to determine what actual methods can be invoked from a client class if those methods are overridden in the subclasses of the server classes. Coupling measurement has traditionally been performed using static code analysis, because most of the existing work was done on nonobject oriented code and because dynamic code analysis is more expensive and complex to perform. For modern software systems, however, this focus on static analysis can be problematic because although dynamic binding existed before the advent of object-orientation, its usage has increased significantly in the last decade. We describe how coupling can be defined and precisely measured based on dynamic analysis of systems. We refer to this type of coupling as dynamic coupling. An empirical evaluation of the proposed dynamic coupling measures is reported in which we study the relationship of these measures with the change proneness of classes. Data from maintenance releases of a large Java system are used for this purpose. Preliminary results suggest that some dynamic coupling measures are significant indicators of change proneness and that they complement existing coupling measures based on static analysis.",2004,0, 2450,Empirical studies on requirement management measures,The goal of this research is to demonstrate that a subset of a set of 38 requirements management measures are good predictors of stability and volatility of requirements and change requests. At the time of writing we have theoretically validated ten of these 38 measures. We are currently planning and performing an industrial case study where we want to reach the goal described above.,2004,0, 2451,Toward a software testing and reliability early warning metric suite,"The field reliability is measured too late for affordably guiding corrective action to improve the quality of the software. Software developers can benefit from an early warning of their reliability while they can still affordably react. This early warning can be built from a collection of internal metrics. 
An internal metric, such as the number of lines of code, is a measure derived from the product itself. An external measure is a measure of a product derived from assessment of the behavior of the system. For example, the number of defects found in test is an external measure. The ISO/IEC standard states that [i]nternal metrics are of little value unless there is evidence that they are related to external quality. Internal metrics can be collected in-process and more easily than external metrics. Additionally, internal metrics have been shown to be useful as early indicators of externally-visible product quality. For these early indicators to be meaningful, they must be related (in a statistically significant and stable way) to the field quality/reliability of the product. The validation of such metrics requires the convincing demonstration that (1) the metric measures what it purports to measure and (2) the metric is associated with an important external metric, such as field reliability, maintainability, or fault-proneness. Software metrics have been used as indicators of software quality and fault proneness. There is a growing body of empirical results that supports the theoretical validity of the use of higher-order early metrics, such as OO metrics defined by Chidamber-Kemerer (CK) and the MOOD OO metric suites as predictors of field quality. However, general validity of these metrics (which are often unrelated to the actual operational profile of the product) is still open to criticism.",2004,0, 2452,Using simulation to empirically investigate test coverage criteria based on statechart,"A number of testing strategies have been proposed using state machines and statecharts as test models in order to derive test sequences and validate classes or class clusters. Though such criteria have the advantage of being systematic, little is known on how cost effective they are and how they compare to each other. This article presents a precise simulation and analysis procedure to analyze the cost-effectiveness of statechart-based testing techniques. We then investigate, using this procedure, the cost and fault detection effectiveness of adequate test sets for the most referenced coverage criteria for statecharts on three different representative case studies. Through the analysis of common results and differences across studies, we attempt to draw more general conclusions regarding the costs and benefits of using the criteria under investigation.",2004,0, 2453,Bi-criteria models for all-uses test suite reduction,"Using bi-criteria decision making analysis, a new model for test suite minimization has been developed that pursues two objectives: minimizing a test suite with regard to a particular level of coverage while simultaneously maximizing error detection rates. This new representation makes it possible to achieve significant reductions in test suite size without experiencing a decrease in error detection rates. Using the all-uses inter-procedural data flow testing criterion, two binary integer linear programming models were evaluated, one a single-objective model, the other a weighted-sums bi-criteria model. The applicability of the bi-criteria model to regression test suite maintenance was also evaluated. The data show that minimization based solely on definition-use association coverage may have a negative impact on the error detection rate as compared to minimization performed with a bi-criteria model that also takes into account the ability of test cases to reveal error. 
Results obtained with the bi-criteria model also indicate that test suites minimized with respect to a collection of program faults are effective at revealing subsequent program faults.",2004,0, 2454,Making resource decisions for software projects,"Software metrics should support managerial decision making in software projects. We explain how traditional metrics approaches, such as regression-based models for cost estimation fall short of this goal. Instead, we describe a causal model (using a Bayesian network) which incorporates empirical data, but allows it to be interpreted and supplemented using expert judgement. We show how this causal model is used in a practical decision-support tool, allowing a project manager to trade-off the resources used against the outputs (delivered functionality, quality achieved) in a software project. The model and toolset have evolved in a number of collaborative projects and hence capture significant commercial input. Extensive validation trials are taking place among partners on the EC funded project MODIST (this includes Philips, Israel Aircraft Industries and QinetiQ) and the feedback so far has been very good. The estimates are sensible and the causal modelling approach enables decision-makers to reason in a way that is not possible with other project management and resource estimation tools. To ensure wide dissemination and validation a version of the toolset with the full underlying model is being made available for free to researchers.",2004,0, 2455,Quality of service provisioning for VoIP applications with policy-enabled differentiated services,"The recent advancement in network technologies has made it possible for the convergence of voice and data networks. In particular, the differentiated services (DiffServ) model has helped in bridging the diverse performance requirements between voice and non real-time applications such as file transfer and e-mail. One key feature of the DiffServ model is its promise to bring scalable service discrimination to the Internet and intranets. With the heterogeneous nature of today's distributed systems, however, there is a demand for flexible management systems that can cope with not only changes in resource needs and the perceived quality of applications (which is often user-dependent and application-specific), but also changes in the configuration of the distributed environment. Policy-based management has been proposed as a means of providing dynamic change in the behavior of applications and systems at run-time rather than through reengineering. The paper proposes a policy-enabled DiffServ architecture to provide predictable and measurable quality of service (QoS) for voice over Internet Protocol (VoIP) applications, and illustrates its performance.",2004,0, 2456,A transparent and centralized performance management service for CORBA based applications,"The quest for service quality in enterprise applications is driving companies to profile their online performance. Application management tools come in handy to deliver the required diagnosis. However, distributed applications are hard to manage due to their complexity and geographical dispersion. To cope with this problem, this paper presents a Java based management solution for CORBA distributed applications. The solution combines XML, SNMP and portable interceptors to provide a nonintrusive performance management service. Components can be attached to client and server sides to monitor messages and gather data into a centralized database. 
A detailed analysis can then be performed to expose behavioral problems in specific parts of the application. Performance reports and charts are supplied through a Web console. A prototypical implementation was tested against two available ORBs to assess functionality and interposed overhead.",2004,0, 2457,Performance evaluation and failure rate prediction for the soft implemented error detection technique,"This paper presents two error models to evaluate the safety of a software error detection method. The proposed models analyze the impact on program overhead in terms of memory code area and increased execution time when the studied error detection technique is applied. For faults affecting the processor's registers, analytic formulas are derived to estimate the failure rate before program execution. These formulas are based on probabilistic methods and use statistics of the program, which are collected during compilation. The studied error detection technique was applied to several benchmark programs and then program overhead and failure rate were estimated. Experimental results validate the estimated performances and show the effectiveness of the proposed evaluation formulas.",2004,0, 2458,Fragment class analysis for testing of polymorphism in Java software,"Testing of polymorphism in object-oriented software may require coverage of all possible bindings of receiver classes and target methods at call sites. Tools that measure this coverage need to use class analysis to compute the coverage requirements. However, traditional whole-program class analysis cannot be used when testing incomplete programs. To solve this problem, we present a general approach for adapting whole-program class analyses to operate on program fragments. Furthermore, since analysis precision is critical for coverage tools, we provide precision measurements for several analyses by determining which of the computed coverage requirements are actually feasible for a set of subject components. Our work enables the use of whole-program class analyses for testing of polymorphism in partial programs, and identifies analyses that potentially are good candidates for use in coverage tools.",2004,0, 2459,Probabilistic regression suites for functional verification,"Random test generators are often used to create regression suites on-the-fly. Regression suites are commonly generated by choosing several specifications and generating a number of tests from each one, without reasoning about which specification should be used and how many tests should be generated from each specification. This paper describes a technique for building high quality random regression suites. The proposed technique uses information about the probability of each test specification covering each coverage task. This probability is used, in turn, to determine which test specifications should be included in the regression suite and how many tests should be generated from each specification. Experimental results show that this practical technique can be used to improve the quality, and reduce the cost, of regression suites. Moreover, it enables better informed decisions regarding the size and distribution of the regression suites, and the risk involved.",2004,0, 2460,A framework for polysensometric multidimensional spatial visualization,"Typically any single sensor instrument suffers from physical/observation constraints.
This paper discusses a generalized framework, called the polymorphic visual information fusion framework (PVIF), that can enable information from multiple sensors to be fused and compared to gain a broader understanding of a target of observation in multidimensional space. An automated software system supporting comparative cognition has been developed to form 3D models based on the datasets from different sensors, such as XPS and LSCM. This fusion framework not only provides an information engineering based tool to overcome the limitations of an individual sensor's scope of observation but also provides a means where theoretical understanding surrounding a complex target can be mutually validated by comparative cognition about the object of interest and 3D model refinement. Some polysensometric data classification metrics are provided to measure the quality of input datasets for fusion visualization.",2004,0, 2461,"Design, implementation and performance of fault-tolerant message passing interface (MPI)","Fault tolerant MPI (FTMPI) adds fault tolerance to MPICH, an open source GPL-licensed implementation of the MPI standard by Argonne National Laboratory's Mathematics and Computer Science Division. FTMPI is a transparent fault-tolerant environment, based on a synchronous checkpointing and restarting mechanism. FTMPI relies on a non-multithreaded single-process checkpointing library to synchronously checkpoint an application process. A replicated global system controller and cluster-node-specific node controllers monitor and control checkpointing and recovery activities of all MPI applications within the cluster. This work details the architecture to provide a fault tolerance mechanism for MPI-based applications running on clusters and the performance of NAS parallel benchmarks and parallelized medium range weather forecasting models, P-T80 and P-TI26. The architecture also addresses the following issues: replicating the system controller to avoid a single point of failure, ensuring consistency of checkpoint files based on a distributed two-phase commit protocol, and providing a robust fault detection hierarchy.",2004,0, 2462,An incremental learning algorithm with confidence estimation for automated identification of NDE signals,"An incremental learning algorithm is introduced for learning new information from additional data that may later become available, after a classifier has already been trained using a previously available database. The proposed algorithm is capable of incrementally learning new information without forgetting previously acquired knowledge and without requiring access to the original database, even when new data include examples of previously unseen classes. Scenarios requiring such a learning algorithm are encountered often in nondestructive evaluation (NDE) in which large volumes of data are collected in batches over a period of time, and new defect types may become available in subsequent databases. The algorithm, named Learn++, takes advantage of synergistic generalization performance of an ensemble of classifiers in which each classifier is trained with a strategically chosen subset of the training databases that subsequently become available. The ensemble of classifiers then is combined through a weighted majority voting procedure. Learn++ is independent of the specific classifier(s) comprising the ensemble, and hence may be used with any supervised learning algorithm. The voting procedure also allows Learn++ to estimate the confidence in its own decision.
We present the algorithm and its promising results on two separate ultrasonic weld inspection applications.",2004,0, 2463,A collaborative operation method between new energy-type dispersed power supply and EDLC,"In this paper, the modified Euler-type moving average prediction (EMAP) model is proposed to operate a new energy type dispersed power supply system in autonomous mode. This dispersed power supply system consists of a large-scale photovoltaic system (PV) and a fuel cell, as well as an electric double-layer capacitor (EDLC). This system can meet the multi-quality electric power requirements of customers, and ensures voltage stability and uninterruptible power supply function as well. Each subsystem of this distributed power supply contributes to the above-mentioned system performance with its own excellent characteristics. Based on the collaborative operation methods by EMAP model, the required capacity of EDLC to compensate the fluctuation of both PV output and load demand is examined by the simulation using software MATLAB/Simulink, and, response characteristics of this system is confirmed with simulation by software PSIM.",2004,0, 2464,Improved quantization structures using generalized HMM modelling with application to wideband speech coding,"In this paper, a low-complexity, high-quality recursive vector quantizer based on a generalized hidden Markov model of the source is presented. Capitalizing on recent developments in vector quantization based on Gaussian mixture models, we extend previous work on HMM-based quantizers to the case of continuous vector-valued sources, and also formulate a generalization of the standard HMM. This leads us to a family of parametric source models with very flexible modelling capabilities, with which are associated low-complexity recursive quantization structures. The performance of these schemes is demonstrated for the problem of wideband speech spectrum quantization, and shown to compare favorably to existing state-of-the-art schemes.",2004,0, 2465,A window adaptive hybrid vector filter for color image restoration,"A novel window adaptive hybrid filter for the removal of different types of noise which contaminate color images is proposed. At each pixel location, the image vector is first classified into several different signal activity areas through a modified quadtree decomposition on the luminance of the input image, with only one empirical parameter required to be controlled. Then, an optimal window and weight adaptive vector filtering operation is activated for the best tradeoff between noise suppression and detail preservation. The proposed filter has demonstrated superior performance in suppressing several distinctive types of color image noise, which include Gaussian, impulse, and mixed noise. Significant improvements have been achieved in terms of standard objective measurements as well as the perceived image quality.",2004,0, 2466,Evaluating cognitive complexity measure with Weyuker properties,"Cognitive complexity measure is based on cognitive informatics, which in turn helps in comprehending the software characteristics. Weyuker properties must be satisfied by every complexity measure to qualify as a good and comprehensive one. An attempt has been made to evaluate cognitive complexity measure in terms of nine Weyuker properties, through examples. 
It has been found that eight of the nine Weyuker properties are satisfied by the cognitive weight software complexity measure, which establishes the cognitive complexity measure as a well-structured one.",2004,0, 2467,An effective fault-tolerant routing methodology for direct networks,"Current massively parallel computing systems are being built with thousands of nodes, which significantly affect the probability of failure. M. E. Gomez proposed a methodology to design fault-tolerant routing algorithms for direct interconnection networks. The methodology uses a simple mechanism: for some source-destination pairs, packets are first forwarded to an intermediate node, and later, from this node to the destination node. Minimal adaptive routing is used along both subpaths. For those cases where the methodology cannot find a suitable intermediate node, it combines the use of intermediate nodes with two additional mechanisms: disabling adaptive routing and using misrouting on a per-packet basis. While the combination of these three mechanisms tolerates a large number of faults, each one requires adding some hardware support in the network and also introduces some overhead. In this paper, we perform an in-depth detailed analysis of the impact of these mechanisms on network behaviour. We analyze the impact of the three mechanisms separately and combined. The ultimate goal of this paper is to obtain a suitable combination of mechanisms that is able to meet the trade-off between fault-tolerance degree, routing complexity, and performance.",2004,0, 2468,Direct digital synthesis: a tool for periodic wave generation (part 2),"Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented, and then the focus is placed on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next, effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of references. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. It is hoped that this article will encourage engineers to use DDS, either as integrated-circuit DDS or software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",2004,0, 2469,Efficient multi-frame motion estimation algorithms for MPEG-4 AVC/JVT/H.264,"In the H.264 standard, a lot of computational complexity is consumed in the encoder for motion estimation. It allows seven block sizes for motion estimation/compensation, and refers to the previous five frames when searching for the best motion vector to achieve a lower bitrate and higher quality. Since the H.264 reference software uses the full search scheme to obtain the best performance, it spends a lot of searching time. In this paper we propose efficient searching algorithms by reusing the motion vector information from the last reference frame. The proposed algorithms use the stored motion vectors to compose the current motion vector without performing the full search in each reference frame.
Therefore, our proposed algorithms can obtain a speed-up ratio of 4 on average for encoding, benefiting from predicting the motion vectors for the reference frames in advance while maintaining good performance. Any fast search algorithm can be utilized to further reduce the computational load of motion estimation from the previous frame.",2004,0, 2470,Measuring software product quality: a survey of ISO/IEC 9126,"To address the issues of software product quality, the Joint Technical Committee 1 of the International Organization for Standardization and International Electrotechnical Commission published a set of software product quality standards known as ISO/IEC 9126. These standards specify software product quality's characteristics and subcharacteristics and their metrics. Based on a user survey, this study of the standard helps clarify quality attributes and provides guidance for the resulting standards.",2004,0, 2471,End-to-end defect modeling,"In this context, computer models can help us predict outcomes and anticipate with confidence. We can now use cause-effect modeling to drive software quality, moving our organization toward higher maturity levels. Despite missing good software quality models, many software projects successfully deliver software on time and with acceptable quality. Although researchers have devoted much attention to analyzing software projects' failures, we also need to understand why some are successful - within budget, of high quality, and on time - despite numerous challenges. Restricting software quality to defects, decisions made in successful projects must be based on some understanding of cause-effect relationships that drive defects at each stage of the process. To manage software quality by data, we need a model describing which factors drive defect introduction and removal in the life cycle, and how they do it. Once properly built and validated, a defect model enables successful anticipation. This is why it's important that the model include all variables influencing the process response to some degree.",2004,0, 2472,System optimization with component reliability estimation uncertainty: a multi-criteria approach,"Summary & Conclusions-This paper addresses system reliability optimization when component reliability estimates are treated as random variables with estimation uncertainty. System reliability optimization algorithms generally assume that component reliability values are known exactly, i.e., they are deterministic. In practice, that is rarely the case. For risk-averse system design, the estimation uncertainty, propagated from the component estimates, may result in unacceptable estimation uncertainty at the system level. The system design problem is thus formulated with multiple objectives: (1) to maximize the system reliability estimate, and (2) to minimize its associated variance. This formulation of the reliability optimization is new, and the resulting solutions offer a unique perspective on system design. Once formulated in this manner, standard multiple objective concepts, including Pareto optimality, were used to determine solutions. Pareto optimality is an attractive alternative for this type of problem. It provides decision-makers the flexibility to choose the best-compromise solution. Pareto optimal solutions were found by solving a series of weighted objective problems with incrementally varied weights. Several sample systems are solved to demonstrate the approach presented in this paper.
The first example is a hypothetical series-parallel system, and the second example is the fault tolerant distributed system architecture for a voice recognition system. The results indicate that significantly different designs are obtained when the formulation incorporates estimation uncertainty. If decision-makers are risk averse, and wish to consider estimation uncertainty, previously available methodologies are likely to be inadequate.",2004,0, 2473,Adaptive video streaming in vertical handoff: a case study,"Video streaming has become a popular form of transferring video over the Internet. With the emergence of mobile computing needs, a successful video streaming solution demands 1) uninterrupted services even with the presence of mobility and 2) adaptive video delivery according to current link properties. We study the need and evaluate the performance of adaptive video streaming in vertical handoff scenarios. We use universal seamless handoff architecture (USHA) to create a seamless handoff environment, and use the video transfer protocol (VTP) to adapt video streaming rates according to ""eligible rate estimates"". Using testbed measurements experiments, we verify the importance of service adaptation, as well as show the improvement of user-perceived video quality, via adapting video streaming in the vertical handoffs.",2004,0, 2474,A system's view of numerical wave propagation modeling for airborne radio communication systems,"An end-to-end simulation for an airborne communication system in the UHF range is performed; it describes the radio signal quality between an aircraft and a ground station. Effects like the installed performance behavior of antennas as well as multipath wave propagation are considered. All parts are implemented by using only commercially available software. The interfaces between each software tool are illuminated as well. All partial results are transformed to a system level simulation with MatLab. The calculated EM field strength is used to determine the bit error rate along a special flight path, which can be used to improve the data predictability for the overall system design. For the system level approach, much more understanding is given to systems engineering, and each sub aspect is given even more importance.",2004,0, 2475,Distribution patterns of effort estimations,"Effort estimations within software development projects and the ability to work within these estimations are perhaps the single most important, and at the same time inadequately mastered, discipline for overall project success. This study examines some characteristics of accuracies in software development efforts and identifies patterns that can be used to increase the understanding of the effort estimation discipline as well as to improve the accuracy of effort estimations. The study complements current research by taking a more simplistic approach than usually found within mainstream research concerning effort estimations. It shows that there are useful patterns to be found as well as interesting causalities, usable to increase the understanding and effort estimation capability.",2004,0, 2476,Infrastructure support for controlled experimentation with software testing and regression testing techniques,"Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. 
Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has and can be expected to have.",2004,0, 2477,Using empirical testbeds to accelerate technology maturity and transition: the SCRover experience,"This paper is an experience report on a first attempt to develop and apply a new form of software: a full-service empirical testbed designed to evaluate alternative software dependability technologies, and to accelerate their maturation and transition into project use. The SCRover testbed includes not only the specifications, code, and hardware of a public safety robot, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments. The SCRover testbed's initial operational capability has been recently applied to empirically evaluate two architecture definition languages (ADLs) and toolsets, Mae and AcmeStudio. The testbed evaluation showed (1) that the ADL-based toolsets were complementary and cost-effective to apply to mission-critical systems; (2) that the testbed was cost-effective to use by researchers; and (3) that collaboration in testbed use by researchers and the Jet Propulsion Laboratory (JPL) project users resulted in actions to accelerate technology maturity and transition into project use. The evaluation also identified a number of lessons learned for improving the SCRover testbed, and for development and application of future technology evaluation testbeds.",2004,0, 2478,Tool-supported unobtrusive evaluation of software engineering process conformance,"Software engineers face the dilemma of either verifying process conformance and having the Hawthorne effect distort the results, or not doing it and, as a consequence, not being able to know if the proposed process is actually carried out. We addressed this issue by classifying and proposing the use of some approaches based on unobtrusive observation. These approaches include cognitive labs, remote monitoring and metrics collection. In addition, a tool supporting the application of perspective based reading (PBR) for requirements inspection has been used and is also presented to exemplify metrics collection. Among other features, PBR Tool unobtrusively collects metrics in order to monitor if the reading technique is faithfully applied. Two studies evaluating the feasibility of such a tool are also reported.",2004,0, 2479,Increasing the accuracy and reliability of analogy-based cost estimation with extensive project feature dimension weighting,"Accurate and reliable software cost estimation is a vital task in software project portfolio decisions like resource scheduling or bidding.
A prominent and transparent method of supporting estimators is analogy-based cost estimation, which is based on finding similar projects in historical portfolio data. However, the various project feature dimensions used to determine project analogy represent project aspects differing widely in their relevance; they are known to have varying impact on the analogies - and in turn on the overall estimation accuracy and reliability - which is not addressed by traditional approaches. This paper (a) proposes an improved analogy-based approach based on extensive dimension weighting, and (b) empirically evaluates the accuracy and reliability improvements in the context of five real-world portfolio data sets. Main results are accuracy and reliability improvements for all analyzed portfolios and quality measures. Furthermore, the approach indicates a quality barrier for analogy-based estimation approaches using the same basic assumptions and quality measures.",2004,0, 2480,"Finding ""early"" indicators of UML class diagrams understandability and modifiability","Given the relevant role that models obtained in the early stages play in the development of OO systems, in recent years special attention has been paid to the quality of such models. Adhering to this fact, the main objective of this work is to obtain ""early"" indicators of UML class diagrams understandability and modifiability. These indicators will allow OO designers to improve the quality of the diagrams they model and hence contribute to improving the quality of the OO systems, which are finally delivered. The empirical data were obtained through a controlled experiment and its replication, which we carried out to obtain prediction models of the Understandability and Modifiability Time of UML class diagrams based on a set of metrics previously defined for UML class diagrams structural complexity and size. The obtained results reveal that the metrics that count the number of methods (NM), the number of attributes (NA), the number of generalizations (NGen), the number of dependencies (NDEP), the maximum depth of the generalization hierarchies (MaxDIT) and the maximum height of the aggregation hierarchies (MaxHAgg) could influence the effort needed to maintain UML class diagrams.",2004,0, 2481,Investigating the active guidance factor in reading techniques for defect detection,"Inspections are an established quality assurance technique. In order to optimize the inspection performance, different reading techniques, such as checklist-based reading and scenario-based reading, have been proposed. Various experiments have been conducted to evaluate which of these techniques produces better inspection results (i.e., which finds more defects with less effort). However, results of these empirical investigations are not conclusive yet. Thus, the success factors of the different reading approaches need to be further analyzed. In this paper, we report on a preliminary empirical study that examined the influence of the active guidance factor (provided by scenario-based approaches) when inspecting requirements specification documents.
First results show that active guidance is accepted with favor by inspectors and suggest that it is better suited for larger and more complex documents.",2004,0, 2482,Building a genetically engineerable evolvable program (GEEP) using breadth-based explicit knowledge for predicting software defects,"There has been extensive research in the area of data mining over the last decade, but relatively little research in algorithmic mining. Some researchers shun the idea of incorporating explicit knowledge with a Genetic Program environment. At best, very domain specific knowledge is hard wired into the GP modeling process. This work proposes a new approach called the Genetically Engineerable Evolvable Program (GEEP). In this approach, explicit knowledge is made available to the GP. It is considered breadth-based, in that all pieces of knowledge are independent of each other. Several experiments are performed on a NASA-based data set using established equations from other researchers in order to predict software defects. All results are statistically validated.",2004,0, 2483,Supporting quality of service in a non-dedicated opportunistic environment,"In This work we investigate the utilization of non-dedicated, opportunistic resources in a desktop environment to provide statistical assurances to a class of QoS sensitive, soft real-time applications. Supporting QoS in such an environment presents unique challenges: (1) soft real-time tasks must have continuous access to resources in order to deliver meaningful services. Therefore the tasks will fail if not enough idle resources are available in the system. (2) Although soft real-time tasks can be migrated from one machine to another, their QoS may be affected if there are frequent migrations. In this paper, we define two new QoS metrics (task failure rate and probability of bad migrations) to characterize these QoS failures/degradations. We also design admission control and resource recruitment algorithms to provide statistical guarantees on these metrics. Our model based simulation results show that the admission control algorithms are effective at providing the desired level of assurances, and are robust to different resource usage patterns. Our resource recruitment algorithm may need long time of observations to provide the desired guarantee. But even with moderate observations, we can reduce the probability of a bad migration from 12% to less than 4%, which is good enough for most real applications.",2004,0, 2484,Efficient design diversity estimation for combinational circuits,"Redundant systems are designed using multiple copies of the same resource (e.g., a logic network or a software module) in order to increase system dependability: Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on ""independent"" generation of ""different"" implementations of the same logic function. In a recent paper, we presented a metric to quantify diversity among several designs. The problem of calculating the diversity metric is NP-complete (i.e., can be of exponential complexity). In this paper, we present efficient techniques to estimate the value of the design diversity metric. For datapath designs, we have formulated very fast techniques to calculate the value of the metric by taking advantage of the regularity in the datapath structures. 
For general combinational logic circuits, we present an adaptive Monte-Carlo simulation technique for estimating accurate bounds on the value of the metric.",2004,0, 2485,The framework and algorithm of a new phasor measurement unit,"Phasor measurement unit (PMU) has the ability to identify and track the phasor values of voltage and current synchronously on a power system in real time, which is crucial to the detection of disturbances and characterization of transient swings. This paper provides an introduction to the framework of a new-type PMU, including the hardware architecture and the software structure. Then, an improved phasor computation algorithm is proposed to correct the errors occurred by the recursive DFT in the presence of frequency offset. Digital simulations and lab tests were accomplished with the new phasor-tracking algorithm. Results validated the performance of the improved phasor-tracking algorithm and its advantages over the traditional DFT algorithm.",2004,0, 2486,Nanolab: a tool for evaluating reliability of defect-tolerant nano architectures,"As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties are germane in the miniscule dimension of the device, quantum physical effects, reduced noise margins, system energy levels reaching computing thermal limits, manufacturing defects, aging and many other factors. Defect tolerant architectures and their reliability measures gain importance for logic and micro-architecture designs based on nano-scale substrates. Recently, a Markov random field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme and a belief propagation algorithm. We have developed MATLAB based libraries and toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated in this paper by automatically deriving various reliability results for defect-tolerant architectures, such as triple modular redundancy (TMR), cascaded triple modular redundancy (CTMR) and multi-stage iterations of these. These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.",2004,0, 2487,Experimental study on QoS provisioning to heterogeneous VoIP sources in Diffserv environment,"The work presents a research activity focused on the experimental study of three different issues related to the provision of VoIP services with QoS guarantee in DiffServ networks. Firstly the study deals with the analysis of two dissimilar strategies for the setting of the parameters of a traffic control module in a DiffServ network node. Using these results, secondly the effectiveness of static and dynamic SLAs strategies for QoS provisioning in DiffServ environment is experimentally evaluated. This analysis is carried out considering aggregation of voice sources adopting two distinct codecs, i.e. G723.1 and G729. These codecs produce traffic with different statistical features (variable and constant bit rate respectively). 
Hence, this approach allows assessing the impact on the single sources' performance of the multiplexing of heterogeneous VoIP sources in a single class.",2004,0, 2488,Reliability-aware co-synthesis for embedded systems,"As technology scales, transient faults due to single event upsets have emerged as a key challenge for reliable embedded system design. This work proposes a design methodology that incorporates reliability into hardware-software co-design paradigm for embedded systems. We introduce an allocation and scheduling algorithm that efficiently handles conditional execution in multi-rate embedded systems, and selectively duplicates critical tasks to detect soft errors, such that the reliability of the system is increased. The increased reliability is achieved by utilizing the otherwise idle computation resources and incurs no resource or performance penalty. The proposed algorithm is fast and efficient, and is suitable for use in the inner loop of our hardware/software co-synthesis framework, where the scheduling routine has to be invoked many times.",2004,0, 2489,Estimation of software defects fix effort using neural networks,"Software defects fix effort is an important software development process metric that plays a critical role in software quality assurance. People usually like to apply parametric effort estimation techniques using historical lines of code and function points data to estimate effort of defects fixes. However, these techniques are neither efficient nor effective for a new different kind of project's fixing defects when code will be written within the context of a different project or organization. In this paper, we present a solution for estimating software defect fix effort using self-organizing neural networks.",2004,0, 2490,A change impact dependency measure for predicting the maintainability of source code,"We first articulate the theoretic difficulties with the existing metrics designed for predicting software maintainability. To overcome the difficulties, we propose to measure a purely internal and objective attribute of code, namely change impact dependency, and show how it can be modeled to predict real change impact. The proposed base measure can be further elaborated for evaluating software maintainability.",2004,0, 2491,A view-based control flow metric,"We present a new metric for control flow path sets based on a control flow abstraction technique and evaluate the concept in a study of a large set of artificially generated programs, using a special model of fault detection ability.",2004,0, 2492,WS-FIT: a tool for dependability analysis of Web services,This work provides an overview of fault injection techniques and their applicability to testing SOAP RPC based Web service systems. We also give a detailed example of the WS-FIT package and use it to detect a problem in a Web service based system.,2004,0, 2493,A computational framework for supporting software inspections,"Software inspections improve software quality by the analysis of software artifacts, detecting their defects for removal before these artifacts are delivered to the following software life cycle activities. Some knowledge regarding software inspections have been acquired by empirical studies. However, we found no indication that computational support for the whole software inspection process using appropriately such knowledge is available. 
This paper describes a computational framework whose requirements set was derived from knowledge acquired by empirical studies to support software inspections. To evaluate the feasibility of such a framework, two studies have been accomplished: one case study, which has shown the feasibility of using the framework to support inspections, and an experimental study that evaluated the supported software inspection planning activity. Preliminary results of this experimental study suggested that inexperienced subjects are able to plan inspections with higher defect detection effectiveness, and in less time, when using this computational framework.",2004,0, 2494,Test-suite reduction for model based tests: effects on test quality and implications for testing,"Model checking techniques can be successfully employed as a test case generation technique to generate tests from formal models. The number of test cases produced, however, is typically large for complex coverage criteria such as MCDC. Test-suite reduction can provide us with a smaller set of test cases that preserve the original coverage - often a dramatically smaller set. One potential drawback with test-suite reduction is that this might affect the quality of the test-suite in terms of fault finding. Previous empirical studies provide conflicting evidence on this issue. To further investigate the problem and determine its effect when testing formal models of software, we performed an experiment using a large case example of a flight guidance system, generated reduced test-suites for a variety of structural coverage criteria while preserving coverage, and recorded their fault finding effectiveness. Our results show that the size of the specification based test-suites can be dramatically reduced and that the fault detection of the reduced test-suites is adversely affected. In this report we describe our experiment, analyze the results, and discuss the implications for testing based on formal specifications.",2004,0, 2495,ISPIS: a framework supporting software inspection processes,"This paper describes ISPIS, a computational framework for supporting the software inspection process whose requirements set was derived from knowledge acquired by empirical studies. ISPIS allows the inspection of all artifact types by geographically distributed teams. Specific defect detection support is provided by the integration of external tools. A case study has shown the feasibility of using ISPIS to support real inspections.",2004,0, 2496,The effects of fault counting methods on fault model quality,"Over the past few years, we have been developing software fault predictors based on a system's measured structural evolution. We have previously shown there is a significant linear relationship between code churn, a set of synthesized metrics, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. A limiting factor in this and other investigations of a similar nature has been the absence of a quantitative, consistent, and repeatable definition of what constitutes a fault. The rules for fault definition were not sufficiently rigorous to provide unambiguous, repeatable fault counts. Within the framework of a space mission software development effort at the Jet Propulsion Laboratory (JPL), we have developed a standard for the precise enumeration of faults. This new standard permits software faults to be measured directly from configuration control documents.
Our results indicate that reasonable predictors of the number of faults inserted into a software system can be developed from measures of the system's structural evolution. We compared the new method of counting faults with two existing techniques to determine whether the fault counting technique has an effect on the quality of the fault models constructed from those counts. The new fault definition provides higher quality fault models than those obtained using the other definitions of fault.",2004,0, 2497,Geotechnical application of borehole GPR - a case history,"Borehole GPR measurements were performed to complement the site characterization of a planned expansion of a cement plant. This expansion includes a mill and a reclaim facility adjacent to the present buildings, the whole site being located in a karstic environment. Twenty-one geotechnical exploration borings revealed that the depth to bedrock is very irregular (between 1.5 m and 18 m) and that the rocks likely have vertical alteration channels that extend many meters below the rock surface. The purpose of the GPR survey was to reveal the presence of potential cavities and to better determine the required vertical extent of the caissons of the foundations. In general, the subsurface conditions consist of a top fill layer, which is electrically conductive, a residual clay layer and a limestone bedrock. Very poor EM penetration prevented surface measurements. Hence, 100 MHz borehole antennas were used to perform single-hole reflection and cross-hole transmission measurements. Sixteen geotechnical holes were visited during the survey. All holes were surveyed in reflection mode. Nineteen tomographic panels were scanned. Velocity tomograms were obtained for all the data. Attenuation tomography was performed on fewer occasions, due to higher uncertainty in the quality of the amplitude data. Resistivity probability maps were drawn when both velocity and attenuation data were obtained. The velocity tomography calculations were constrained using velocity profiles along the borings. These profiles were obtained by inversion of the single-hole first arrival data. The velocity tomography results show that globally the area can be separated into two zones, one with an average velocity around 0.1 m/ns and one with a slower 0.09 m/ns average velocity. In addition, low velocity anomalies attributable to weathered zones appear in the tomogram. In conclusion, the GPR results revealed no cavities. Sensitive zones were located, which helped the planning and the budgeting of the caisson construction.",2004,0, 2498,Polygonal and polyhedral contour reconstruction in computed tomography,"This paper is about three-dimensional (3-D) reconstruction of a binary image from its X-ray tomographic data. We study the special case of a compact uniform polyhedron totally included in a uniform background and directly perform the polyhedral surface estimation. We formulate this problem as a nonlinear inverse problem using the Bayesian framework. Vertice estimation is done without using a voxel approximation of the 3-D image. It is based on the construction and optimization of a regularized criterion that accounts for surface smoothness. We investigate original deterministic local algorithms, based on the exact computation of the line projections, their update, and their derivatives with respect to the vertice coordinates.
Results are first derived in the two-dimensional (2-D) case, which consists of reconstructing a 2-D object with a deformable polygonal contour from its tomographic data. Then, we investigate the 3-D extension that requires technical adaptations. Simulation results illustrate the performance of polygonal and polyhedral reconstruction algorithms in terms of quality and computation time.",2004,0, 2499,Maintenance issues in outsourced software components,"Software outsourcing is becoming a popular alternative in the software industry, and the evidence for the importance of assuring the quality of subcontractors' contributions is found in ISO 9000-3 Std and ISO/IEC 2001. There are risks and benefits of introducing subcontractors in the framework of the project. One of the major risks is the future maintenance difficulty. Reliable maintenance is only possible if adequate measures are taken in advance during the project's development and maintenance planning phase, and documented in the maintenance contract. In this paper, we present and make a justification for a set of recommendations to make such maintenance reliable and cost-effective. We analyze the associated risks and develop a model to estimate the overall product quality metrics during the maintenance phase.",2004,0, 2500,The application of LDPC codes in image transmission,"In this paper, we combine adaptive modulation and coding (AMC) with LDPC (low-density parity-check) codes and apply it to image transmission under Rayleigh fading channels. The transmitter adjusts the modulation mode according to the channel state information. Through computer simulation, we obtain the performance of our adaptive LDPC scheme in image transmission under a given BER requirement, and we also study the impacts of channel estimation error. We can see that in image transmission, our adaptive LDPC scheme can provide acceptable image quality while achieving higher spectral efficiency.",2004,0, 2501,Modeling and identification for high-performance robot control: an RRR-robotic arm case study,"This paper explains a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: 1) derivation of robot kinematic and dynamic models and establishing correctness of their structures; 2) experimental estimation of the model parameters; 3) model validation; and 4) identification of the remaining robot dynamics, not covered with the derived model. We give particular attention to the design of identification experiments and to online reconstruction of state coordinates, as these strongly influence the quality of the estimation process. The importance of correct friction modeling and the estimation of friction parameters are illuminated. The models of robot kinematics and dynamics can be used in model-based nonlinear control. The remaining dynamics cannot be ignored if high-performance robot operation with adequate robustness is required. The complete procedure is demonstrated for a direct-drive robotic arm with three rotational joints.",2004,0, 2502,Myrinet networks: a performance study,"As network computing becomes commonplace, the interconnection networks and the communication system software become critical in achieving high performance. Thus, it is essential to systematically assess the features and performance of the new networks. Recently, Myricom has introduced a two-port ""E-card"" Myrinet/PCI-X interface.
In this paper, we present the basic performance of its GM2.1 messaging layer, as well as a set of microbenchmarks designed to assess the quality of the MPI implementation on top of GM. These microbenchmarks measure the latency, bandwidth, intra-node performance, computation/communication overlap, parameters of the LogP model, buffer reuse impact, different traffic patterns, and collective communications. We have discovered that the basic MPI performance is close to that offered by GM. We find that the host overhead is very small in our system. The Myrinet network is shown to be sensitive to the buffer reuse patterns. However, it provides opportunities for overlapping computation with communication. The Myrinet network is able to deliver up to 2000 MB/s bandwidth for the permutation patterns.",2004,0, 2503,Testing and defect tolerance: a Rent's rule based analysis and implications on nanoelectronics,"Defect tolerant architectures will be essential for building economical gigascale nanoelectronic computing systems to permit functionality in the presence of a significant number of defects. The central idea underlying a defect tolerant configurable system is to build the system out of partially perfect components, detect the defects and configure the available good resources using software. In this paper we discuss implications of defect tolerance on power, area, delay and other relevant parameters for computing architectures. We present a Rent's rule based abstraction of testing for VLSI systems and evaluate the redundancy requirements for observability. It is shown that for a very high interconnect defect density, a prohibitively large number of redundant components are necessary for observability and this has an adverse effect on the system performance. Through a unified framework based on a priori wire length estimation and Rent's rule we illustrate the hidden cost of supporting such an architecture.",2004,0, 2504,Evaluation of reward analysis methods with MRMSolve 2.0,"The paper presents the 2.0 release of MRMSolve, a Markov reward model (MRM) analysis tool. This new release integrates recent research results, such as MRMs with partial reward loss, second order MRMs and their combination, with old and widely known reward analysis methods, such as the ones by DeSouza-Gail, Nabli-Sericola and Donatiello-Grassi. The paper compares the mentioned direct distribution analysis methods with each other and with the moments based reward estimation methods.",2004,0, 2505,Estimating the size of Web applications by using a simplified function point method,"Software size estimation is a key factor to determine the amount of time and effort needed to develop software systems, and Web applications are no exception. In this paper a simplified version of the IFPUG (International Function Point Users Group) function points, based on the simplification ideas suggested by NESMA (Netherlands Software Metrics Association) to estimate the size of management information systems, is presented. In an empirical study, twenty Web applications were analyzed. The estimates using the simplified method were close to the ones using the IFPUG detailed method. Based on the results, it was possible to establish a simplified method to estimate the size of Web applications according to the development characteristics of the studied company.",2004,0, 2506,Wavelet based transformer protection,"This paper presents the development of a wavelet based transformer protection algorithm.
In this paper, a laboratory transformer is modelled and simulated to assess the accuracy of the proposed algorithm by using ATP-EMTP. The main objective of this paper is to analyze discontinuities in current signals during phase to ground faults and phase to phase faults. The generated data is used with the MATLAB software to test the performance of the proposed technique regarding the speed of response, computational burden and reliability. Maximum likelihood parameter estimation is used to interpret detail coefficients.",2004,0, 2507,Electronic circuits tests in laboratory for superimposing voltages techniques,"The tests carried out in electrical and electronic laboratories allow researchers to advance in the field of superimposing voltages techniques. The use of simulation software applications is mentioned as part of this study. The real measurements have been taken in two environments. The first tests were developed over the line used to feed the laboratory equipment. The second activity has been carried out in an environment more secure for researchers and without monopolizing a power line to develop the tests.",2004,0, 2508,Checkpointing for peta-scale systems: a look into the future of practical rollback-recovery,"Over the past two decades, rollback-recovery via checkpoint-restart has been used with reasonable success for long-running applications, such as scientific workloads that take from a few hours to a few months to complete. Currently, several commercial systems and publicly available libraries exist to support various flavors of checkpointing. Programmers typically use these systems if they are satisfactory or otherwise embed checkpointing support themselves within the application. In this paper, we project the performance and functionality of checkpointing algorithms and systems as we know them today into the future. We start by surveying the current technology roadmap and particularly how Peta-Flop capable systems may be plausibly constructed in the next few years. We consider how rollback-recovery as practiced today will fare when systems may have to be constructed out of thousands of nodes. Our projections predict that, unlike current practice, the effect of rollback-recovery may play a more prominent role in how systems may be configured to reach the desired performance level. System planners may have to devote additional resources to enable rollback-recovery and the current practice of using ""cheap commodity"" systems to form large-scale clusters may face serious obstacles. We suggest new avenues for research to react to these trends.",2004,0, 2509,Instantaneous line-frequency measurement under nonstationary situations,"A virtual instrument for the measurement of instantaneous power-system-frequency is proposed. It is based on the frequency estimation of the voltage signal using three equidistant samples. An algorithm is further developed that diminishes the variance of the estimation. The procedure is applied to the case of single and three-phase networks and relative errors in the frequency estimation are obtained. Low cost hardware, consisting of a compatible PC, a standard data acquisition card and a signal conditioning module, has been used in conjunction with a software application developed with LABVIEW™. Finally, measurements using single and three-phase signals, simulating severe conditions of signal quality, were performed. A variation of the frequency throughout the measurement time has been assumed, according to a sinusoidal signal of 5 Hz, within a ±1 Hz margin.
The developed tool has been proven, with worst case data and relative errors of 0.1% and 0.025% having been obtained for single and three-phase signals, respectively.",2004,0, 2510,Collaborative multisensor network architecture based on smart Web sensors for power quality applications,"In this paper, the aim of the authors is to propose a multisensor network architecture for distributed power quality measurements, based on the use of low cost smart Web sensors and on a collaborative mobile agent mechanism that allow an easy remote access, for a generic user, to the measurement results. The network has a full decentralized processing architecture in which the sensors can collaborate in the signal analysis. A low cost smart Web sensor is also presented. It is mainly characterized by an architecture, both hardware and software, modular and adaptable to specific requirement that allows an easy integration into distributed measurement systems. Finally, in-field measurements, conducted on low voltage sub-networks, are reported for some of the main power quality disturbances.",2004,0, 2511,Synchronization techniques for power quality instruments,"Voltage characteristics measurement in power systems requires the accurate fundamental frequency estimation and the signal synchronization, even in the presence of disturbances. In this paper, the authors have developed and tested two innovative techniques for instrument synchronization. The first is based on signal spectral analysis techniques, performed by means of chirp-Z (CZT) analysis. The second is a PLL software, based upon a time-domain coordinate transformation and an innovative phase detection technique. In order to evaluate how synchronization techniques are adversely affected by the application of a disturbing influence, experimental tests have been carried out, taking into account the requirements of the standards. The proposed techniques have been compared with standard hardware solutions. In the paper, the proposed techniques are described, experimental results are presented and the accuracy specifications are discussed.",2004,0,3518 2512,"Software's secret sauce: the ""-ilities"" [software quality]","If beauty is in the eye of the beholder, then quality must be as well. We live in a world where beauty to one is a complete turnoff to another. Software quality is no different. We have the developer's perspective, the end users perspective, the testers perspective, and so forth. As you can see, meeting the requirements might be different from being fit for a purpose, which can also be different from complying with rules and regulations on how to develop and deploy the software. Yet we can think of all three perspectives as ways to determine how to judge and assess software quality. These three perspectives tie directly to the persistent software attributes focus section in this issue and, consequently, to the concept of software ""-ilities"". The -ilities (or software attributes) are a collection of closely related behaviors that by themselves have little or no value to the end users but that can greatly increase a software application or system's value when added.",2004,0, 2513,Measuring application error rates for network processors,"Faults in computer systems can occur due to a variety of reasons. In many systems, an error has a binary effect, i.e. the output is either correct or it is incorrect. However, networking applications exhibit different properties. 
For example, although a portion of the code behaves incorrectly due to a fault, the application can still work correctly. The integrity of a network system is often unchanged during faults. Therefore, measuring the effects of faults on network processor applications requires new measurement metrics to be developed. In this paper, we highlight essential application properties and data structures that can be used to measure the error behavior of network processors. Using these metrics, we study the error behavior of seven representative networking applications under different cache access fault probabilities.",2004,0, 2514,Who solved the optimal software release problems based on Markovian software reliability model?,"In this paper, we answer the titled question for the typical software release problems based on a Markovian software reliability model. Under the assumption that the software fault-detection process is described by a Markovian birth process with absorption, like the Jelinski-Moranda model (1972), we formulate the total expected software costs with two different release policies, and compare them in terms of the cost minimization. Also, we consider another software cost model taking account of the gain in reliability. It can be concluded through numerical examples that the existing optimal software release policy underestimates and overestimates the real optimal software release time and its associated cost function, respectively.",2004,0, 2515,Power quality factor and line-disturbances measurements in three-phase systems,"A power quality meter (PQM) is presented for measuring, as a first objective, a single indicator, designated power quality factor (PQF), in the range between zero and one, which integrally reflects the power transfer quality of a general three phase network feeding unbalanced nonlinear loads. The PQF definition is based on the analysis of functions in the frequency domain, separating the fundamental terms from the harmonic terms of the Fourier series. Then, quality aspects considered in the PQF definition can be calculated: a) the voltage and current harmonic levels, b) the degree of unbalance, and c) the phase displacement factor in the different phases at the fundamental frequency. As a second objective, the PQM has been designed for detecting, classifying and organizing power line disturbances. For monitoring power line disturbances, the PQM is configured as a virtual instrument, which automatically classifies and organizes them in a database while they are being recorded. The types of disturbances include: impulse, oscillation, sag, swell, interruption, undervoltage, overvoltage, harmonics and frequency variation. For amplitude disturbances (impulse, sag, swell, interruption, undervoltage and overvoltage), the PQM permits the measurement of parameters such as amplitude, start time and final time. Measurement of harmonic distortion allows recording and visual presentation of the spectrum of amplitudes and phases corresponding to the first 40 harmonics. Software tools use the database structure to present summaries of power disturbances and locate an event by severity or time of occurrence. Simulated measurements are included to demonstrate the versatility of the instrument.",2004,0, 2516,Weighted least square estimation algorithm with software phase-locked loop for voltage sag compensation by SMES,"A superconducting magnetic energy storage (SMES) system is developed to protect a critical load from momentary voltage sags.
This system is composed of a 0.3 MJ SMES coil, a 150 kVA IGBT-based current source inverter and a phase-shift inductor. In order to compensate the load voltage effectively whenever a source voltage sag happens, it is crucial for the signal processing algorithm to extract the fundamental components from the sampled signals quickly, precisely and stably. In this paper, an estimation algorithm based on the weighted least square principle is developed to detect the positive- and negative-sequence fundamental components from the measured AC voltages and currents. A software phase-locked loop (SPLL) is applied to track the positive-sequence component of the source voltage. Simulations and experiments are carried out to demonstrate the algorithms. The results are presented.",2004,0, 2517,Extracting facts from open source software,"Open source software systems are becoming increasingly important these days. Many companies are investing in open source projects and lots of them are also using such software in their own work. But because open source software is often developed without proper management, the quality and reliability of the code may be uncertain. The quality of the code needs to be measured and this can be done only with the help of proper tools. We describe a framework called Columbus with which we calculate the object oriented metrics validated by Basili et al. to illustrate how fault-proneness detection can be done for the open source Web and e-mail suite called Mozilla. We also compare the metrics of several versions of Mozilla to see how the predicted fault-proneness of the software system changed during its development. The Columbus framework has been further developed recently with a compiler wrapping technology that now gives us the possibility of automatically analyzing and extracting information from software systems without modifying any of the source code or makefiles. We also introduce our fact extraction process here to show what logic drives the various tools of the Columbus framework and what steps need to be taken to obtain the desired facts.",2004,0, 2518,Developing a multi-objective decision approach to select source-code improving transformations,"Our previous work on improving the quality of object-oriented legacy systems through re-engineering proposed a software transformation framework based on soft-goal inter-dependency graphs (Tahvildari and Kontogiannis, 2002). We considered a class of transformations where a program is transformed into another program in the same language (source-to-source transformations) and where the two programs may differ in specific qualities such as performance and maintainability. This paper defines a decision making process that determines a list of source-code improving transformations among several applicable transformations. The decision-making process is built on a multi-objective decision analysis technique. This type of technique is necessary as there are a number of different, and sometimes conflicting, criteria among nonfunctional requirements. For the migrant system, the proposed approach uses heuristic estimates to guide the discovery process.",2004,0, 2519,"Analysis, testing and re-structuring of Web applications","The current situation in the development of Web applications is reminiscent of the early days of software systems, when quality was totally dependent on individual skills and lucky choices.
In fact, Web applications are typically developed without following a formalized process model: requirements are not captured and design is not considered; developers quickly move to the implementation phase and deliver the application without testing it. Not differently from more traditional software system, however, the quality of Web applications is a complex, multidimensional attribute that involves several aspects, including correctness, reliability, maintainability, usability, accessibility, performance and conformance to standards. In this context, aim of this PhD thesis was to investigate, define and apply a variety of conceptual tools, analysis, testing and restructuring techniques able to support the quality of Web applications. The goal of analysis and testing is to assess the quality of Web applications during their development and evolution; restructuring aims at improving the quality by suitably changing their structure.",2004,0, 2520,A neuro-fuzzy tool for software estimation,"Accurate software estimation such as cost estimation, quality estimation and risk analysis is a major issue in software project management. We present a soft computing framework to tackle this challenging problem. We first use a preprocessing neuro-fuzzy inference system to handle the dependencies among contributing factors and decouple the effects of the contributing factors into individuals. Then we use a neuro-fuzzy bank to calibrate the parameters of contributing factors. In order to extend our framework into fields that lack of an appropriate algorithmic model of their own, we propose a default algorithmic model that can be replaced when a better model is available. Validation using industry project data shows that the framework produces good results when used to predict software cost.",2004,0, 2521,Probabilistic evaluation of object-oriented systems,"The goal of this study is the development of a probabilistic model for the evaluation of flexibility of an object-oriented design. In particular, the model estimates the probability that a certain class of the system gets affected when new functionality is added or when existing functionality is modified. It is obvious that when a system exhibits a large sensitivity to changes, the corresponding design quality is questionable. Useful conclusions can be drawn from this model regarding the comparative evaluation of two or more object-oriented systems or even the assessment of several generations of the same system, in order to determine whether or not good design principles have been applied. The proposed model has been implemented in a Java program that can automatically analyze the class diagram of a given system.",2004,0, 2522,An evaluation of k-nearest neighbour imputation using Likert data,"Studies in many different fields of research suffer from the problem of missing data. With missing data, statistical tests will lose power, results may be biased, or analysis may not be feasible at all. There are several ways to handle the problem, for example through imputation. With imputation, missing values are replaced with estimated values according to an imputation method or model. In the k-nearest neighbour (k-NN) method, a case is imputed using values from the k most similar cases. In this paper, we present an evaluation of the k-NN method using Likert data in a software engineering context. We simulate the method with different values of k and for different percentages of missing data. 
Our findings indicate that it is feasible to use the k-NN method with Likert data. We suggest that a suitable value of k is approximately the square root of the number of complete cases. We also show that by relaxing the method rules with respect to selecting neighbours, the ability of the method remains high for large amounts of missing data without affecting the quality of the imputation.",2004,0, 2523,The necessity of assuring quality in software measurement data,"Software measurement data is often used to build software quality classification models. Related literature has focussed on developing new classification techniques and schemes with the aim of improving classification accuracy. However, the quality of software measurement data used to build such classification models plays a critical role in their accuracy and usefulness. We present empirical case studies, which demonstrate that despite using a very large number of diverse classification techniques for building software quality classification models, the classification accuracy does not show a dramatic improvement. For example, a simple lines-of-code based classification performs comparably to some other more advanced classification techniques such as neural networks, decision trees, and case-based reasoning. Case studies of the NASA JM1 and KC2 software measurement datasets (obtained through the NASA Metrics Data Program) are presented. Some possible reasons that affect the quality of a software measurement dataset include presence of data noise, errors due to improper software data collection, exclusion of software metrics that are better representative software quality indicators, and improper recording of software fault data. This work shows, through an empirical study, that instead of searching for a classification technique that performs well for a given software measurement dataset, the software quality and development teams should focus on improving the quality of the software measurement dataset.",2004,0, 2524,Module-order modeling using an evolutionary multi-objective optimization approach,"The problem of quality assurance is important for software systems. The extent to which software reliability improvements can be achieved is often dictated by the amount of resources available for the same. A prediction for risk-based rankings of software modules can assist in the cost-effective delegation of the limited resources. A module-order model (MOM) is used to gauge the performance of the predicted rankings. Depending on the software system under consideration, multiple software quality objectives may be desired for a MOM; e.g., the desired rankings may be such that if 20% of modules were targeted for reliability enhancements then 80% of the faults would be detected. In addition, it may also be desired that if 50% of modules were targeted then 100% of the faults would be detected. Existing works related to MOM(s) have used an underlying prediction model to obtain the rankings, implying that only the average, relative, or mean square errors are minimized. Such an approach does not provide an insight into the behavior of a MOM, the performance of which focusses on how many faults are accounted for by the given percentage of modules enhanced. We propose a methodology for building MOM(s) by implementing a multiobjective optimization with genetic programming. It facilitates the simultaneous optimization of multiple performance objectives for a MOM.
Other prediction techniques, e.g., multiple linear regression and neural networks, cannot achieve multiobjective optimization for MOM(s). A case study of a high-assurance telecommunications software system is presented. The observed results show a new promise in the modeling of goal-oriented software quality estimation models.",2004,0, 2525,A replicated experiment of usage-based and checklist-based reading,"Software inspection is an effective method to detect faults in software artefacts. Several empirical studies have been performed on reading techniques, which are used in the individual preparation phase of software inspections. Besides new experiments, replications are needed to increase the body of knowledge in software inspections. We present a replication of an experiment, which compares usage-based and checklist-based reading. The results of the original experiment show that reviewers applying usage-based reading are more efficient and effective in detecting the most critical faults from a user's point of view than reviewers using checklist-based reading. We present the data of the replication together with the original experiment and compare the experiments. The main result of the replication is that it confirms the result of the original experiment. This replication strengthens the evidence that usage-based reading is an efficient reading technique.",2004,0, 2526,A controlled experiment for evaluating a metric-based reading technique for requirements inspection,"Natural language requirements documents are often verified by means of some reading technique. Some recommendations for defining a good reading technique point out that a concrete technique must not only be suitable for specific classes of defects, but also for a concrete notation in which requirements are written. Following this suggestion, we have proposed a metric-based reading (MBR) technique used for requirements inspections, whose main goal is to identify specific types of defects in use cases. The systematic approach of MBR is basically based on a set of rules such as ""if the metric value is too low (or high) the presence of defects of type defType1, ..., defTypen must be checked"". We hypothesised that if the reviewers know these rules, the inspection process is more effective and efficient, which means that the defect detection rate is higher and the number of defects identified per unit of time increases. But this hypothesis lacks validity if it is not empirically validated. For that reason the main goal is to describe a controlled experiment we carried out to ascertain if the usage of MBR really helps in the detection of defects in comparison with a simple checklist technique. The experiment results revealed that MBR reviewers were more effective at detecting defects than checklist reviewers, but they were not more efficient, because MBR reviewers took longer than checklist reviewers on average.",2004,0, 2527,Assessing the impact of active guidance for defect detection: a replicated experiment,"Scenario-based reading (SBR) techniques have been proposed as an alternative to checklists to support the inspectors throughout the reading process in the form of operational scenarios. Many studies have been performed to compare these techniques regarding their impact on the inspector performance. However, most of the existing studies have compared generic checklists to a set of specific reading scenarios, thus confounding the effects of two SBR key factors: separation of concerns and active guidance.
In a previous work we have preliminarily conducted a repeated case study at the University of Kaiserslautern to evaluate the impact of active guidance on inspection performance. Specifically, we compared reading scenarios and focused checklists, which were both characterized as being perspective-based. The only difference between the reading techniques was the active guidance provided by the reading scenarios. We now have replicated the initial study with a controlled experiment using as subjects 43 graduate students in computer science at the University of Bari. We did not find evidence that active guidance in reading techniques affects the effectiveness or the efficiency of defect detection. However, inspectors showed a better acceptance of focused checklists than reading scenarios.",2004,0, 2528,"Estimating effort by use case points: method, tool and case study","The use case point (UCP) method has been proposed to estimate software development effort in the early phase of a software project and is used in many software organizations. Intuitively, UCP is measured by counting the number of actors and transactions included in use case models. Several tools to support calculating UCP have been developed. However, they only extract actors and use cases, and the complexity classification of them is conducted manually. We have been introducing the UCP method to software projects in Hitachi Systems & Services, Ltd. For effective introduction of the UCP method, we have developed an automatic use case measurement tool, called U-EST. This paper describes the idea to automatically classify the complexity of actors and use cases from a use case model. We have also applied the U-EST to actual use case models and examined the difference between the values given by the tool and those given by the specialist. As a result, UCPs measured by the U-EST are similar to the ones given by the specialist.",2004,0, 2529,Assessment of software measurement: an information quality study,"This paper reports on the first phase of an empirical research project concerning methods to assess the quality of the information in software measurement products. Two measurement assessment instruments are developed and deployed in order to generate two sets of analyses and conclusions. These sets will be subjected to an evaluation of their information quality in phase two of the project. One assessment instrument was based on AIMQ, a generic model of information quality. The other instrument was developed by targeting specific practices relating to software project management and identifying requirements for information support. Both assessment instruments delivered data that could be used to identify opportunities to improve measurement. The generic instrument is cheap to acquire and deploy, while the targeted instrument requires more effort to build. Conclusions about the relative merits of the methods, in terms of their suitability for improvement purposes, await the results from the second phase of the project.",2004,0, 2530,Assessing quantitatively a programming course,"The focus on assessment and measurement represents the main distinction between programming courses and software engineering courses in computer curricula. We introduced testing as an essential asset of a programming course. It allows precise measurement of the achievements of the students and allows an objective assessment of the teaching itself. We measured the size and evolution of the programs developed by the students and correlated these metrics with the grades.
We plan to progressively collect a large baseline. We compared the productivity and defect density of the programs developed by the students during the exam to industrial data and similar academic experiences. We found that the productivity of our students is very high even compared to industrial settings. Our defect density (before rework) is higher than the industrial one, which includes rework.",2004,0, 2531,Assessing usability through perceptions of information scent,Information scent is an established concept for assessing how users interact with information retrieval systems. This paper proposes two ways of measuring user perceptions of information scent in order to assess the product quality of Web or Internet information retrieval systems. An empirical study is presented which validates these measures through an evaluation based on a live e-commerce application. This study shows a strong correlation between the measures of perceived scent and system usability. Finally the wider applicability of these methods is discussed.,2004,0, 2532,Software failure rate and reliability incorporating repair policies,"Reliability of a software application, its failure rate and the residual number of faults in an application are the three most important metrics that provide a quantitative assessment of the failure characteristics of an application. Typically, one of many stochastic models known as software reliability growth models (SRGMs) is used to describe the failure behavior of an application in its testing phase, and obtain an estimate of the above metrics. In order to ensure analytical tractability, SRGMs are based on an assumption of instantaneous repair and thus the estimates of the metrics obtained using SRGMs tend to be optimistic. In practice, fault repair activity consumes a nonnegligible amount of time and resources. Also, repair may be conducted according to many policies which are reflective of the schedule and budget constraints of a project. A few research efforts that have sought to incorporate repair into SRGMs are restrictive, since they consider only one of the several SRGMs, model the repair process using a constant rate, and provide an estimate of only the residual number of faults. These techniques do not address the issue of estimating application failure rate and reliability in the presence of repair. In this paper we present a generic framework which relies on the rate-based simulation technique in order to provide the capability to incorporate various repair policies into the finite failure nonhomogeneous Poisson process (NHPP) class of software reliability growth models. We also present a technique to compute the failure rate and the reliability of an application in the presence of repair. The potential of the framework to obtain quantitative estimates of the above three metrics taking into consideration different repair policies is illustrated using several scenarios.",2004,0, 2533,Adaptive random testing through dynamic partitioning,"Adaptive random testing (ART) describes a family of algorithms for generating random test cases that have been experimentally demonstrated to have greater fault-detection capacity than simple random testing.
We outline and demonstrate two new ART algorithms, and demonstrate experimentally that they offer similar performance advantages, with considerably lower overhead than other ART algorithms.",2004,0, 2534,Towards a functional size measure for object-oriented systems from requirements specifications,This work describes a measurement protocol to map the concepts used in the OO-method requirements model onto the concepts used by the COSMIC full function points (COSMIC-FFP) functional size measurement method. This protocol describes a set of measurement operations for modeling and sizing object-oriented software systems from requirements specifications obtained in the context of the OO-method. This development method starts from a requirements model that allows the specification of software functional requirements and generates a conceptual model through a requirements analysis process. The main contribution of this work is an extended set of rules that allows estimating the functional size of OO systems at an early stage of the development lifecycle. A case study is introduced to report the obtained results from a practical point of view.,2004,0, 2535,Machine-learning techniques for software product quality assessment,"Integration of metrics computation in most popular computer-aided software engineering (CASE) tools is a marked tendency. Software metrics provide quantitative means to control the software development and the quality of software products. The ISO/IEC international standard (14598) on software product quality states, ""Internal metrics are of little value unless there is evidence that they are related to external quality"". Many different approaches have been proposed to build such empirical assessment models. In this work, different machine learning (ML) algorithms are explored with regard to their capacities of producing assessment/predictive models, for three quality characteristics. The predictability of each model is then evaluated and their applicability in a decision-making system is discussed.",2004,0, 2536,On the statistical properties of the F-measure,"The F-measure - the number of distinct test cases to detect the first program failure - is an effectiveness measure for debug testing strategies. We show that for random testing with replacement, the F-measure is distributed according to the geometric distribution. A simulation study examines the distribution of two adaptive random testing methods, to study how closely their sampling distributions approximate the geometric distribution, revealing that in the worst case scenario, the sampling distribution for adaptive random testing is very similar to random testing. Our results have provided an answer to a conjecture that adaptive random testing is always a more effective alternative to random testing, with reference to the F-measure. We consider the implications of our findings for previous studies conducted in the area, and make recommendations to future studies.",2004,0, 2537,A methodology for constructing maintainability model of object-oriented design,"It is obvious that qualities of software design heavily affects on qualities of software ultimately developed. One of claimed advantages of object-oriented paradigm is the ease of maintenance. The main goal of this work is to propose a methodology for constructing maintainability model of object-oriented software design model using three techniques. Two subcharacteristics of maintainability: understandability and modifiability are focused in this work. 
A controlled experiment is performed in order to construct maintainability models of object-oriented designs using the experimental data. The first maintainability model is constructed using the metrics-discriminant technique. This technique analyzes the pattern of correlation between maintainability levels and structural complexity design metrics applying discriminant analysis. The second one is built using the weighted-score-level technique. The technique uses a weighted sum method by combining understandability and modifiability levels which are converted from understandability and modifiability scores. The third one is created using the weighted-predicted-level technique. Weighted-predicted-level uses a weighted sum method by combining predicted understandability and modifiability levels, obtained from applying understandability and modifiability models. This work presents a comparison of the maintainability models obtained from the three techniques.",2004,0, 2538,Providing data transfer with QoS as agreement-based service,"Over the last decade, grids have become a successful tool for providing distributed environments for secure and coordinated execution of applications. The successful deployment of many realistic applications in such environments on a large scale has motivated their use in experimental science [L. C. Pearlman et al. (2004), K. Keahey et al. (2004)] where grid-based computations are used to assist in ongoing experiments. In such scenarios, quality of service (QoS) guarantees on execution as well as data transfer are desirable. The recently proposed WS-Agreement model [K. Czajkowski et al. (2004), K. Keahey et al. (2004)] provides an infrastructure within which such quality of service can be negotiated and obtained. We have designed and implemented a data transfer service that exposes an interface based on this model and defines agreements which guarantee that, within a certain confidence level, file transfer can be completed under a specified time. The data transfer service accepts a client's request for data transfer and makes an agreement with the client based on QoS metrics (such as the transfer time and confidence level with which the service can be provided). In our approach we use prediction as a base for formulating an agreement with the client, and we combine prediction and rate limiting to adaptively ensure that the agreement is met.",2004,0, 2539,A WSLA-based monitoring system for grid service - GSMon,"The target of grid monitoring and management is to monitor services in the grid for fault detection, performance analysis, performance tuning, load balancing and scheduling. This paper introduces the design and implementation of a WSLA-based grid monitoring and management system - GSMon. In GSMon, we use a service-oriented architecture, give monitoring information of the grid service a unique format by using WSLA and develop extended modules in the OGSA container. GSMon conforms to grid service and Web service standards and uses dynamic deployment technology to solve the problem of monitoring new grid services. It is a novel infrastructure for grid monitoring and management with high flexibility and scalability.",2004,0, 2540,An integrated design of multipath routing with failure survivability in MPLS networks,"Multipath routing employs multiple parallel paths between the source and destination for a connection request to improve resource utilization of a network. In this paper, we provide an integrated design of multipath routing in MPLS networks.
In addition, we take into account the quality of service (QoS) in carrying delay-sensitive traffic and failure survivability in the design. Path protection or restoration policies enables the network to accommodate link failures and avoid traffic loss. We evaluate the performance of the proposed schemes in terms of call blocking probability, network resource utilization and load balancing factor. The results demonstrate that the proposed integrated design framework can provide effective network failure survivability, and also achieve better load balancing and/or higher network resource utilization",2004,0, 2541,Assessing and improving state-based class testing: a series of experiments,"This work describes an empirical investigation of the cost effectiveness of well-known state-based testing techniques for classes or clusters of classes that exhibit a state-dependent behavior. This is practically relevant as many object-oriented methodologies recommend modeling such components with statecharts which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases. One is a very precise oracle checking the concrete state of objects whereas the other one is based on the notion of state invariant (abstract states). Results show that there is a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice to make that should be driven by the characteristics of the component to be tested, such as its criticality, complexity, and test budget.",2004,0, 2542,A cognitive-based mechanism for constructing software inspection teams,"Software inspection is well-known as an effective means of defect detection. Nevertheless, recent research has suggested that the technique requires further development to optimize the inspection process. As the process is inherently group-based, one approach to improving performance is to attempt to minimize the commonality within the process and the group. This work proposes an approach to add diversity into the process by using a cognitively-based team selection mechanism. The paper argues that a team with diverse information processing strategies, as defined by the selection mechanism, maximize the number of different defects discovered.",2004,0, 2543,Efficient calculation of resolution and covariance for penalized-likelihood reconstruction in fully 3-D SPECT,"Resolution and covariance predictors have been derived previously for penalized-likelihood estimators. These predictors can provide accurate approximations to the local resolution properties and covariance functions for tomographic systems given a good estimate of the mean measurements. Although these predictors may be evaluated iteratively, circulant approximations are often made for practical computation times. However, when numerous evaluations are made repeatedly (as in penalty design or calculation of variance images), these predictors still require large amounts of computing time. 
In Stayman and Fessler (2000), we discussed methods for precomputing a large portion of the predictor for shift-invariant system geometries. In this paper, we generalize the efficient procedure discussed in Stayman and Fessler (2000) to shift-variant single photon emission computed tomography (SPECT) systems. This generalization relies on a new attenuation approximation and several observations on the symmetries in SPECT systems. These new general procedures apply to both two-dimensional and fully three-dimensional (3-D) SPECT models, that may be either precomputed and stored, or written in procedural form. We demonstrate the high accuracy of the predictions based on these methods using a simulated anthropomorphic phantom and fully 3-D SPECT system. The evaluation of these predictors requires significantly less computation time than traditional prediction techniques, once the system geometry specific precomputations have been made.",2004,0, 2544,Improvement of PD location in GIS [gas-insulated switchgear],"The measuring system we used consisted of two UHF sensors with a frequency band of 400 MHz-500 MHz and a digital oscilloscope with maximum 2 GS/s digital rate in its two channels. The difference in arrival times of UHF signals at two sensors can be measured to assist with locating defects in GIS. The test results in the laboratory indicate that the fault location precision depends on the digital rate of the oscilloscope. On the actual site test, if the sample rate is low, there may not be enough sample plots to recover the real pulse wavefronts, which makes it difficult to identify the arrival times at the two sensors of the first peak of the UHF waveform. In this paper, a new algorithm has been described, which applies cubic spline interpolation to increase points between the two real sample points, and improve the location precision of the time-of-flight method. And then the distance difference between the two waves can be determined by using a cross-correlation function. In addition, application software has been developed to help locate the PD source in GIS.",2004,0, 2545,Multi-antenna testbeds for research and education in wireless communications,"Wireless communication systems present unique challenges and trade-offs at various levels of the system design process. Since a variety of performance measures are important in wireless communications, a family of testbeds becomes essential to validate the gains reported by the theory. Wireless testbeds also play a very important role in academia for training students and enabling research. In this article we discuss a classification scheme for wireless testbeds and present an example of the testbeds developed at UCLA for each of these cases. We present the unique capabilities of these testbeds, provide the results of the experiments, and discuss the role they play in an educational environment.",2004,0, 2546,Anomaly detection using the emerald nanosatellite on board expert system,"An expert system is a type of program designed to model human knowledge and expertise. This work describes the design, implementation, and testing of an onboard expert system developed for the dual spacecraft Emerald small satellite mission. This system takes advantage of Emerald's distributed computing architecture and is currently being used for on-board fault detection.
The distributed computing architecture is composed of a network of PICmicro and Atmel microcontrollers linked together by an I2C serial data communication bus which also supports sensor and component integration via Dallas 1-wire and RS232 standards. The expert system software is executed by an Atmel microcontroller within Emerald's expert subsystem hardware. The human knowledge and expertise that the system simulates is contained within software ""rules"" that can be easily modified from the ground. The flexibility offered by this system allows the ground operator to add, modify, or remove logical operations on-orbit and overcomes the limitations imposed by hardwired systems. While expert systems have been used on spacecraft in the past, its role on Emerald for on-board fault-detection using threat integrals and persistence counters further demonstrates the power and versatility of such systems. Results include experimental data verifying the expert system's performance and its ability to distinguish threat levels posed by out-of-limit sensor readings. This paper describes the technical design of the aforementioned features and its use as part of the Emerald satellite mission.",2004,0, 2547,Rapid detection of faults for safety critical aircraft operation,"Fault diagnosis typically assumes a sufficiently large fault signature and enough time for a reliable decision to be reached. However, for a class of safety critical faults on commercial aircraft engines, prompt detection is paramount within a millisecond range to allow accommodation to avert undesired engine behavior. At the same time, false positives must be avoided to prevent inappropriate control action. To address these issues, several advanced features were developed that operate on the residuals of a model based detection scheme. We show that these features pick up system changes reliably within the required time. A bank of binary classifiers determines the presence of the fault as determined by a maximum likelihood hypothesis test. We show performance results for four different faults at various levels of severity and show performance results throughout the entire flight envelope on a high fidelity aircraft engine model.",2004,0, 2548,A multiobjective module-order model for software quality enhancement,"The knowledge, prior to system operations, of which program modules are problematic is valuable to a software quality assurance team, especially when there is a constraint on software quality enhancement resources. A cost-effective approach for allocating such resources is to obtain a prediction in the form of a quality-based ranking of program modules. Subsequently, a module-order model (MOM) is used to gauge the performance of the predicted rankings. From a practical software engineering point of view, multiple software quality objectives may be desired by a MOM for the system under consideration: e.g., the desired rankings may be such that 100% of the faults should be detected if the top 50% of modules with highest number of faults are subjected to quality improvements. Moreover, the management team for the same system may also desire that 80% of the faults should be accounted if the top 20% of the modules are targeted for improvement. Existing work related to MOM(s) use a quantitative prediction model to obtain the predicted rankings of program modules, implying that only the fault prediction error measures such as the average, relative, or mean square errors are minimized. 
Such an approach does not provide a direct insight into the performance behavior of a MOM. For a given percentage of modules enhanced, the performance of a MOM is gauged by how many faults are accounted for by the predicted ranking as compared with the perfect ranking. We propose an approach for calibrating a multiobjective MOM using genetic programming. Other estimation techniques, e.g., multiple linear regression and neural networks, cannot achieve multiobjective optimization for MOM(s). The proposed methodology facilitates the simultaneous optimization of multiple performance objectives for a MOM. Case studies of two industrial software systems are presented, the empirical results of which demonstrate a new promise for goal-oriented software quality modeling.",2004,0, 2549,Land cover classification of SSC image: unsupervised and supervised classification using ERDAS Imagine,"NASA's Earth Sciences program is primarily focused on providing high quality data products to its science community. NASA also recognizes the need to increase its involvement with the general public, including areas of information and education. The main objective of this study is to classify the vegetation, man-made structures, and miscellaneous objects from the Satellite Image of NASA Stennis Space Center (SSC), Mississippi, USA, by using the software ERDAS Imagine 8.5. The ERDAS Imagine software performs the classification of an image for identification of terrestrial features based on the spectral analysis. For classification of the SSC image, the multispectral data was used for categorization of terrestrial objects, vegetation and shadows of the trees. There are two ways to classify pixels into different categories: Supervised and unsupervised. The classification of unsupervised data through ERDAS Imagine helped in identifying the terrestrial objects in the Study Image (SSC). The spectral pattern present within the data for each pixel was used as the numerical basis for categorization. The first analysis of the Image SSC involved the use of generalized Unsupervised Classification with 4 categories (Grass, Trees, Man-Made and Unknown). The result of the Unsupervised Image was used to create another image by using Supervised classification. The key difference between the two images is the ability of the supervised image to decipher similar images, such as the roofs of the buildings and the shadows of the trees. Image stacking was conducted to create a fully classified image to separate shadow, grass, man-made, and trees. This poster will describe the procedures for viewing and measuring the image. Computer-guided (Unsupervised) and User-guided (Supervised) procedures will be described for image stacking to view each classification one at a time and stack them into a complete Classified Image. The application of unsupervised and supervised classification in agriculture will be discussed by giving examples of measurement of field reflectance of two classes of giant salvinia [green giant salvinia (green foliage) and senesced giant salvinia (mixture of green and brown foliage)], an invasive aquatic weed in Texas.",2004,0, 2550,Assessing biogenic emission impact on the ground-level ozone concentration by remote sensing and numerical model,"Emission inventory data is one of the major inputs for all air quality simulation models.
Emission inventory data for the prediction of ground-level ozone concentration, grouped as point, area, mobile and biogenic sources, are a composite of all reported and estimated pollutant emission information from many organizations. Before applying an air quality simulation model, the emission inventory data generally require additional processing for meeting spatial, temporal, and speciation requirements using advanced information technologies. In this study, SMOKE was set up to update the essential emission processing. The emission processing work was performed to prepare emission input for U.S. EPA's Models-3/CMAQ. The fundamental anthropogenic emission inventory commonly used in Taiwan is the TEDS 4.2 software package. However, without the proper inclusion of accurate estimation of biogenic emission, the estimation of ground-level ozone concentration may not be meaningful. With the aid of SPOT satellite images, biogenic gas emission modeling analysis can be achieved to fit in BEIS-2 in SMOKE. Improved utilization of land use identification data, based on SPOT outputs and emission factors, may be influential in support of the modeling work. During this practice, land use was identified via an integrated assessment based on both geographical information system and remote sensing technologies; and emission factors were adapted from a series of existing databases in the literature. The research findings clearly indicate that the majority of biogenic VOC emissions, which occur in the mountains and farmland, actually exhibit fewer impacts on ground-level ozone concentration in populated areas than the anthropogenic emissions in South Taiwan. This implies that fast economic growth ends up with a sustainability issue due to overwhelming anthropogenic emissions.",2004,0, 2551,Crystal Ball® and Design for Six Sigma,"In today's competitive market, businesses are adopting new practices like Design For Six Sigma (DFSS), a customer driven, structured methodology for faster-to-market, higher quality, and less costly new products and services. Monte Carlo simulation and stochastic optimization can help DFSS practitioners understand the variation inherent in a new technology, process, or product, and can be used to create and optimize potential designs. The benefits of understanding and controlling the sources of variability include reduced development costs, minimal defects, and sales driven through improved customer satisfaction. This tutorial uses Crystal Ball Professional Edition, a suite of easy-to-use Microsoft® Excel-based software, to demonstrate how stochastic simulation and optimization can be used in all five phases of DFSS to develop the design for a new compressor.",2004,0, 2552,Improving the performance of dispatching rules in semiconductor manufacturing by iterative simulation,"In this paper, we consider semiconductor manufacturing processes that can be characterized by a diverse product mix, heterogeneous parallel machines, sequence-dependent setup times, a mix of different process types, i.e. single-wafer vs. batch processes, and reentrant process flows. We use dispatching rules that require the estimation of waiting times of the jobs. Based on the lead time iteration concept of Vepsalainen and Morton (1988), we obtain good waiting time estimates by using exponential smoothing techniques. We describe a database-driven architecture that allows for an efficient implementation of the suggested approach.
We present results of computational experiments for reference models of semiconductor wafer fabrication facilities. The results demonstrate that the suggested approach leads to high quality solutions.",2004,0, 2553,An infinite server queueing approach for describing software reliability growth: unified modeling and estimation framework,"In general, the software reliability models based on the nonhomogeneous Poisson processes (NHPPs) are quite popular to assess quantitatively the software reliability and its related dependability measures. Nevertheless, it is not so easy to select the best model from a huge number of candidates in the software testing phase, because the predictive performance of software reliability models strongly depends on the fault-detection data. The asymptotic trend of software fault-detection data can be explained by two kinds of NHPP models; finite fault model and infinite fault model. In other words, one needs to make a hypothesis whether the software contains a finite or infinite number of faults, in selecting the software reliability model in advance. In this article, we present an approach to treat both finite and infinite fault models in a unified modeling framework. By introducing an infinite server queueing model to describe the software debugging behavior, we show that it can involve representative NHPP models with a finite and an infinite number of faults. Further, we provide two parameter estimation methods for the unified NHPP based software reliability models from both standpoints of Bayesian and nonBayesian statistics. Numerical examples with real fault-detection data are devoted to compare the infinite server queueing model with the existing one under the same probability circumstance.",2004,0, 2554,An exploratory study of groupware support for distributed software architecture evaluation process,"Software architecture evaluation is an effective means of addressing quality related issues quite early in the software development lifecycle. Scenario-based approaches to evaluate architecture usually involve a large number of stakeholders, who need to be collocated for evaluation sessions. Collocating a large number of stakeholders is an expensive and time-consuming exercise, which may prove to be a hurdle in the wide-spread adoption of architectural evaluation practices. Drawing upon the successful introduction of groupware applications to support geographically distributed teams in software inspection, and requirements engineering disciplines, we propose the concept of distributed architectural evaluation using Internet-based collaborative technologies. This paper illustrates the methodology of a pilot study to assess the viability of a larger experiment intended to investigate the feasibility of groupware support for distributed software architecture evaluation. In addition, the results of the pilot study provide some interesting findings on the viability of groupware-supported software architectural evaluation process.",2004,0, 2555,Performing high efficiency source code static analysis with intelligent extensions,"This paper presents an industry practice for highly efficient source code analysis to promote software quality. As a continuous work of previously reported source code analysis system, we researched and developed a few engineering-oriented intelligent extensions to implement more cost-effective extended code static analysis and engineering processes. 
These include an integrated empirical scan and filtering tool for highly accurate noise reduction, and a new code checking test tool to detect function call mismatch problems, which may lead to many severe software defects. We also extended the system with an automated defect filing and verification procedure. The results show that, for a huge code base of millions of lines, our intelligent extensions not only contribute to the completeness and effectiveness of static analysis, but also establish significant engineering productivity.",2004,0, 2556,Empirical evaluation of orthogonality of class mutation operators,"Mutation testing is a fault-based testing technique which provides strong quality assurance. Mutation testing has a very long history for the procedural programs at unit-level testing, but the research on mutation testing of object-oriented programs is still immature. Recently, class mutation operators have been proposed to detect object-oriented specific faults. However, no analysis has been conducted on the class mutation operators. In this paper, we evaluate the orthogonality of the class mutation operators through an experiment. The experimental results show the high possibility that each class mutation operator has fault-revealing power that is not achieved by other mutation operators, i.e., it is orthogonal. Also, the results show that the number of mutants from the class mutation operators is small so that the cost is not as high as for procedural programs.",2004,0, 2557,Systematic operational profile development for software components,"An operational profile is a quantification of the expected use of a system. Determining an operational profile for software is a crucial and difficult part of software reliability assessment in general and it can be even more difficult for software components. This paper presents a systematic method for deriving an operational profile for software components. The method uses both actual usage data and intended usage assumptions to derive a usage structure, usage distribution and characteristics of parameters (including relationships between parameters). A usage structure represents the flow and interaction of operation calls. Statecharts are used to model the usage structures. A usage distribution represents probabilities of the operations. The method is illustrated on two Java classes but can be applied to any software component that is accessed through an application program interface (API).",2004,0, 2558,Rotor cage fault diagnosis in induction motors based on spectral analysis of current Hilbert modulus,"The Hilbert transformation is an ideal phase shifting tool in data signal processing. By Hilbert transforming a signal, its conjugate is obtained. The Hilbert modulus is defined as the square of a signal and its conjugate. This work presents a method by which rotor faults of squirrel cage induction motors, such as broken rotor bars and eccentricity, can be diagnosed. The method is based on the spectral analysis of the stator current Hilbert Modulus of the induction motors. Theoretical analysis and experimental results demonstrate that it has the same rotor fault detecting ability as the extended Park's vector approach.
The vital advantage of the former is the smaller hardware and software spending compared with the existing ones.",2004,0, 2559,Semi-supervised learning for software quality estimation,"A software quality estimation model is often built using known software metrics and fault data obtained from program modules of previously developed releases or similar projects. Such a supervised learning approach to software quality estimation assumes that fault data is available for all the previously developed modules. Considering the various practical issues in software project development, fault data may not be available for all the software modules in the training data. More specifically, the available labeled training data is such that a supervised learning approach may not yield good software quality prediction. In contrast, a supervised classification scheme aided by unlabeled data, i.e., semisupervised learning, may yield better results. This work investigates semisupervised learning with the expectation maximization (EM) algorithm for the software quality classification problem. Case studies of software measurement data obtained from two NASA software projects, JM1 and KC2, are used in our empirical investigation. A small portion of the JM1 dataset is randomly extracted and used as the labeled data, while the remaining JM1 instances are used as unlabeled data. The performance of the semisupervised classification models built using the EM algorithm is evaluated by using the KC2 project as a test dataset. It is shown that the EM-based semisupervised learning scheme improves the predictive accuracy of the software quality classification models.",2004,0, 2560,Using fuzzy theory for effort estimation of object-oriented software,"Estimating software effort and costs is a very important activity that includes very uncertain elements. The concepts of the fuzzy set theory has been successfully used for extending metrics such as FP and reducing human influence in the estimation process. However, when we consider object-oriented technologies, other models, such as the use case model, are used to represent the specification in the early stages of development. New metrics based on this model were proposed and the application of the fuzzy set theory in this context is also very important. This work introduces the metric FUSP (fuzzy use case size points) that allows gradual classifications in the estimation by using fuzzy numbers. Results of a study case show some advantages and limitations of the proposed metric.",2004,0, 2561,Noise identification with the k-means algorithm,"The presence of noise in a measurement dataset can have a negative effect on the classification model built. More specifically, the noisy instances in the dataset can adversely affect the learnt hypothesis. Removal of noisy instances will improve the learnt hypothesis; thus, improving the classification accuracy of the model. A clustering-based noise detection approach using the k-means algorithm is presented. We present a new metric for measuring the potentiality (noise factor) of an instance being noisy. Based on the computed noise factor values of the instances, the clustering-based algorithm is then used to identify and eliminate p% of the instances in the dataset. These p% of instances are considered the most likely to be noisy among the instances in the dataset - the p% value is varied from 1% to 40%. 
The noise detection approach is investigated with respect to two case studies of software measurement data obtained from NASA software projects. The two datasets are characterized by the same thirteen software metrics and a class label that classifies the program modules as fault-prone and not fault-prone. It is shown that as more noisy instances are removed, classification accuracy of the C4.5 learner improves. This indicates that the removed instances are most likely noisy instances that contributed to poor classification accuracy.",2004,0, 2562,Quantifying the quality of object-oriented design: the factor-strategy model,"The quality of a design has a decisive impact on the quality of a software product; but due to the diversity and complexity of design properties (e.g., coupling, encapsulation), their assessment and correlation with external quality attributes (e.g., maintenance, portability) is hard. In contrast to traditional quality models that express the ""goodness"" of design in terms of a set of metrics, the novel Factor-Strategy model proposed by this work relates explicitly the quality of a design to its conformance with a set of essential principles, rules and heuristics. This model is based on a novel mechanism, called detection strategy, that raises the abstraction level in dealing with metrics, by allowing to formulate good-design rules and heuristics in a quantifiable manner, and to detect automatically deviations from these rules. This quality model provides a twofold advantage: (i) an easier construction and understanding of the model as quality is put in connection with design principles rather than ""raw numbers""; and (ii) a direct identification of the real causes of quality flaws. We have validated the approach through a comparative analysis involving two versions of an industrial software system.",2004,0, 2563,Failure analysis of open faults by using detecting/un-detecting information on tests,"Recently, manufacturing defects including opens in the interconnect layers have been increasing. Therefore, a failure analysis for open faults has become important in manufacturing. Moreover, the failure analysis for open faults under a BIST environment is demanded. Since the quality of the failure analysis depends on the resolution of locating the fault, we propose a method for locating a single open fault at a stem, based on only detecting/un-detecting information on tests. Our method deduces candidate faulty stems based on the number of detections for a single stuck-at fault at each fan-out branch, by performing single stuck-at fault simulation with both detecting and un-detecting tests. To improve the ability of locating the fault, the method reduces the candidate faulty stems based on the number of detections for multiple stuck-at faults at fanout branches of the candidate faulty stem, by performing multiple stuck-at fault simulation with detecting tests.",2004,0, 2564,Estimating Dependability of Parallel FFT Application using Fault Injection,"This paper discusses estimation of dependability of a parallel FFT application. The application uses the FFTW library. Fault susceptibility is assessed using software implemented fault injection. The fault injection campaign and the experiment results are presented. The response classes to injected faults are analyzed.
The accuracy of evaluated data is verified experimentally.",2004,0, 2565,Meter data management for the electricity market,"The quality of the meter data for billing and analysis is very important in the electricity market. The meter data is used primarily in billing & settlement for industrial customers and bulk trading partners who have direct access to data, and used secondarily in trading, transmission operations & planning, and load forecasting & scheduling, which don't have direct access to data. A software system called the metering data management (MDM) system has been developed using the web service technology to support the meter usage data collection, validation, estimation, versioning, and publishing at Bonneville Power Administration (BPA), a US Federal Power Marketing Agency. One of its key features is the validation and estimation of the meter usage data based on statistical models. The paper presents the infrastructure and implementation details of MDM. It also addresses a novel approach to validating the meter data and estimating missing values in the meter data. The key performance criteria of MDM are scalability, collaboration, and integration, in addition to good data acquisition and data persistence capabilities. Furthermore, use of the meter data with weather information for more accurate validation and estimation is also discussed.",2004,0, 2566,Wireless download agent for high-capacity wireless LAN based ubiquitous services,"We propose a wireless download agent which effectively controls the point at which a download starts for providing a comfortable mobile computing environment. A user can get the desired data with the wireless download agent while walking through a service area without stopping. We conducted simulations to evaluate its performance in terms of throughput, download period, and probability of successful download. Our results show that the proposed scheme very well suits the wireless download agent in high-speed wireless access systems with many users. Furthermore, we describe the use of the proposed scheme considering the randomness of the walking directions of users.",2004,0, 2567,Link characterization for system level CDMA performance evaluation in high mobility,"The evaluation of the performance of a cellular system using a simulation environment usually involves a link-to-system mapping based on a channel quality indication. The mapping is done using a set of link curves characterizing the link behavior, which are generated offline. In some cases, the static performance of a link is a sufficient indicator. But in high mobility, the block-fading assumption introduces inaccuracies in the link-to-system mapping. This paper proposes a dual-variable mapping technique that can be used in high-mobility conditions, using the mean and the variance of the SINR seen during a frame. Simulations show that, using this methodology, the mapping can be created independent of the velocity and the multipath profile of the channel. This mapping technique can also be used as a channel quality indication for link adaptation purposes.",2004,0, 2568,Case study of condition based health maintenance of large power transformer at Rihand substation using on-line FDD-EPT,"The constant monitoring of large, medium and small power transformers for purposes of assessing their health and operating condition while maximizing personnel resources and preserving capital is often a topic of spirited discussions at both national and international conferences.
Further, the possibility of transformers being out of service for extended periods of time due to conditions that, with the proper diagnostic monitoring equipment, could have been preemptively detected and diagnosed, thus preventing catastrophic equipment failure, is of prime importance in today's economic conditions. These operating conditions are becoming more serious in several locations around the world. A recent case study of PD monitoring done at the Rihand substation in India is discussed here in the sequel. Additional transformers tested in India for PGCIL (Ballabgarh) and NTPC (Noida, Delhi) in 1999 will be presented using the FDD-EPT system. Demonstrations of the FDD-EPT (fault diagnostic device for electrical power transformers) system on transformers for BHEL, MP and Tata Power in Mumbai also provided encouraging results. This will further illustrate the efficacy of this system.",2004,0, 2569,Improving novelty detection in short time series through RBF-DDA parameter adjustment,"Novelty detection in time series is an important problem with applications in different domains, such as machine failure detection, fraud detection and auditing. We have previously proposed a method for time series novelty detection based on classification of time series windows by RBF-DDA neural networks. The paper proposes a method to be used in conjunction with this time series novelty detection method, whose aim is to improve performance by adequately selecting the window size and the RBF-DDA parameter values. The method was evaluated on six real-world time series and the results obtained show that it greatly improves novelty detection performance.",2004,0, 2570,Robust quality management for differentiated imprecise data services,"Several applications, such as Web services and e-commerce, are operating in open environments where the workload characteristics, such as the load applied on the system and the worst-case execution times, are inaccurate or even not known in advance. This implies that transactions submitted to a real-time database cannot be subject to exact schedulability analysis given the lack of a priori knowledge of the workload. In this paper we propose an approach, based on feedback control, for managing the quality of service of real-time databases that provide imprecise and differentiated services, given inaccurate workload characteristics. For each service class, the database operator specifies the quality of service requirements by explicitly declaring the precision requirements of the data and the results of the transactions. The performance evaluation shows that our approach provides reliable quality of service even in the face of varying load and inaccurate execution time estimates.",2004,0, 2571,A hybrid genetic algorithm for tasks scheduling in heterogeneous computing systems,"Efficient application scheduling is critical for achieving high performance in heterogeneous computing systems (HCS). Because an application can be partitioned into a group of tasks and represented as a directed acyclic graph (DAG), the problem can be stated as finding a schedule for a DAG to be executed in a HCS so that the schedule length can be minimized. The task scheduling problem is NP-hard in general, except in a few simplified situations. In order to obtain optimal or suboptimal solutions, a large number of scheduling heuristics have been presented in the literature. The genetic algorithm (GA), as a powerful tool for achieving global optima, has been successfully used in this field.
This work presents a new hybrid genetic algorithm to solve the scheduling problem in HCS. It uses a direct method to encode a solution into a chromosome. A topological sort of the DAG is used to repair the offspring in order to avoid yielding illegal or infeasible solutions, and it also guarantees that all feasible solutions can be reached with some probability. In order to remedy the GA's weakness in fine-tuning, this paper uses a greedy strategy to improve the fitness of the individuals in the crossover operator, based on Lamarckian theory of evolution. The simulation results, compared with a typical genetic algorithm and a typical list heuristic, both from the literature, show that this algorithm produces better results in terms of both quality of solution and convergence speed.",2004,0, 2572,Applying SPC to autonomic computing,"Statistical process control (SPC) is proposed as the method to frame autonomic computing systems. SPC follows a data-driven approach to characterize, evaluate, predict, and improve the system services. Perspectives that are central to process measurement, including central tendency, variation, stability, and capability, are outlined. The principles of SPC hold that by establishing and sustaining stable levels of variability, processes will yield predictable results. SPC is explored to meet and support individual autonomic computing elements' requirements. One timetabling example illustrates how SPC discovers and incorporates domain-specific knowledge, thus stabilizing and optimizing the application service quality. The example represents a reasonable application of process control that has been demonstrated to be successful from an engineering point of view.",2004,0, 2573,Development of on-line vibration condition monitoring system of hydro generators,"Mechanical vibration information is critical to diagnosing the health of a generator. Existing vibration monitoring systems have poor resistance to the strong electromagnetic interference in the field and are difficult to extend. In order to solve these problems, an on-line monitoring system based on a LonWorks control network has been developed. In this paper, the structure of the hardware and software and the functions and characteristics of the system are described in detail. The analysis result of a vibration signal has proved that this monitoring system can detect generator faults efficiently. The system has been employed to monitor the vibration of hydro generators operating in a water power plant in Hubei province, China.",2004,0, 2574,Unit testing in practice,"Unit testing is a technique that receives a lot of criticism in terms of the amount of time that it is perceived to take and in how much it costs to perform. However, it is also the most effective means to test individual software components for boundary value behavior and ensure that all code has been exercised adequately (e.g. statement, branch or MC/DC coverage). In this paper we examine the available data from three safety-related software projects undertaken by Pi Technology that have made use of unit testing. Additionally we discuss the different issues that have been found when applying the technique at different phases of the development and using different methods to generate those tests.
In particular we provide an argument that the perceived costs of unit testing may be exaggerated and that the likely benefits in terms of defect detection are actually quite high in relation to those costs.",2004,0, 2575,A generic method for statistical testing,"This paper addresses the problem of selecting finite test sets and automating this selection. Among these methods, some are deterministic and some are statistical. The kind of statistical testing we consider has been inspired by the work of Thevenod-Fosse and Waeselynck. There, the choice of the distribution on the input domain is guided by the structure of the program or the form of its specification. In the present paper, we describe a new generic method for performing statistical testing according to any given graphical description of the behavior of the system under test. This method can be fully automated. Its main originality is that it exploits recent results and tools in combinatorics, precisely in the area of random generation of combinatorial structures. Uniform random generation routines are used for drawing paths from the set of execution paths or traces of the system under test. Then a constraint resolution step is performed, aiming to design a set of test data that activate the generated paths. This approach applies to a number of classical coverage criteria. Moreover, we show how linear programming techniques may help to improve the quality of test, i.e. the probabilities for the elements to be covered by the test process. The paper presents the method in its generality. Then, in the last section, experimental results on applying it to structural statistical software testing are reported.",2004,0, 2576,Are found defects an indicator of software correctness? An investigation in a controlled case study,"In quality assurance programs, we want indicators of software quality, especially software correctness. The number of found defects during inspection and testing are often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since the remaining defects is what impacts negatively on software correctness, not the found ones. In order to investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study is launched. 57 sets of 10 different programs from the PSP course are assessed using acceptance test suites for each program. In the analysis, the number of defects found during the acceptance test are compared to the number of defects found during development, code size, share of development time spent on testing etc. It is concluded from a correlation analysis that 1) fewer defects remain in larger programs 2) more defects remain when larger share of development effort is spent on testing, and 3) no correlation exist between found defects and correctness. We interpret these observations as 1) the smaller programs do not fulfill the expected requirements 2) that large share effort spent of testing indicates a ""hacker"" approach to software development, and 3) more research is needed to elaborate this issue.",2004,0, 2577,Empirical studies of test case prioritization in a JUnit testing environment,"Test case prioritization provides a way to run test cases with the highest priority earliest. 
Numerous empirical studies have shown that prioritization can improve a test suite's rate of fault detection, but the extent to which these results generalize is an open question because the studies have all focused on a single procedural language, C, and a few specific types of test suites, in particular, Java and the JUnit testing framework are being used extensively in practice, and the effectiveness of prioritization techniques on Java systems tested under JUnit has not been investigated. We have therefore designed and performed a controlled experiment examining whether test case prioritization can be effective on Java programs tested under JUnit, and comparing the results to those achieved in earlier studies. Our analyses show that test case prioritization can significantly improve the rate of fault detection of JUnit test suites, but also reveal differences with respect to previous studies that can be related to the language and testing paradigm.",2004,0, 2578,Coverage metrics for Continuous Function Charts,"Continuous Function Charts are a diagrammatical language for the specification of mixed discrete-continuous embedded systems, similar to the languages of Matlab/Simulink, and often used in the domain of transportation systems. Both control and data flows are explicitly specified when atomic units of computation are composed. The obvious way to assess the quality of integration test suites is to compute known coverage metrics for the generated code. This production code does not exhibit those structures that would make it amenable to ""relevant"" coverage measurements. We define a translation scheme that results in structures relevant for such measurements, apply coverage criteria for both control and dataflows at the level of composition of atomic computational units, and argue for their usefulness on the grounds of detected errors.",2004,0, 2579,Multiple profile evaluation using a single test suite in random testing,"Using an integral formulation of random testing for regular continuous input, we present an approximate solution that enables highly accurate estimations of the results for arbitrary operational profiles, using strategically chosen input in a single test suite. The current approach, building on the Lagrange interpolation and Gaussian integration theorems, thus solves the longstanding issue of the necessity of unique test suites for fundamentally different operational profiles in random testing. Empirical comparisons are furthermore performed for a total of 27 profiles on five seeded faults in a numerical routine and a straightforward modulus fault. Using a high enough expansion in the approximation, the resulting failure frequencies and variances became statistically identical with the ""ordinary"" evaluations in all cases. It thus becomes possible to eliminate any extra test cases and still keep full accuracy when changing between operational profiles.",2004,0, 2580,From test count to code coverage using the Lognormal Failure Rate,"When testing software, both effort and delay costs are related to the number of tests developed and executed. However the benefits of testing are related to coverage achieved and defects discovered. We establish a novel relationship between test costs and the benefits, specifically between the quantity of testing and test coverage, based on the Lognormal Failure Rate model. We perform a detailed study of how code coverage increases as a function of number of test cases executed against the 29 files of an application termed SHARPE. 
Distinct, known coverage patterns are available for 735 tests against this application. By simulating execution of those tests in randomized sequences we determined the average empirical coverage as a function of number of test cases executed for SHARPE as a whole and for each of its individual files. Prior research suggests the branching nature of software causes code coverage to grow as the Laplace transform of the lognormal. We use the empirical SHARPE coverage data to validate the lognormal hypothesis and the derivation of the coverage growth model. The SHARPE data provides initial insights into how the parameters of the lognormal are related to program file characteristics, how the properties of the files combine into properties of the whole, and how data quantity and quality affects parameter estimation.",2004,0, 2581,Empirical study of session-based workload and reliability for Web servers,"The growing availability of Internet access has led to significant increase in the use of World Wide Web. If we are to design dependable Web-based systems that deal effectively with the increasing number of clients and highly variable workload, it is important to be able to describe the Web workload and errors accurately. In this paper we focus on the detailed empirical analysis of the session-based workload and reliability based on the data extracted from actual Web logs often Web servers. First, we address the data collection process and describe the methods for extraction of workload and error data from Web log files. Then, we introduce and analyze several intra-session and inter-session metrics that collectively describe Web workload in terms of user sessions. Furthermore, we analyze Web error characteristics and estimate the request-based and session-based reliability of Web servers. Finally, we identify the invariants of the Web workload and reliability that apply through all data sets considered. The results presented in this paper show that session-based workload and reliability are better indicators of the users perception of the Web quality than the request-based metrics and provide more useful measures for tuning and maintaining of the Web servers.",2004,0, 2582,Robust prediction of fault-proneness by random forests,"Accurate prediction of fault prone modules (a module is equivalent to a C function or a C+ + method) in software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources to problem areas in the system under development. This paper presents a novel methodology for predicting fault prone modules, based on random forests. Random forests are an extension of decision tree learning. Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. Classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. 
Further, the advantage in classification accuracy of random forests over other methods is more significant in larger data sets.",2004,0, 2583,Image fusion for a digital camera application based on wavelet domain hidden Markov models,"The traditional image fusion for a digital camera application may not be satisfactory for classifying pixels in the source image by statistical techniques. In this paper, we present a technique based on wavelet domain hidden Markov models (HMMs) and maximum-likelihood estimation. The method presented here consists of deciding the quality of pixels in source images directly from the statistical techniques to overcome the shift variance of the inverse wavelet transform. We have studied several possibilities, including all energy methods that are utilized in standard image fusion for digital camera applications, and discuss the difference from our new method. The new framework uses two trained HMMs to decide if the wavelet coefficients of source images are in focus or out of focus, and then judges the quality of pixels in the source images directly to overcome the shift variance of the inverse wavelet transform. The two trained HMMs are obtained separately from an in-focus image and an out-of-focus image using the EM algorithm. We used the method to merge two images which contain two clocks with different focus, and obtained the best fusion results in preserving edge information and avoiding shift variance.",2004,0, 2584,Predicting class testability using object-oriented metrics,"We investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated by means of two case studies of large Java systems for which JUnit test cases exist. The goal of this work is to define and evaluate a set of metrics that can be used to assess the testability of the classes of a Java system.",2004,0, 2585,What do you mean my board test stinks?,"Board level testing accounts for the functionality and performance of all the devices placed on the board, and also accounts for how the devices are assembled onto the board and how the devices interact with one another. Three areas of board and system level testing are structural testing, functional testing and parametric testing. Structural testing focuses on the assembly process given above. Parametric tests have well-defined metrics like bit error rate and jitter. Functional testing is targeted far less at specific defects.",2004,0, 2586,Model gain scheduling and reporting for ethylene plant on-line optimizer,"An ethylene plant is one of the largest chemical plants. As such, there are frequent changes in hydrocarbon feed mix, individual feed quality and demand for its olefin products. This makes it difficult to operate an ethylene plant in an optimal way. Showa Denko K.K. (SDK) has applied on-line optimization software packages of Honeywell Process Solutions to its ethylene plant. The software packages consist of large-scale model predictive control (MPC) controllers and optimization systems. As a function of the software, furnace yield model gains of MPC are scheduled by using a rigorous first-principles model to cope with the strong non-linearity of the reactions in the furnace. In addition, the non-linearity of consumption of fuel and steam is important, especially when user demand or storage tank room is limited and production rates are kept constant.
In that case, the direction of optimization of fuel and steam users could be wrong if non-linearity is ignored. The software, however, cannot cope with that non-linearity. Therefore, the achievable performance of the original software is limited. To cope with non-linearity, SDK developed a function of gain scheduling for utility models. As a result, both throughput and efficiency were improved more than expected. Since the on-line optimizer copes with many constraints and checks benefits, it is difficult for operators to intuitively understand the optimization results. Therefore, SDK also developed user interfaces and daily reporting systems for the on-line optimizer. The user interfaces indicate the operating condition and the bottlenecks in the ethylene plant every minute. Thus, operators can recognize the operation determined by the on-line optimizer. In daily reports, process trends of bottleneck points and manipulated variables of the on-line optimizer are reported. The daily reports are automatically generated and sent to members in associated sections by e-mail. With the report, the operation of the ethylene plant can be monitored and fine-tuning of the on-line optimizer can be performed.",2004,0, 2587,Robust sensor fault reconstruction for an inverted pendulum using right eigenstructure assignment,"This work presents a robust sensor fault reconstruction scheme applied to an inverted pendulum. The scheme utilized a linear observer, using right eigenstructure assignment in order to reduce the effect of nonlinearities and disturbances on the fault reconstruction. The design method is adapted and modified from existing work in the literature. A suitable interface between the pendulum and a computer enabled the application. Very good results were obtained.",2004,0, 2588,Assessing Fault Sensitivity in MPI Applications,"Today, clusters built from commodity PCs dominate high-performance computing, with systems containing thousands of processors now being deployed. As node counts for multi-teraflop systems grow to thousands and with proposed petaflop systems likely to contain tens of thousands of nodes, the standard assumption that system hardware and software are fully reliable becomes much less credible. Concomitantly, understanding application sensitivity to system failures is critical to establishing confidence in the outputs of large-scale applications. Using software fault injection, we simulated single bit memory errors, register file upsets and MPI message payload corruption and measured the behavioral responses for a suite of MPI applications. These experiments showed that most applications are very sensitive to even single errors. Perhaps most worrisome, the errors were often undetected, yielding erroneous output with no user indicators. Encouragingly, even minimal internal application error checking and program assertions can detect some of the faults we injected.",2004,0, 2589,COSMAD: a Scilab toolbox for output-only modal analysis and diagnosis of vibrating structures,"Modal analysis of vibrating structures is a usual technique for design and monitoring in many industrial sectors: car manufacturing, aerospace, civil structures. We present COSMAD, a software environment for in-operation situations without any measured or controlled input. COSMAD is an identification and detection Scilab toolbox. It covers modal identification with visual inspection of the results via a GUI or fully automated modal identification and monitoring.
The toolbox offers a complete package for signal visualization, filtering and down-sampling, the tracking of frequency and damping over time, mode-shape visualization, MAC computation, etc. The detection part is also a comprehensive toolbox for modal diagnosis and physical localization of faulty components",2004,0, 2590,Admission control in deadline-based network resource management,"In our deadline-based network resource management framework, each application data unit (ADU) is characterized by a (size, deadline) pair, which can be used to convey the resource and QoS requirements of the ADU. Specifically, the ADU's bandwidth requirement can be implicitly estimated by the ratio: size / (deadline - current time), and the time at which the ADU should be delivered is specified by the deadline. The ADU deadline is mapped onto deadlines at the network layer, which are carried by packets and used by routers for channel scheduling. In an earlier work, we have shown that deadline-based channel scheduling achieves good performance in terms of the percentage of ADUs that are delivered on-time. However, when a network is under heavy load, congestion may occur, and deadline-based scheduling alone may not be sufficient to prevent performance degradation. The level of congestion can be reduced by admission control. In this paper, two application-layer admission control algorithms are developed. The performance of these two algorithms is evaluated by simulation.",2004,0, 2591,Regression benchmarking with simple middleware benchmarks,"The paper introduces the concept of regression benchmarking as a variant of regression testing focused at detecting performance regressions. Applying the regression benchmarking in the area of middleware development, the paper explains how regression benchmarking differs from middleware benchmarking in general. On a real-world example of TAO, the paper shows why the existing benchmarks do not give results sufficient for regression benchmarking, and proposes techniques for detecting performance regressions using simple benchmarks.",2004,0, 2592,Applications of fuzzy-logic-wavelet-based techniques for transformers inrush currents identification and power systems faults classification,"The advent of wavelet transforms (WTs) and fuzzy-inference mechanisms (FIMs) with the ability of the first to focus on system transients using short data windows and of the second to map complex and nonlinear power system configurations provide an excellent tool for high speed digital relaying. This work presents a new approach to real-time fault classification in power transmission systems, and identification of power transformers magnetising inrush currents using fuzzy-logic-based multicriteria approach Omar A.S. Youssef [2004, 2003] with a wavelet-based preprocessor stage Omar A.S. Youssef [2003, 2001]. Three inputs, which are functions of the three line currents, are utilised to detect fault types such as LG, LL, LLG as well as magnetising inrush currents. The technique is based on utilising the low-frequency components generated during fault conditions on the power system and/or magnetising inrush currents. These components are extracted using an online wavelet-based preprocessor stage with data window of 16 samples (based on 1.0 kHz sampling rate and 50 Hz power frequency). 
Generated data from the simulation of an 330 Δ/33Y kV, step-down transformer connected to a 330 kV model power system using EMTP software were used by the MATLAB program to test the performance of the technique as to its speed of response, computational burden and reliability. Results are shown and they indicate that this approach can be used as an effective tool for high-speed digital relaying, and that computational burden is much simpler than the recently postulated fault classification.",2004,0, 2593,Applications of fuzzy inference mechanisms to power system relaying,"Most transmission line protective schemes are based on deterministic computations on a well defined model of the system to be protected. This results in difficulty because of the complexity of the system model, the lack of knowledge of its parameters, the great number of information to be processed, and the difficulty in taking into consideration any system variation as the rules are fixed. The application of fuzzy logic for exploring complex, nonlinear systems, diagnosis systems and other expert systems, particularly when there is no simple mathematical model to be performed, provides a very powerful and attractive solution to classification problems. In this paper, a feasibility study on the application of different fuzzy reasoning mechanisms to power system relaying algorithms is conducted. Those mechanisms are namely, Mamdani's mechanism, Larsen's mechanism, Takagi-Sugeno's mechanism, and Tsukamoto mechanism. A comparative analysis on the application of these fuzzy inference mechanisms to a novel fault detection and phase selection technique on EHV transmission lines is reported. The proposed scheme utilises only the phase angle between two line currents for the decision making part of the scheme. A sample three-phase power system was simulated using the EMTP software. An online wavelet-based preprocessor stage is used with data window of 10 samples (based on 4.5 kHz sampling rate and 50 Hz power frequency). The performance of the proposed model was extensively tested in each case of fuzzy inference mechanism using the MATLAB software. The advantages and disadvantages of each mechanism are reported and compared together. Some of the test results are included in this paper.",2004,0, 2594,Medical software control quality using the 3D Mojette projector,"The goal of this paper is to provide a tool that allows to assess a set of 2D projection data from a 3D object which can be either real or synthetic data. However, the generated 2D projection set is not sampled onto a classic orthogonal grid but uses a regular grid depending on the discrete angle of projection and the 3D orthogonal grid. This allows a representation of the set of projections that can easily be described in spline spaces. The subsequent projections set is used after an interpolation scheme to compare (in the projection space) the adequation between the original dataset and the obtained reconstruction. These measures are performed from a 3D multiresolution stack in the case of 3D PET projector. Its direct use for the digital radiography reprojection control quality assessment is finally exposed.",2004,0, 2595,Image-quality assessment in optical tomography,"Modern medical imaging systems often rely on complicated hardware and sophisticated algorithms to produce useful digital images. 
It is essential that the imaging hardware and any reconstruction algorithms used are optimized, enabling radiologists to make the best decisions and quantify a patient's health status. Optimization of the hardware often entails determining the physical design of the system, such as the locations of detectors in optical tomography or the design of the collimator in SPECT systems. For software or reconstruction algorithm optimization one is often determining the values of regularization parameters or the number of iterations in an iterative algorithm. In this paper, we present an overview of many approaches to measuring task performance as a means to optimize imaging systems and algorithms. Much of the work in this area has taken place in the areas of nuclear-medicine and X-ray imaging. The purpose of this paper is to present some of the task-based measures of image quality that are directly applicable to optical tomography.",2004,0, 2596,A real-time network simulation application for multimedia over IP,"This paper details a secure voice over IP (SVoIP) development tool, the network simulation application (Netsim), which provides real-time network performance implementation of quality of service (QoS) statistics in live real-time data transmission over packet networks. This application is used to implement QoS statistics including packet loss and inter-arrival delay jitter. Netsim is written in Visual C++ using MFC for MS Windows environments. The program acts as a transparent gateway for a SVoIP server/client pair connected via IP networks. The user specifies QoS parameters such as mean delay, standard deviation of delay, unconditional loss probability, conditional loss probability, etc. Netsim initiates and accepts connection, controls packet flow, records all packet sending / arrival times, generates a data log, and performs statistical calculations.",2004,0, 2597,The use of impedance measurement as an effective method of validating the integrity of VRLA battery production,The response of batteries to an injection of AC current to give an indication of battery state-of-health has been well established and extensively reported over a number years but it has been used largely as a means of assessing the condition of batteries in service. In this paper the use of impedance measurement as a quality assurance procedure during manufacture will be described. There are a number of commercially available meters that are used for monitoring in field operations but they have not been developed for use in a manufacturing environment. After extensive laboratory testing a method specifically designed for impedance measurement system at the end of manufacturing lines in order to assure higher product integrity was devised and validated. A special testing station was designed for this purpose and includes battery conditioning prior to the test and sophisticated software driven data analysis in order to increase the effectiveness and reliability of the measurement. The paper reports the analysis of the data collected over two years of monitoring of the entire production with the impedance testing station at the end of the production line. Data comparison with earlier production shows an increase in the number of batteries scrapped internally and correspondingly a reduction in the number of defective products reaching distribution centres and also the field. 
The accumulated experience has also been very helpful in getting better information about the effect of the various parameters that affect the measured impedance value and this will assist in improving the reliability of impedance measurements in field service.,2004,0, 2598,Consolidating software tools for DNA microarray design and manufacturing,"As the human genome project progresses and some microbial and eukaryotic genomes are recognized, a novel technology, DNA microarray (also called gene chip, biochip, gene microarray, and DNA chip) technology, has attracted increasing number of biologists, bioengineers and computer scientists recently. This technology promises to monitor the whole genome at once, so that researchers can study the whole genome on the global level and have a better picture of the expressions among millions of genes simultaneously. Today, it is widely used in many fields - disease diagnosis, gene classification, gene regulatory network, and drug discovery. We present a concatenated software solution for the entire DNA array flow exploring all steps of a consolidated software tool. The proposed software tool has been tested on Herpes B virus as well as simulated data. Our experiments show that the genomic data follow the pattern predicted by simulated data although the number of border conflicts (quality of the DNA array design) is several times smaller than for simulated data. We also report a trade-off between the number of border conflicts and the running time for several proposed algorithmic techniques employed in the physical design of DNA arrays.",2004,0, 2599,Parametric imaging in dynamic susceptibility contrast MRI-phantom and in vivo studies,"Possibility of quantitative perfusion imaging with DSC MRI is still under discussion. In this work, the quantitative related parameters are analyzed and DSC-MRI limitations are discussed. It includes investigation of measurement procedures/conditions as well as parametric image synthesis methodology. The set of phantoms was constructed and used to inspect the role of Gd-DTPA concentration estimation by EPI measurements, the influence of partial volume effect on concentration estimation, the role of a phantom and its pipes orientation, etc. Additionally, parametric image synthesis methodology was investigated by analysis of influence of a bolus dispersion, bolus arrival time, and other signal parameters on an image quality. As a conclusion testing software package is proposed.",2004,0, 2600,Dynamic load balancing performance in cellular networks with multiple traffic types,"Several multimedia applications are being introduced to cellular networks. Since the quality of service (QoS) requirements such as bandwidth for different services might be different, the analysis of conventional multimedia cellular networks has been done using multi-dimensional Markov-chains in previous works. In these analyses, it is assumed that a call request will be blocked if the number of available channels is not sufficient to support the service. However, it has been shown in previous works that the call blocking rate can be reduced significantly, if a dynamic load balancing scheme is employed. In this paper, we develop an analytical framework for the analysis of dynamic load balancing schemes with multiple traffic types. To illustrate the impact of dynamic load balancing on the performance, we study the integrated cellular and ad hoc relay (iCAR) system. 
Our results show that with a proper amount of load balancing capability (i.e., load balancing channels), the call blocking probability for all traffic types can be reduced significantly.",2004,0, 2601,Modeling and optimization of heterogeneous wireless LAN,"A sophisticated approach to automated prediction for the optimal layout and quantity of WLAN access points to achieve the desired network parameters is introduced. The algorithms were implemented in Web-based software for RF planning of a complete wireless local area system with multiple radio frequency technologies (IEEE 802.11a, 802.11b/g, and Bluetooth) based on a mesh topology. The implemented optimization method based on evolution strategies offers easy specification of various network aspects and QoS requirements (hotspots of various technologies, proffered areas near Ethernet ports and power outlets, throughput, number of concurrent users, etc.). The first few months of operation and usage in many real-world scenarios have shown good performance and appropriate accuracy. The application is also able to model site-specific coverage and the capacity of the wireless network using semi-empirical propagation models.",2004,0, 2602,Reliable and valid measures of threat detection performance in X-ray screening,"Over the last decades, airport security technology has evolved remarkably. This is especially evident when state-of-the-art detection systems are concerned. However, such systems are only as effective as the personnel who operate them. Reliable and valid measures of screener detection performance are important for risk analysis, screener certification and competency assessment, as well as for measuring quality performance and effectiveness of training systems. In many of these applications the hit rate is used in order to measure detection performance. However, measures based on signal detection theory have gained popularity in recent years, for example in the analysis of data from threat image projection (TIP) or computer based training (CBT) systems. In this study, computer-based tests were used to measure detection performance for improvised explosive devices (IEDs). These tests were conducted before and after training with an individually adaptive CBT system. The following measures were calculated: pHit, d', Δm, Az, A' p(c)max. All measures correlated well, but ROC curve analysis suggests that ""nonparametric"" measures are more valid to measure detection performance for IEDs. More specifically, we found systematic deviations in the ROC curves that are consistent with two-state low threshold theory of R.D. Luce (1963). These results have to be further studied and the question rises if similar results could be obtained for other X-ray screening data. In any case, it is recommended to use A' in addition to d' in practical applications such as certification, threat image projection and CBT rather than the hit rate alone.",2004,0, 2603,Upward looking ice profiler sonar instruments for ice thickness and topography measurements,"Scientific and engineering studies in polar and marginal ice zones require detailed information on sea ice thickness and topography. Until recently, vertical ice dimension data have been largely inferred from aerial and satellite remote-sensing sensors. The capabilities of these sensors are still very limited for establishing accurate ice thicknesses and do not address details of ice topography. 
Alternative under-ice measurement methodologies continue to be major sources of accurate sea ice thickness and topography data for basic ice-covered ocean studies and, increasingly, for addressing important navigation, offshore structure design/safety, and climate change issues. Upward-looking sonar (ULS) methods characteristically provide under-ice topography data with high horizontal and vertical spatial resolution. Originally, the great bulk of data of this type was acquired from ULS sensors mounted on polar-traversing submarines during the cold war era. Unfortunately, much of the collected information was, and remains, hard to access. Consequently, the development of sea-floor based moored upward looking sonar (ULS) instrumentation, or ice profilers, over the past decade has begun to yield large, high quality, databases on ice undersurface topography and ice draft/thickness for scientific, engineering and operational users. Recent applications of such data include regional oceanographic studies, force-on-structure analyses, real-time ice jam detection, and tactical AUV operations. Over 100 deployments of moored and AUV-mounted ice profiler sonars, associated with an overall data recovery rate of 95%, are briefly reviewed. Prospective new applications of the technology will be presented and related to likely directions of future developments in profiler hardware and software.",2004,0, 2604,Teaming assessment: is there a connection between process and product?,"It is reasonable to suspect that team process influences the way students work, the quality of their learning and the excellence of their product. This study addresses the relations between team process variables on the one hand, and behaviors and outcomes, on the other. We measured teaming skill, project behavior and performance, and project product grades. We found that knowledge of team process predicts team behavior, but that knowledge alone does not predict performance on the project. Second, both effort and team skills, as assessed by peers, were related to performance. Third, team skills did not correlate with the students' effort. This pattern of results suggests that instructors should address issues of teaming and of effort separately. It also suggests that peer ratings of teammates tap aspects of team behavior relevant to project performance, whereas declarative knowledge of team process does not.",2004,0, 2605,Estimating user-perceived Web QoS by means of passive measurement,"To enforce SLA management, SLA metrics and quality assessment method for network services or application services are required. In this paper two metrics for Web application service intrinsic performance, average round-trip time and delivery speed are defined taking both implementation mechanism of Web applications and user access behavior into account. In three cases, Web traffic traces for a specific Web site were recorded at client-side to calculate above two metrics; meanwhile the user-perceived service qualities of the Web site were evaluated subjectively. Linear regress analysis of the observation data shows that user-perceived quality of Web application service can be objectively estimated by a linear regression function that uses the two metrics as independent variables.",2004,0, 2606,Automated comprehensive assessment and visualization of voltage sag performance,"This paper describes modular software for the automated assessment and visualization of voltage sag performance. 
The software allows in-depth analysis of voltage sag performance of the individual buses, the performance of the entire network at different voltage levels and inside the industry facility. The prediction and characterization of voltage sag, identifying the area of vulnerability and the area affected by the fault, and the propagation of voltage sags can be done automatically taking into account different fault statistics for symmetrical and asymmetrical faults, and different fault distributions. The module also considers the protection system and effects of its failure on the duration of voltage sags. The software capabilities are demonstrated on a generic distribution network and the results of the module are presented in the graphical and tabular form using specially developed graphical user interface (GUI).",2004,0, 2607,Perceptron-Based Branch Confidence Estimation,"Pipeline gating has been proposed for reducing wasted speculative execution due to branch mispredictions. As processors become deeper or wider, pipeline gating becomes more important because the amount of wasted speculative execution increases. The quality of pipeline gating relies heavily on the branch confidence estimator used. Not much work has been done on branch confidence estimators since the initial work [6]. We show the accuracy and coverage characteristics of the initial proposals do not sufficiently reduce mis-speculative execution on future deep pipeline processors. In this paper, we present a new, perceptron-based, branch confidence estimator, which is twice as accurate as the current best-known method and achieves reasonable mispredicted branch coverage. Further, the output of our predictor is multi-valued, which enables us to classify branches further as ""strongly low confident"" and ""weakly low confident"". We reverse the predictions of ""strongly low confident"" branches and apply pipeline gating to the ""weakly low confident"" branches. This combination of pipeline gating and branch reversal provides a spectrum of interesting design options ranging from significantly reducing total execution for only a small performance loss, to lower but still significant reductions in total execution, without any performance loss.",2004,0, 2608,SUMMARY: efficiently summarizing transactions for clustering,"Frequent itemset mining was initially proposed and has been studied extensively in the context of association rule mining. In recent years, several studies have also extended its application to the transaction (or document) classification and clustering. However, most of the frequent-itemset based clustering algorithms need to first mine a large intermediate set of frequent itemsets in order to identify a subset of the most promising ones that can be used for clustering. In this paper, we study how to directly find a subset of high quality frequent itemsets that can be used as a concise summary of the transaction database and to cluster the categorical data. By exploring some properties of the subset of itemsets that we are interested in, we proposed several search space pruning methods and designed an efficient algorithm called SUMMARY. 
Our empirical results have shown that SUMMARY runs very fast even when the minimum support is extremely low and scales very well with respect to the database size, and surprisingly, as a pure frequent itemset mining algorithm, it is very effective in clustering the categorical data and summarizing the dense transaction databases.",2004,0, 2609,GXP: An Interactive Shell for the Grid Environment,"We describe GXP, a shell for distributed multi-cluster environments. With GXP, users can quickly submit a command to many nodes simultaneously (approximately 600 milliseconds on over 300 nodes spread across five local-area networks). It therefore brings an interactive and instantaneous response to many cluster/network operations, such as trouble diagnosis, parallel program invocation, installation and deployment, testing and debugging, monitoring, and dead process cleanup. It features (1) a very fast parallel (simultaneous) command submission, (2) parallel pipes (pipes between the local command and all parallel commands), and (3) a flexible and efficient method to interactively select a subset of nodes to execute subsequent commands on. It is very easy to start using GXP, because it is designed not to require cumbersome per-node setup and installation and to depend only on a very small number of pre-installed tools and nothing else. We describe how GXP achieves these features and demonstrate through examples how they make many otherwise boring and error-prone tasks simple, efficient, and fun",2004,0, 2610,Fast multiple reference frame selection method for motion estimation in JVT/H.264,"The three main reasons why the new H.264 (MPEG-4 AVC) video coding standard has significantly better performance than the other standards are the adoption of variable block sizes, multiple reference frames, and the consideration of rate distortion optimization within the codec. However, these features incur a considerable increase in encoder complexity. As for the multiple reference frames motion estimation, the increased computation is in proportion to the number of searched reference frames. In this paper, a fast multi-frame selection method is proposed for H.264 video coding. The proposed scheme can efficiently determine the best reference frame from the allowed five reference frames. Simulation results show that the speed of the proposed method is over two times faster than that of the original scheme adopted in the JVT reference software JM73 while keeping similar video quality and bit-rate.",2004,0, 2611,Incorporating imperfect debugging into software fault processes,"For the traditional SRGMs, it is assumed that a detected fault is immediately removed and is perfectly repaired with no new faults being introduced. In reality, it is impossible to remove all faults from the fault correction process and have a fault-free effect on the software development environment. In order to relax this perfect debugging assumption, we introduce the possibility of an imperfect debugging phenomenon. Furthermore, most of the traditional SRGMs have focused on the failure detection process. Consideration of the fault correction process in the existing models is limited. However, to achieve the desired level of software quality, it is very important to apply powerful technologies for removing the errors in the fault correction process. Therefore, we divide these processes into two different nonhomogeneous Poisson processes (NHPPs).
Moreover, these models are considered to be more practical for depicting the fault-removal phenomenon in software development.",2004,0, 2612,Multimedia coding using adaptive regions of interest,"The key attributes of neural processing essential to adaptive multimedia processing are presented. The objective is to show why neural networks are a core technology for efficient representation of image information. It is demonstrated how neural network technology gives a solution for extracting a segmentation mask in adaptive region-of-interest (ROI) video coding. The algorithm uses a two-layer neural network architecture that classifies video frames into ROI and non-ROI areas, also being able to adapt its performance automatically to scene changes. The algorithm is incorporated in motion-compensated discrete cosine transform (MC-DCT) based coding schemes, optimally allocating more bits to ROI than to non-ROI areas, achieving better image quality, as well as signal-to-noise ratio improvements compared to standard MPEG MC-DCT encoders.",2004,0, 2613,Fault detection in model predictive controller,"Real-time monitoring and maintenance of model predictive controllers (MPCs) is becoming an important issue with their wide implementation in industry. In this paper, a measure is proposed to detect faults in MPCs by comparing the performance of the actual controller with the performance of the ideal controller. The ideal controller is derived from dynamic matrix control (DMC) in an ideal working situation and treated as a measurement benchmark. A detection index based on the comparison is proposed to detect the state change of the target controller. This measure is illustrated through an implementation on a water tank process.",2004,0, 2614,Implementation of a sensor fault reconstruction scheme on an inverted pendulum,"This paper presents a robust sensor fault reconstruction scheme, using an unknown input observer, applied to an inverted pendulum. The scheme is adapted from existing work in the literature. A suitable interface between the pendulum and a computer enabled the application. Very good results were obtained.",2004,0, 2615,Testability analysis of reactive software,"This paper is about testability analysis for reactive software. We describe an application of the SATAN method, which allows testability of data-flow designs to be measured, to analyze testability of the source code of reactive software, such as avionics software. We first propose the transformation of the source code generated from data-flow designs into the static single assignment (SSA) form; then we describe the algorithm to automatically translate the SSA form into a testability model. Thus, analyzing the testability model can allow the detection of the software parts which induce a testability weakness.",2004,0, 2616,Early metrics for object oriented designs,"To produce high quality object-oriented systems, a strong emphasis on the development process is necessary. This implies two implicit and complementary goals. First, to ensure full control over the whole process, enabling accurate cost and delay estimation, resource-efficient management, and a better overall understanding. Second, to improve quality all along the system lifecycle at the development and maintenance stages. On the client side, a steady control over the development process implies better detection and elimination of faults, raising the viability and usability of the system.
This paper introduces a realistic example of metrics integration in the development process of object-oriented software. By addressing early stages of the design (ie. class diagram), we can anticipate design-level errors or warnings thus enabling a reduction of immediate and further costs and problems. Metrics used are issued from state of the art object-oriented research, and integrated in the widespread unified process of software development.",2004,0, 2617,Instruction level test methodology for CPU core software-based self-testing,"TIS (S. Shamshiri et al., 2004) is an instruction level methodology for CPU core self-testing that enhances the instruction set of a CPU with test instructions. Since the functionality of test instructions is the same as the NOP instruction, NOP instructions can be replaced with test instructions so that online testing can be done with no performance penalty. TIS tests different parts of the CPU and detects stuck-at faults. This method can be employed in offline and online testing of all kinds of processors. Hardware-oriented implementation of TIS was proposed previously (S. Shamshiri et al., 2004) that tests just the combinational units of the processor. Contributions of this paper are first, a software-based approach that reduces the hardware overhead to a reasonable size and second, testing the sequential parts of the processor besides the combinational parts. Both hardware and software oriented approaches are implemented on a pipelined CPU core and their area overheads are compared. To demonstrate the appropriateness of the TIS test technique, several programs are executed and fault coverage results are presented.",2004,0, 2618,Rule-based noise detection for software measurement data,"The quality of training data is an important issue for classification problems, such as classifying program modules into the fault-prone and not fault-prone groups. The removal of noisy instances will improve data quality, and consequently, performance of the classification model. We present an attractive rule-based noise detection approach, which detects noisy instances based on Boolean rules generated from the measurement data. The proposed approach is evaluated by injecting artificial noise into a clean or noise-free software measurement dataset. The clean dataset is extracted from software measurement data of a NASA software project developed for realtime predictions. The simulated noise is injected into the attributes of the dataset at different noise levels. The number of attributes subjected to noise is also varied for the given dataset. We compare our approach to a classification filter, which considers and eliminates misclassified instances as noisy data. It is shown that for the different noise levels, the proposed approach has better efficiency in detecting noisy instances than the C4.5-based classification filter. In addition, the noise detection performance of our approach increases very rapidly with an increase in the number of attributes corrupted.",2004,0, 2619,Generating multiple noise elimination filters with the ensemble-partitioning filter,"We present the ensemble-partitioning filter which is a generalization of some common filtering techniques developed in the literature. Filtering the training dataset, i.e., removing noisy data, can be used to improve the accuracy of the induced data mining learners. Tuning the few parameters of the ensemble-partitioning filter allows filtering a given data mining problem appropriately. 
For example, it is possible to specialize the ensemble-partitioning filter into the classification, ensemble, multiple-partitioning, or iterative-partitioning filter. The predictions of the filtering experts are then utilized such that if an instance is misclassified by a certain number of experts or learners, it is identified as noisy. The conservativeness of the ensemble-partitioning filter depends on the filtering level and the number of filtering iterations. A case study of software metrics data from a high assurance software project analyzes the similarities between the filters obtained from the specialization of the ensemble-partitioning filter. We show that over 25% of the time, the filters at different levels of conservativeness agree on labeling instances as noisy. In addition, the classification filter has the lowest agreement with the other filters.",2004,0, 2620,Transmission line model influence on fault diagnosis,"Artificial neural networks have been used to develop software applied to fault identification and classification in transmission lines with satisfactory results. The input data to the neural network are the sampled values of voltage and current waveforms. The values come from the digital fault recorders, which monitor the transmission lines and make the data available in their analog channels. It is extremely important, for the learning process of the neural network, to build databases that represent the fault scenarios properly. The aim of this paper is to evaluate the influence of transmission line models on fault diagnosis, using constant and frequency-dependent parameters.",2004,0, 2621,A mathematical morphological method to thin edge detection in dark region,"The performance of image segmentation depends on the output quality of the edge detection process. The typical edge-detection method is based on detecting pixels in an image with high gradient values and then applying a global threshold value to extract the edge points of the image. With these methods, some detected edge points may not belong to the edge and some thin edge points in dark regions of the image are eliminated. These eliminated edges may carry important features of the image. This paper proposes a new mathematical morphological edge-detecting algorithm based on the morphological residue transformation derived from the dilation operation to detect and preserve the thin edges. Moreover, this work adopts five bipolar oriented edge masks to prune the misdetected edge points. The experimental results show that the proposed algorithm successfully preserves the thin edges in the dark regions.",2004,0, 2622,Smooth ergodic hidden Markov model and its applications in text to speech systems,"In text-to-speech systems, the accuracy of information extraction from text is crucial in producing high quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is proposed and developed. This method has many advantages over its predecessors and it can complement many other text-to-speech converting systems in order to obtain improved performance.",2004,0, 2623,Using software tools and metrics to produce better quality test software,"Automatic test equipment (ATE) software is often written by test equipment engineers without professional software training. This may lead to poor designs and an excessive number of defects. 
The Naval Surface Warfare Center (NSWC), Corona Division, as the US Navy's recognized authority on test equipment assessment, has reviewed a large number of test software programs. As an aid in the review process, various software tools have been used, such as PC-lint™ or Understand for C++™. This paper focuses on software tools for C compilers since C is the most error-prone language in use today. The McCabe cyclomatic complexity metric and the Halstead complexity measures are just two of the ways to measure ""software quality"". Applying the best practices of industry, including coding standards, software tools, configuration management and other practices, produces better quality code in less time. Good quality code would also be easier to write, understand, maintain and upgrade.",2004,0, 2624,An emotional decision making help provision approach to distributed fault tolerance in MAS,"Faults are inevitable, especially in MASs (multi-agent systems), because of their distributed nature. This paper introduces a new approach for fault tolerance using help provision and emotional decision-making. Tasks that are split into criticality-assigned real-time subtasks according to their precedence graph are distributed among specialized agents with different skills. If a fault occurs for an agent in a way that it cannot continue its task, the agent requests help from the others with the same skill to redo or continue its task. Requested agents would help the faulty agent based on their nervousness about their own tasks compared to its task. It is also possible for an agent to discover, by polling, the death of another agent that has accepted one of its tasks. An implementation using the JADE platform is presented in this paper and the results are reported.",2004,0, 2625,A ratio sensitive image quality metric,"Many applications, such as mobile multimedia communication, require a fast quality measure for digitally coded images. Although the widely used peak signal-noise ratio (PSNR) has low computation cost, it fails to predict structured errors that dominate in digital images. On the other hand, human visual system (HVS) based metrics can improve the prediction accuracy. However, they are computationally complex and time consuming. This paper proposes a very simple image quality measure defined by a ratio based mathematical formulation that attempts to simulate the scaling of the human visual sensation to brightness. The current results demonstrate that the novel image quality metric predicts the subjective ratings better and has lower computation cost than the PSNR. As an alternative to the PSNR, the proposed metric can be used for on-line or real-time picture quality assessment applications.",2004,0, 2626,A median based interpolation algorithm for deinterlacing,"In this paper, state-of-the-art interpolation algorithms for deinterlacing within a single frame are investigated. Based on Chen's directional measurements and on the median filter, a novel interpolation algorithm for deinterlacing is proposed. By efficiently estimating the diagonal and vertical directional correlations of the neighboring pixels, the proposed method performs better than existing techniques on different images and video sequences, for both subjective and objective measurements. 
Additionally, the proposed method has a simple structure with low computation complexity, which therefore makes it simple to implement in hardware.",2004,0, 2627,Room environment monitoring system from PDA terminal,"A room environment monitoring and control system, via a PDA terminal, was designed and fabricated. An RF wireless sensor module with several air quality monitoring sensors was developed for an indoor environment monitoring system in home networking. The module has the capacity for various kinds of sensors such as humidity sensor, temperature sensor, CO2 sensor, flying dust sensor, etc. The developed module is very convenient for installation on the wall of a room or office, and the sensors can be easily replaced due to a well-designed module structure and RF connection method. To reduce the system cost, only one RF transmission block was used for sensor signal transmission to an 8051 microcontroller board using a time sharing method. In this home networking system, various indoor environmental parameters could be monitored in real time from the sensor module. Indoor vision was transferred to the client PC or PDA from a surveillance camera installed indoors or at a desired site. A Web server using Oracle DB was used for saving the vision data from the Web-camera and various data from the wireless sensor module.",2004,0, 2628,"Bluespec System Verilog: efficient, correct RTL from high level specifications","Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.",2004,0, 2629,PD diagnosis on medium voltage cables with oscillating voltage (OWTS),"Detecting, locating and evaluating partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for quality control after installation and preventive detection of arising service interruptions. A sophisticated evaluation is necessary to distinguish between PD in several insulating materials and also in different types of terminations and joints. For a most precise evaluation of the degree and risk caused by PD, it is suggested to use a test voltage shape that is preferably similar to the one under service conditions. Only under these requirements do the typical PD parameters, like inception and extinction voltage, PD level and PD pattern, correspond to significant operational values. On the other hand, the stress on the insulation should be limited during the diagnosis so as not to create irreversible damage and thereby worsen the condition of the test object. The paper introduces an oscillating wave test system (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application. Field data and experience reports are presented and discussed. 
These field data also serve as a good guide to the level of danger to the different insulating systems due to partial discharges.",2004,0, 2630,A data collection scheme for reliability evaluation and assessment-a practical case in Iran,"Data collection is an essential element of reliability assessment and many utilities throughout the world have established comprehensive procedures for assessing the performance of their electric power systems. Data collection is also a constituent part of quantitative power system reliability assessment in which system past performance and prediction of future performance are evaluated. This paper presents an overview of the Iran electric power system data collection scheme and the procedure for its reliability analysis. The scheme contains both the equipment reliability data collection procedure and the structure of reliability assessment. The former constitutes generation, transmission and distribution equipment data. The latter contains past performance and predicted future performance of the Iran power system. The benefits of this powerful database within an environment of change and uncertainty will help utilities to keep down cost, while meeting the multiple challenges of providing high degrees of reliability and power quality of electrical energy.",2004,0, 2631,A software phase locked loop for unbalanced and distorted utility conditions,"A software phase locked loop (PLL) for custom power devices is described in this paper. The PLL uses the previous and current samples of the 3-phase utility voltages and separates them into the sequence components. The phase angle and frequency of the estimated positive sequence component are tracked by the first two stages of the PLL. The third stage estimates the parameter Δθ, which is required by the first stage. Simulations of the performance of the PLL have been presented, as well as the performance of a dynamic voltage restorer incorporating the proposed PLL. The proposed PLL performs well under unbalanced and distorted utility conditions. It tracks the utility phase angle and frequency smoothly and also extracts the sequence components of the utility voltages. A DVR incorporating the PLL provides voltage support to sensitive loads and also acts as a harmonic filter.",2004,0, 2632,Implementation of DSP based relaying with particular reference to effect of STATCOM on transmission line protection,"The presence of flexible AC transmission system (FACTS) devices in the loop significantly affects the apparent resistance and reactance seen by a distance relay, as FACTS devices respond to changes in the power system configuration. The objective of this paper is to investigate the effect of mid-point compensation by a STATic synchronous COMpensator (STATCOM) on the performance of an impedance distance relay under normal load and fault conditions and to implement the adaptive distance-relaying scheme for transmission line protection. From the simulation studies carried out in PSCAD software and using analytical calculations, it is identified that there is a need for the distance relay to adapt to new settings in its characteristic for the detection of faults within the zone of protection of the transmission line. Apparent impedance is simulated for different loading and line to ground (L-G) fault conditions at different locations on the power system network. 
The proposed adaptive distance relay scheme has been implemented on a TMS320C50 digital signal processor (DSP) system.",2004,0, 2633,Development of a portable digital radiographic system based on FOP-coupled CMOS image sensor and its performance evaluation,"As a continuation of our digital X-ray imaging sensor R&D, we have developed a cost-effective, portable, digital radiographic system based on the CMOS image sensor coupled with a fiber optic plate (FOP) and a conventional scintillator. The imaging system mainly consists of a commercially available CMOS image sensor of 48×48 μm2 pixel size and 49.2×49.3 mm2 active area (RadEye™2 from Rad-icon Imaging Corp.), a FOP bundled with several millions of glass fibers of about 6 μm in diameter, a phosphor screen such as Min-R or Lanex series, a readout IC board and a GUI software we developed, and a battery-operated X-ray generator (20-60 kVp; up to 1 mA). Here the FOP was incorporated into the imaging system to reduce the performance degradation of the CMOS sensor and the readout IC board caused by irradiation, and also to improve image qualities. In this paper, we describe each imaging component of the fully-integrated portable digital radiographic system in detail, and also present its performance analysis with experimental measurements and acquired X-ray images in terms of system response with exposure, contrast-to-noise ratio (CNR), modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE).",2004,0, 2634,Development and performances of a dental digital radiographic system using a high resolution CCD image sensor,"A dental digital radiographic (DDR) system using a high resolution charge-coupled device (CCD) imaging sensor was developed and the performance of this system for dental clinic imaging was evaluated. In order to determine the performances of the system, the modulation transfer function (MTF), the signal to noise ratio according to X-ray exposure, the dose reduction effects and imaging quality of the system were investigated. This system consists of a CCD imaging sensor (pixel size: 22 μm) to detect X-rays, an electrical signal processing circuit and a graphical user interface software to display the images and diagnosis. The MTF was obtained from a Fourier transform of the line spread function (LSF), which was itself derived from the edge spread function (ESF) of a sharp edge image acquired. The spatial resolution of the system was measured at a 10% contrast in terms of the corresponding MTF value and the distance between the X-ray source and the CCD image sensor was fixed at 20 cm. The best image quality was obtained at the exposure conditions of 60 kVp, 7 mA and 0.05 sec. At this time, the signal to noise ratio and X-ray dose were 23 and 41% (194 μGy) of a film-based method (468 μGy). The spatial resolution of this system, the result of MTF, was approximately 12 line pairs per mm at the 0.05-s exposure time. Based on the results, the developed DDR system using a CCD imaging sensor could be suitably applied for intra-oral radiographic imaging because of its low dose, real time acquisition, no chemical processing, image storage and retrieval, etc.",2004,0, 2635,PET reconstruction with system matrix derived from point source measurements,"The quality of images reconstructed by statistical iterative methods depends on an accurate model of the relationship between image space and projection space through the system matrix. 
A method of acquiring the system matrix on the CPS Innovations HiRez scanner was developed. The system matrix was derived by positioning the point source in the scanner field of view and processing the response in projection space. Such responses include geometrical and detection physics components of the system matrix. The response is parameterized to correct the point source location and to smooth projection noise. Special attention was paid to the span concepts of the HiRez scanner. The projection operator for iterative reconstruction was constructed, taking into account the estimated response parameters. The computer-generated and acquired data were used to compare reconstructions obtained by the standard HiRez software with those produced by the better modeling. Results showed that better resolution and noise properties can be achieved.",2004,0, 2636,Spectral gamma detectors for hand-held radioisotope identification devices (RIDs) for nuclear security applications,"The in-situ identification of the isotope causing a radiation alarm at a border monitor is essential for a quick resolution of the case. Commonly, hand-held radioisotope identification devices (RIDs) are used to solve this task. Although a number of such devices are commercially available, we still see a gap between the requirements of the users and what is actually available. Two components are essential for the optimal performance of such devices: i) the quality of the raw data - the gamma spectra, and ii) isotope identification software matching the specifics of a certain gamma detector. In this paper, we investigate new spectral gamma detectors for potential use in hand-held radioisotope identification devices. The standard NaI detector is compared with new scintillation and room temperature semiconductor detectors. The following spectral gamma detectors were included in the study: NaI(Tl), LaCl3(Ce), 6LiI(Eu), CdWO4, hemispheric and coplanar CZT detectors. In the paper basic spectrometric properties are measured and compared",2004,0,2908 2637,The real-time computing model for a network based control system,"This paper studies a network based real-time control system, and proposes to model this system as a periodic real-time computing system. With efficient scheduling algorithms and a software fault-tolerance deadline mechanism, this model proves that the system can meet its task timing constraints while tolerating system faults. The simulation study shows that in cases with high failure probability, the lower priority tasks suffer a lot in completion rate. In cases with low failure probability, this algorithm works well with both high priority and lower priority tasks. This conclusion suggests that an Internet based control system should manage to keep the failure rate to the minimum to achieve a good system performance.",2004,0, 2638,Effects of low power electronics & computer equipment on power quality at distribution grid-measurements and forecast,"The paper discusses power quality issues in the case of simultaneous connection of a large number of low power nonlinear loads, like personal computers (PC), TV receivers, audio and video devices and similar electronic devices, to the grid. Such consumers are, on the one hand, sensitive devices that require a stable and high quality of supply, but on the other hand they are sources of harmonics. Operation of a single PC, as a representative of such loads, is modeled using Matlab-Simulink software and verified by measurements performed in the laboratory. 
The simultaneous operation of a large number of PCs is investigated by measurements on a real site. The measurements are performed at several locations connected to grids with a large number of PC type consumers (a computer center) and other low power loads (a highly populated residential area). The results are analyzed and compared with IEEE Standard 519 limits. In order to forecast a possible future situation, the developed model has been used for prediction of the ""worst"" case, i.e. the highest possible harmonic level. It is shown that a single PC is a significant source of harmonics. If a large number of PCs is connected to the grid, then they may present a serious threat to power quality, i.e. they can be a possible source of some negative effects.",2004,0, 2639,An implementation of a general regression network on FPGA with direct Matlab link,"Neural networks play a key role in many electronic applications; they can be found everywhere from industrial control applications to predictive models. They are mainly implemented as software entities because they require a great number of complex mathematical operations. With the increasing power and capabilities of current FPGAs, it is now possible to translate them into hardware. Such hardware implementations increase both the speed and usefulness of these neural networks. This paper presents a hardware implementation of a particular neural network, the general regression neural network. This network is able to approximate functions and it is used in control, prediction, fault diagnosis, engine management and many other areas. The paper describes an implementation of this neural network using different hardware platforms and a different implementation for each hardware target. This paper also presents an integrated development environment to produce the final hardware description in VHDL code from the Matlab generated neural network. The paper also describes a simulation scheme to test if the assumptions made to increase the performance of the network have a negative impact on the precision for the particular implementation.",2004,0, 2640,Supporting architecture evaluation process with collaborative applications,"The software architecture community has proposed several approaches to assess the capability of a system's architecture with respect to desired quality attributes (such as maintainability and performance). Scenario-based approaches are considered effective and mature. However, these methods heavily rely on face-to-face meetings, which are expensive and time consuming. Encouraged by the successful adoption of Internet-based technologies for several meeting based activities, we have been developing an approach to support architecture evaluation using Web-based collaborative applications. In this paper, we present a preliminary framework for conducting architecture evaluation in a distributed arrangement. We identify some supportive technologies and their expected benefits. We also present some of the initial findings of a research program designed to assess the effectiveness of the proposed idea. Findings of this study provide some support for distributed architecture evaluation.",2004,0, 2641,A Case for Clumsy Packet Processors,"Hardware faults can occur in any computer system. Although faults cannot be tolerated for most systems (e.g., servers or desktop processors), many applications (e.g., networking applications) provide robustness in software. 
However, processors do not utilize this resiliency, i.e., regardless of the application at hand, a processor is expected to operate completely fault-free. In this paper, we will question this traditional approach of complete correctness and investigate possible performance and energy optimizations when this correctness constraint is released. We first develop a realistic model that estimates the change in the fault rates according to the clock frequency of the cache. Then, we present a scheme that dynamically adjusts the clock frequency of the data caches to achieve the desired optimization goal, e.g., reduced energy or reduced access latency. Finally, we present simulation results investigating the optimal operation frequency of the data caches, where reliability is compromised in exchange for reduced energy and increased performance. Our simulation results indicate that the clock frequency of the data caches can be increased as much as 4 times without incurring a major penalty on reliability. This also results in a 41% reduction in the energy consumed in the data caches and a 24% reduction in the energy-delay-fallibility product.",2004,0, 2642,Bandwidth adaptive multimedia streaming for PDA applications over WLAN environment,"Network management and bandwidth adaptation are important areas for wireless multimedia streaming applications. We have developed a managed wireless network with a bandwidth adaptive media streaming technique for IEEE 802.11x applications. The unique contributions of this project are: (1) real-time detection of wireless bandwidth change via an XML/Java based SNMP system, (2) dynamic adaptation of streaming media to optimize the quality, and (3) coordination and performance optimization among the SNMP server, media streaming server, wireless AP points, and wireless clients. Based on this technique, we are able to deliver wireless streaming services adapted to the bandwidth change. The SNMP server, media-streaming server, and wireless AP points are coupled and operated in a well-coordinated fashion. The software implementation was written in a combination of Java, XML and C#. The bandwidth adaptation is user transparent while the media streaming settings are modified on the fly. The experiments were conducted and they confirmed our design.",2004,0, 2643,Evolving legacy systems through a multi-objective decision process,"Our previous work on improving the quality of object-oriented legacy systems includes: i) devising a quality-driven re-engineering framework (L. Tahvildari et al., 2003); ii) proposing a software transformation framework based on soft-goal interdependency graphs to enhance quality (L. Tahvildari and K. Kontogiannis, 2002); and iii) investigating the usage of metrics for detecting potential design flaws (L. Tahvildari and K. Kontogiannis, 2004). This paper defines a decision making process that determines a list of source-code improving transformations among several applicable transformations. The decision-making process is developed based on a multi-objective decision analysis technique. This type of technique is necessary as there are a number of different, and sometimes conflicting, criteria among non-functional requirements. 
For the migrant system, the proposed approach uses heuristic estimates to guide the discovery process",2004,0, 2644,Enabling SMT for real-time embedded systems,"In order to deal with real time constraints, current embedded processors are usually simple in-order processors with no speculation capabilities to ensure that execution times of applications are predictable. However, embedded systems require ever more compute power and the trend is that they will become as complex as current high performance systems. SMTs are viable candidates for future high performance embedded processors, because of their good cost/performance trade-off. However, current SMTs exhibit unpredictable performance. Hence, the SMT hardware needs to be adapted in order to meet real time constraints. This paper is a first step toward the use of high performance SMT processors in future real time systems. We present a novel collaboration between OS and SMT processors that entails that the OS exercises control over how resources are shared inside the processor. We illustrate this collaboration by a mechanism in which the OS cooperates with the SMT hardware to guarantee that a given thread runs at a specific speed, enabling SMT for real-time systems.",2004,0, 2645,Finding predictors of field defects for open source software systems in commonly available data sources: a case study of OpenBSD,"Open source software systems are important components of many business software applications. Field defect predictions for open source software systems may allow organizations to make informed decisions regarding open source software components. In this paper, we remotely measure and analyze predictors (metrics available before release) mined from established data sources (the code repository and the request tracking system) as well as a novel source of data (mailing list archives) for nine releases of OpenBSD. First, we attempt to predict field defects by extending a software reliability model fitted to development defects. We find this approach to be infeasible, which motivates examining metrics-based field defect prediction. Then, we evaluate 139 predictors using established statistical methods: Kendall's rank correlation, Pearson's rank correlation, and forward AIC model selection. The metrics we collect include product metrics, development metrics, deployment and usage metrics, and software and hardware configurations metrics. We find the number of messages to the technical discussion mailing list during the development period (a deployment and usage metric captured from mailing list archives) to be the best predictor of field defects. Our work identifies predictors of field defects in commonly available data sources for open source software systems and is a step towards metrics-based field defect prediction for quantitatively-based decision making regarding open source software components",2005,1, 2646,Empirical validation of object-oriented metrics on open source software for fault prediction,"Open source software systems are becoming increasingly important these days. Many companies are investing in open source projects and lots of them are also using such software in their own work. But, because open source software is often developed with a different management style than the industrial ones, the quality and reliability of the code needs to be studied. Hence, the characteristics of the source code of these projects need to be measured to obtain more information about it. 
This paper describes how we calculated the object-oriented metrics given by Chidamber and Kemerer to illustrate how fault-proneness detection of the source code of the open source Web and e-mail suite called Mozilla can be carried out. We checked the values obtained against the number of bugs found in its bug database - called Bugzilla - using regression and machine learning methods to validate the usefulness of these metrics for fault-proneness prediction. We also compared the metrics of several versions of Mozilla to see how the predicted fault-proneness of the software system changed during its development cycle.",2005,1, 2647,Use of relative code churn measures to predict system defect density,"Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs, etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault and not fault-prone binaries with an accuracy of 89.0 percent.",2005,1, 2648,Measurement of harmonics/inter-harmonics of time-varying frequencies,"A novel method of extraction and measurement of individual harmonics of a signal with time-varying frequency is presented. The proposed method is based on a nonlinear, adaptive mechanism. Compared with well-established techniques such as the DFT, the proposed method offers (i) a higher degree of accuracy, (ii) structural/performance robustness, and (iii) frequency-adaptivity. The structural simplicity of the algorithm renders it suitable for both software and hardware implementations. The limitation of the proposed method as compared with DFT-based methods is its slower transient response. Based on simulation studies, the performance of the method is presented and its accuracy and response time are compared with a DFT-based method.",2005,0, 2649,High-impedance fault detection using discrete wavelet transform and frequency range and RMS conversion,"High-impedance faults (HIFs) are faults which are difficult to detect by overcurrent protection relays. Various pattern recognition techniques have been suggested, including the use of the wavelet transform. However, this method cannot indicate the physical properties of output coefficients using the wavelet transform. We propose to use the Discrete Wavelet Transform (DWT) as well as frequency range and rms conversion to apply a pattern recognition based detection algorithm for electric distribution high impedance fault detection. The aim is to recognize the converted rms voltage and current values caused by arcs usually associated with HIF. The analysis using the discrete wavelet transform (DWT) with the conversion yields measurement voltages and currents which are fed to a classifier for pattern recognition. The classifier is based on an algorithm using the nearest neighbor rule approach. 
It is proposed that this method can function as a decision support software package for HIF identification which could be installed in an alarm system.",2005,0, 2650,Channel and source considerations of a bit-rate reduction technique for a possible wireless communications system's performance enhancement,"In wireless commercial and military communications systems, where bandwidth is at a premium, robust low-bit-rate speech coders are essential. They operate at fixed bit rates and those bit rates cannot be altered without major modifications in the vocoder design. A novel approach to vocoders, in order to reduce the bit rate required to transmit the speech signal, is proposed. While traditional low-bit-rate vocoders code the original input speech, the proposed procedure operates on the time-scale modified signal. The proposed method offers any bit rate from 2400 b/s downwards without modifying the principal vocoder structure, which is the new NATO standard, Stanag 4591, Mixed Excitation Linear Prediction (MELP) vocoder. We consider the application of transmitting MELP-encoded speech over noisy communication channels by applying different modulation techniques, after time-scale compression is applied. Three different time-scale modification algorithms have been evaluated and the waveform similarity overlap and add (WSOLA) algorithm has been selected for time-scale modification purposes. Computer simulation results, both source and channel, are presented in terms of objective speech quality metrics and informal subjective listening tests. Design parameters such as codec complexity and delay are also investigated. Simulation results lead to a possible wireless communications system, whose performance might be enhanced by using the spared bits offered by the procedure.",2005,0, 2651,An Alternative to Technology Readiness Levels for Non-Developmental Item (NDI) Software,"Within the Department of Defense, Technology Readiness Levels (TRLs) are increasingly used as a tool in assessing program risk. While there is considerable evidence to support the utility of using TRLs as part of an overall risk assessment, some characteristics of TRLs limit their applicability to software products, especially Non-Developmental Item (NDI) software including Commercial-Off-The-Shelf, Government-Off-The-Shelf, and Open Source Software. These limitations take four principal forms: 1) ""blurring-together"" various aspects of NDI technology/product readiness; 2) the absence of some important readiness attributes; 3) NDI product ""decay;"" and 4) no recognition of the temporal nature of system development and acquisition context. This paper briefly explores these issues, and describes an alternate methodology which combines the desirable aspects of TRLs with additional readiness attributes, and defines an evaluation framework which is easily understandable, extensible, and applicable across the full spectrum of NDI software.",2005,0, 2652,Reusability metrics for software components,"Summary form only given. Assessing the reusability, adaptability, compose-ability and flexibility of software components is more and more of a necessity due to the growing popularity of component based software development (CBSD). Even if there are some metrics defined for the reusability of object-oriented software (OOS), they cannot be used for CBSD because these metrics require analysis of source code. The aim of this paper is to study the adaptability and compose-ability of software components, both qualitatively and quantitatively. 
We propose metrics and a mathematical model for the above-mentioned characteristics of software components. The interface characterization is the starting point of our evaluation. The adaptability of a component is discussed in conjunction with the complexity of its interface. The compose-ability metric defined for database components is extended for general software components. We also propose a metric for the complexity and adaptability of the problem solved by a component, based on its use cases. The number of alternate flows from the use case narrative is considered as a measure of the complexity of the problem solved by a component. This was our starting point in developing a set of metrics for evaluating components functionality-wise. The main advantage of defining these metrics is the possibility to measure adaptability, reusability and quality of software components, and therefore to identify the most effective reuse strategy.",2005,0, 2653,Pseudo dynamic metrics [software metrics],"Summary form only given. Software metrics have become an integral part of software development and are used during every phase of the software development life cycle. Research in the area of software metrics tends to focus predominantly on static metrics that are obtained by static analysis of the software artifact. But software quality attributes such as performance and reliability depend on the dynamic behavior of the software artifact. Estimates of software quality attributes based on dynamic metrics for the software system are more accurate and realistic. The research presented in this paper attempts to narrow the gap between static metrics and dynamic metrics, and lay the foundation for a more systematic approach to estimate the dynamic behavior of a software system early in the software development cycle. Focusing on coupling metrics, we present an empirical study to analyze the relationship between static and dynamic coupling metrics and propose the concept of pseudo dynamic metrics to estimate the dynamic behavior early in the software development lifecycle.",2005,0, 2654,A technique based on the OMG metamodel and OCL for the definition of object-oriented metrics applied to UML models,"Summary form only given. The development of large software systems is an activity that consumes great quantities of time and resources. Even with increased automation of software development activities, resources remain scarce. As a consequence, there is great interest in software metrics due to their potential for a better, more efficient use of resources. Tools assist in the planning and in the estimation of the complexity of the applications to be developed during the different stages of a process. We describe a technique to define metrics using the OMG (Object Management Group) standard specification. The semantics of each metric is specified formally with OCL (Object Constraint Language) based on OMG metamodels. For their concrete specification, we establish a series of steps that allows each metric to be defined uniformly in the different RUP models. Furthermore, the present paper shows the use of metrics defined using this technique and the relation between the data obtained from the application of the metrics to thirteen object oriented systems. 
These encompass projects from the capture of requirements to their implementation, showing the metrics applied at different stages of the deployment.",2005,0, 2655,Quantifying software architectures: an analysis of change propagation probabilities,"Summary form only given. Software architectures are an emerging discipline in software engineering as they play a central role in many modern software development paradigms. Quantifying software architectures is an important research agenda, as it allows software architects to subjectively assess quality attributes and rationalize architecture-related decisions. In this paper, we discuss the attribute of change propagation probability, which reflects the likelihood that a change that arises in one component of the architecture propagates (i.e. mandates changes) to other components.",2005,0, 2656,An extended Chi2 algorithm for discretization of real value attributes,"The variable precision rough sets (VPRS) model is a powerful tool for data mining, as it has been widely applied to acquire knowledge. Despite its diverse applications in many domains, the VPRS model unfortunately cannot be applied to real-world classification tasks involving continuous attributes. This requires a discretization method to preprocess the data. Discretization is an effective technique to deal with continuous attributes for data mining, especially for the classification problem. The modified Chi2 algorithm is one of the modifications to the Chi2 algorithm, replacing the inconsistency check in the Chi2 algorithm by using the quality of approximation, coined from the rough sets theory (RST), in which it takes into account the effect of degrees of freedom. However, the classification with a controlled degree of uncertainty, or a misclassification error, is outside the realm of RST. This algorithm also ignores the effect of variance in the two merged intervals. In this study, we propose a new algorithm, named the extended Chi2 algorithm, to overcome these two drawbacks. By running the software of See5, our proposed algorithm possesses a better performance than the original and modified Chi2 algorithms.",2005,0, 2657,Model-based performance risk analysis,"Performance is a nonfunctional software attribute that plays a crucial role in wide application domains spreading from safety-critical systems to e-commerce applications. Software risk can be quantified as a combination of the probability that a software system may fail and the severity of the damages caused by the failure. In this paper, we devise a methodology for estimation of a performance-based risk factor, which originates from violations of performance requirements (namely, performance failures). The methodology elaborates annotated UML diagrams to estimate the performance failure probability and combines it with the failure severity estimate, which is obtained using the functional failure analysis. We are thus able to determine risky scenarios as well as risky software components, and the analysis feedback can be used to improve the software design. We illustrate the methodology on an e-commerce case study using a step-by-step approach, and then provide a brief description of a case study based on a large real system.",2005,0, 2658,Class point: an approach for the size estimation of object-oriented systems,"In this paper, we present an FP-like approach, named class point, which was conceived to estimate the size of object-oriented products. 
In particular, two measures are proposed, which are theoretically validated showing that they satisfy well-known properties necessary for size measures. An initial, empirical validation is also performed, meant to assess the usefulness and effectiveness of the proposed measures to predict the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures.",2005,0, 2659,Estimation of complex permittivity of arbitrary shape and size dielectric samples using cavity measurement technique at microwave frequencies,"In this paper, a simple cavity measurement technique is presented to estimate the complex permittivity of arbitrary shape and size dielectric samples. The measured shift in resonant frequency and change in quality factor due to the dielectric sample loading in the cavity is compared with simulated values obtained using the finite-element method software high frequency structure simulator and matched using the Newton-Raphson method to estimate the complex permittivity of an arbitrary shape and size dielectric sample. The complex permittivity of Teflon (PTFE) and an MgO-SiC composite is estimated in the S-band using this method for four different samples of varying sizes and shapes. The result for Teflon shows a good agreement with the previously published data, and for the MgO-SiC composite, the estimated real and imaginary parts of permittivity for four different shape and size samples are within 10%, proving the usefulness of the method. This method is particularly suitable for estimation of the complex permittivity of high-loss materials",2005,0, 2660,"Analysis, fast algorithm, and VLSI architecture design for H.264/AVC intra frame coder","Intra prediction with rate-distortion constrained mode decision is the most important technology in the H.264/AVC intra frame coder, which is competitive with the latest image coding standard JPEG2000, in terms of both coding performance and computational complexity. The predictor generation engine for intra prediction and the transform engine for mode decision are critical because the operations require a lot of memory access and occupy 80% of the computation time of the entire intra compression process. A low cost general purpose processor cannot process these operations in real time. In this paper, we propose two solutions for platform-based design of the H.264/AVC intra frame coder. One solution is a software implementation targeted at low-end applications. Context-based decimation of unlikely candidates, subsampling of matching operations, bit-width truncation to reduce the computations, and an interleaved full-search/partial-search strategy to stop the error propagation and to maintain the image quality, are proposed and combined as our fast algorithm. Experimental results show that our method can reduce 60% of the computation used for intra prediction and mode decision while keeping the peak signal-to-noise ratio degradation less than 0.3 dB. The other solution is a hardware accelerator targeted at high-end applications. After comprehensive analysis of instructions and exploration of parallelism, we propose our system architecture with four-parallel intra prediction and mode decision to enhance the processing capability. Hadamard-based mode decision is modified to a discrete cosine transform-based version to reduce 40% of memory access. Two-stage macroblock pipelining is also proposed to double the processing speed and hardware utilization. 
The other features of our design are a reconfigurable predictor generator supporting all of the 13 intra prediction modes, a parallel multitransform and inverse transform engine, and a CAVLC bitstream engine. A prototype chip is fabricated with TSMC 0.25-μm CMOS 1P5M technology. Simulation results show that our implementation can process 16 mega-pixels (4096×4096) within 1 s, namely 720×480 4:2:0 30 Hz video in real time, at the operating frequency of 54 MHz. The transistor count is 429 K, and the core size is only 1.855×1.885 mm2.",2005,0, 2661,Evaluation of effects of pair work on quality of designs,"Quality is a key issue in the development of software products. Although the literature acknowledges the importance of the design phase of the software lifecycle and the effects of the design process and intermediate products on the final product, little progress has been achieved in addressing the quality of designs. This is partly due to difficulties associated with defining quality attributes with precision and measuring the many different types and styles of design products, as well as problems with assessing the methodologies utilized in the design process. In this research we report on an empirical investigation that we conducted to examine and evaluate quality attributes of design products created through a process of pair-design and solo-design. The pair-design methodology involves pair programming principles where two people work together and periodically switch between the roles of driver and navigator. The evaluation of the quality of design products was based on ISO/IEC 9126 standards. Our results show some mixed findings about the effects of pair work on the quality of design products.",2005,0, 2662,Detecting indirect coupling,"Coupling is considered by many to be an important concept in measuring design quality. There is still much to be learned about which aspects of coupling affect design quality or other external attributes of software. Much of the existing work concentrates on direct coupling, that is, forms of coupling that exist between entities that are directly related to each other. A form of coupling that has so far received little attention is indirect coupling, that is, coupling between entities that are not directly related. What little discussion there is in the literature suggests that any form of indirect coupling is simply the transitive closure of a form of direct coupling. We demonstrate that this is not the case, that there are forms of indirect coupling that cannot be represented in this way, and suggest ways to measure it. We present a tool that identifies a particular form of indirect coupling and that is integrated in the Eclipse IDE.",2005,0, 2663,SWIFT: software implemented fault tolerance,"To improve performance and reduce power, processor designers employ advances that shrink feature sizes, lower voltage levels, reduce noise margins, and increase clock rates. However, these advances make processors more susceptible to transient faults that can affect correctness. While reliable systems typically employ hardware techniques to address soft-errors, software techniques can provide a lower-cost and more flexible alternative. This paper presents a novel, software-only, transient-fault-detection technique, called SWIFT. SWIFT efficiently manages redundancy by reclaiming unused instruction-level resources present during the execution of most programs. 
SWIFT also provides a high level of protection and performance with an enhanced control-flow checking mechanism. We evaluate an implementation of SWIFT on an Itanium 2 which demonstrates exceptional fault coverage with a reasonable performance cost. Compared to the best known single-threaded approach utilizing an ECC memory system, SWIFT demonstrates a 51% average speedup.",2005,0, 2664,ADAMS Re-Trace: A Traceability Recovery Tool,"We present the traceability recovery tool developed in the ADAMS artefact management system. The tool is based on an Information Retrieval technique, namely Latent Semantic Indexing, and aims at supporting the software engineer in the identification of the traceability links between artefacts of different types. We also present a case study involving seven student projects, which represented an ideal workbench for the tool. The results emphasise the benefits provided by the tool in terms of new traceability links discovered, in addition to the links manually traced by the software engineer. Moreover, the tool was also helpful in identifying cases of lack of similarity between artefacts manually traced by the software engineer, thus revealing inconsistencies in the usage of domain terms in these artefacts. This information is valuable to assess the quality of the produced artefacts.",2005,0, 2665,Towards the Optimization of Automatic Detection of Design Flaws in Object-Oriented Software Systems,"In order to increase the maintainability and the flexibility of a software system, its design and implementation quality must be properly assessed. For this purpose a large number of metrics and several higher-level mechanisms based on metrics are defined in the literature. But the accuracy of these quantification means is heavily dependent on the proper selection of threshold values, which is oftentimes totally empirical and unreliable. In this paper we present a novel method for establishing proper threshold values for metrics-based rules used to detect design flaws in object-oriented systems. The method, metaphorically called ""tuning machine"", is based on inferring the threshold values from a set of reference examples, manually classified into ""flawed"" and ""healthy"" design entities (e.g., classes, methods). More precisely, the ""tuning machine"" searches, based on a genetic algorithm, for those thresholds which maximize the number of correctly classified entities. The paper also defines a repeatable process for collecting examples, and discusses the encouraging and intriguing results obtained while applying the approach on two concrete metrics-based rules that quantify two well-known design flaws, i.e., ""God Class"" and ""Data Class"".",2005,0, 2666,A Tool for Static and Dynamic Model Extraction and Impact Analysis,"Planning changes is often an imprecise task and implementing changes is often time consuming and error prone. One reason for these problems is inadequate support for efficient analysis of the impacts of the performed changes. The paper presents a technique, and associated tool, that uses a mixed static and dynamic model extraction for supporting the analysis of the impacts of changes.",2005,0, 2667,Reducing Corrective Maintenance Effort Considering Module's History,"A software package evolves in time through various maintenance release steps whose effectiveness depends mainly on the number of faults left in the modules. The testing phase is therefore critical to discover these faults. 
The purpose of this paper is to show a criterion to estimate an optimal allocation of available testing time among software modules in a maintenance release. In order to achieve this objective, we have used fault prediction techniques based both on classical complexity metrics and on an additional, innovative factor related to the module's age in terms of release. This method can actually diminish corrective maintenance effort, while assuring a high reliability for the delivered software.",2005,0, 2668,Experimental VoIP capacity measurements for 802.11b WLANs,"There is an increasing interest in supporting voice over IP (VoIP) applications over wireless local area networks (WLANs). To provide a quality of service guarantee to voice traffic, an admission control mechanism must be administered to avoid overloading the network. In this paper, we present an experimental evaluation of VoIP capacity in WLANs under various data and voice loads. The obtained capacity estimates can be used in an admission control engine to make an admission or a rejection decision. In this paper, we identify a suitable evaluation metric to quantify the performance of a voice call. The metric accounts for packet losses and packet delays for each voice flow. We then define voice capacity in a WLAN and present experimental capacity measurements under various background data traffic loads. The experimental capacity measurements are immediately useful for WLAN hotspot providers and enterprise WLAN architects.",2005,0, 2669,Application fault tolerance with Armor middleware,"Many current approaches to software-implemented fault tolerance (SIFT) rely on process replication, which is often prohibitively expensive for practical use due to its high performance overhead and cost. The adaptive reconfigurable mobile objects of reliability (Armor) middleware architecture offers a scalable low-overhead way to provide high-dependability services to applications. It uses coordinated multithreaded processes to manage redundant resources across interconnected nodes, detect errors in user applications and infrastructural components, and provide failure recovery. The authors describe the experiences and lessons learned in deploying Armor in several diverse fields.",2005,0, 2670,Quality metric for approximating subjective evaluation of 3-D objects,"Many factors, such as the number of vertices and the resolution of texture, can affect the display quality of three-dimensional (3-D) objects. When the resources of a graphics system are not sufficient to render the ideal image, degradation is inevitable. It is, therefore, important to study how individual factors will affect the overall quality, and how the degradation can be controlled given limited resources. In this paper, the essential factors determining the display quality are reviewed. We then integrate two important ones, resolution of texture and resolution of wireframe, and use them in our model as a perceptual metric. We assess this metric using statistical data collected from a 3-D quality evaluation experiment. The statistical model and the methodology to assess the display quality metric are discussed. A preliminary study of the reliability of the estimates is also described. The contribution of this paper lies in: 1) determining the relative importance of wireframe versus texture resolution in perceptual quality evaluation and 2) proposing an experimental strategy for verifying and fitting a quantitative model that estimates 3-D perceptual quality. 
The proposed quantitative method is found to fit closely to subjective ratings by human observers based on preliminary experimental results.",2005,0, 2671,Addressing the performance of two software reliability modeling methods,"
",2005,0, 2672,Variance expressions for software reliability growth models,"
",2005,0, 2673,Embedded system engineering using C/C++ based design methodologies,This paper analyzes and compares the effectiveness of various system level design methodologies in assessing performance of embedded computing systems from the earliest stages of the design flow. The different methodologies are illustrated and evaluated by applying them to the design of an aircraft pressurization system (APS). The APS is mapped on a heterogeneous hardware/software platform consisting of two ASICs and a microcontroller. The results demonstrate the high impact of computer aided design (CAD) tools on design time and quality.,2005,0, 2674,Development life cycle management: a multiproject experiment,"A variety of life cycle models for software systems development are generally available. However, it is generally difficult to compare and contrast the methods and very little literature is available to guide developers and managers in making choices. Moreover in order to make informed decisions developers require access to real data that compares the different models and the results associated with the adoption of each model. This paper describes an experiment in which fifteen software teams developed comparable software products using four different development approaches (V-model, incremental, evolutionary and extreme programming). Extensive measurements were taken to assess the time, quality, size, and development efficiency of each product. The paper presents the experimental data collected and the conclusions related to the choice of method, its impact on the project and the quality of the results as well as the general implications to the practice of systems engineering project management.",2005,0, 2675,Encoder-Assisted Adaptive Video Frame Interpolation,"
",2005,0, 2676,Dynamic routing in translucent WDM optical networks: the intradomain case,"Translucent wavelength-division multiplexing optical networks use sparse placement of regenerators to overcome physical impairments and wavelength contention introduced by fully transparent networks, and achieve a performance close to fully opaque networks at a much less cost. In previous studies, we addressed the placement of regenerators based on static schemes, allowing for only a limited number of regenerators at fixed locations. This paper furthers those studies by proposing a dynamic resource allocation and dynamic routing scheme to operate translucent networks. This scheme is realized through dynamically sharing regeneration resources, including transmitters, receivers, and electronic interfaces, between regeneration and access functions under a multidomain hierarchical translucent network model. An intradomain routing algorithm, which takes into consideration optical-layer constraints as well as dynamic allocation of regeneration resources, is developed to address the problem of translucent dynamic routing in a single routing domain. Network performance in terms of blocking probability, resource utilization, and running times under different resource allocation and routing schemes is measured through simulation experiments.",2005,0, 2677,Software-synchronized all-optical sampling for fiber communication systems,"This paper describes a software-synchronized all-optical sampling system that presents synchronous eye diagrams and data patterns as well as calculates accurate Q values without requiring clock recovery. A synchronization algorithm is presented that calculates the offset frequency between the data bit rate and the sampling rate, and as a result, synchronous eye diagrams can be presented. The algorithm is shown to be robust toward poor signal quality and adds less than 100-fs timing drift to the eye diagrams. An extension of the software synchronization algorithm makes it possible to automatically find the pattern length of a periodic data pattern in a data signal. As a result, individual pulses can be investigated and detrimental effects present on the data signal can be identified. Noise averaging can also be applied. To measure accurate Q values without clock recovery, a high sampling rate is required in order to establish the noise statistics of the measured signal before any timing drift occurs. This paper presents a system with a 100-MHz sampling rate that measures accurate Q values at bit rates as high as 160 Gb/s. The high bandwidth of the optical sampling system also contributes to sampling more noise, which in turn results in lower Q values compared with conventional electrical sampling with a lower bandwidth. A theory that estimates the optically sampled Q values as a function of the sampling gate width is proposed and experimentally verified.",2005,0, 2678,Self-Managing Sensor-Based Middleware for Performance Monitoring and Data Integration in Grids,"This paper describes a sensor-based middleware for performance monitoring and data integration in the Grid that is capable of self-management. The middleware unifies both system and application monitoring in a single system, storing various types of monitoring and performance data in decentralized storages, and providing a uniform interface to access that data. We have developed event-driven and demand-driven sensors to support rule-based monitoring and data integration. 
Grid service-based operations and TCP-based data delivery are exploited to balance tradeoffs between interoperability, flexibility and performance. Peer-to-peer features have been integrated into the middleware, enabling self-managing capabilities, supporting group-based and automatic data discovery, data query and subscription of performance and monitoring data.",2005,0, 2679,A Cost-Effective Main Memory Organization for Future Servers,"Today, the amount of main memory in mid-range servers is pushing practical limits with as much as 192 GB memory in a 24 processor system. Further, with the onset of multi-threaded, multi-core processor chips, it is likely that the number of memory chips per processor chip will start to increase, making DRAM cost and size an even larger burden. We investigate in this paper an alternative main memory organization - a two-level noninclusive memory hierarchy - where the second level is substantially slower than the first level, with the aim of reducing total system cost and spatial requirements of servers of today and the future. We quantitatively investigate how big and how slow the second level can be. Surprisingly, we find that only 30% of the entire memory resources typically needed must be accessed at DRAM speed whereas the rest can be accessed at a speed that is an order of magnitude slower with a negligible (1.2% on average) performance impact. We also present a cost-effective implementation of how to manage such a hierarchy and how it can bring down memory cost by leveraging memory compression and sharing of memory resources among servers.",2005,0, 2680,Resource Allocation for Periodic Applications in a Shipboard Environment,"Providing efficient workload management is an important issue for a large-scale heterogeneous distributed computing environment where a set of periodic applications is executed. The considered distributed system is expected to operate in an environment where the input workload is likely to change unpredictably, possibly invalidating a resource allocation that was based on the initial workload estimate. The tasks consist of multiple application strings, each made up of an ordered sequence of applications. There are quality of service (QoS) constraints that must be satisfied for each string. This work addresses the problem of finding a robust initial allocation of resources to application strings that is able to absorb some level of unknown input workload increase without rescheduling. An allocation feasibility analysis is presented followed by four heuristics for finding a near-optimal allocation of resources. The performance of the proposed heuristics is evaluated and compared using simulation. The proposed heuristics also are compared to a mathematically derived upper bound.",2005,0, 2681,Optimizing checkpoint sizes in the C3 system,"The running times of many computational science applications are much longer than the mean-time-between-failures (MTBF) of current high-performance computing platforms. To run to completion, such applications must tolerate hardware failures. Checkpoint-and-restart (CPR) is the most commonly used scheme for accomplishing this - the state of the computation is saved periodically on stable storage, and when a hardware failure is detected, the computation is restarted from the most recently saved state.
Most automatic CPR schemes in the literature can be classified as system-level checkpointing schemes because they take core-dump style snapshots of the computational state when all the processes are blocked at global barriers in the program. Unfortunately, a system that implements this style of checkpointing is tied to a particular platform and cannot optimize the checkpointing process using application-specific knowledge. We are exploring an alternative called automatic application-level checkpointing. In our approach, programs are transformed by a pre-processor so that they become self-checkpointing and self-restartable on any platform. In this paper, we evaluate a mechanism that utilizes application knowledge to minimize the amount of information saved in a checkpoint.",2005,0, 2682,"A model-driven performance analysis framework for distributed, performance-sensitive software systems","Large-scale, distributed, performance-sensitive software (DPSS) systems form the basis of mission- and often safety-critical applications. DPSS systems comprise many independent artifacts, such as network/bus interconnects, many coordinated local and remote endsystems, and multiple layers of software. DPSS systems demand multiple, simultaneous predictable performance requirements such as end-to-end latencies, throughput, reliability and security, while also requiring the ability to control and adapt operating characteristics for applications with respect to such features as time, quantity of information, and quality of service (QoS) properties including predictability, controllability, and adaptability of operating characteristics for applications with respect to such features as time, quantity of information, accuracy, confidence and synchronization. All these issues become highly volatile in large-scale DPSS systems due to the dynamic interplay of the many interconnected parts that are often constructed from smaller parts.",2005,0, 2683,Understanding perceptual distortion in MPEG scalable audio coding,"In this paper, we study coding artifacts in MPEG-compressed scalable audio. Specifically, we consider the MPEG advanced audio coder (AAC) using bit slice scalable arithmetic coding (BSAC) as implemented in the MPEG-4 reference software. First we perform human subjective testing using the comparison category rating (CCR) approach, quantitatively comparing the performance of scalable BSAC with the nonscaled TwinVQ and AAC algorithms. This testing indicates that scalable BSAC performs very poorly relative to TwinVQ at the lowest bitrate considered (16 kb/s) largely because of an annoying and seemingly random mid-range tonal signal that is superimposed onto the desired output. In order to better understand and quantify the distortion introduced into compressed audio at low bit rates, we apply two analysis techniques: Reng bifrequency probing and time-frequency decomposition. Using Reng probing, we conclude that aliasing is most likely not the cause of the annoying tonal signal; instead, time-frequency or spectrogram analysis indicates that its cause is most likely suboptimal bit allocation.
Finally, we describe the energy equalization quality metric (EEQM) for predicting the relative perceptual performance of the different coding algorithms and compare its predictive ability with that of ITU Recommendation ITU-R BS.1387-1.",2005,0, 2684,Tool-based configuration of real-time CORBA middleware for embedded systems,"Real-time CORBA is a middleware standard that has demonstrated successes in developing distributed, realtime, and embedded (DRE) systems. Customizing real-time CORBA for an application can considerably reduce the size of the middleware and improve its performance. However, customizing middleware is an error-prone task and requires deep knowledge of the CORBA standard as well as the middleware design. This paper presents ZEN-kit, a graphical tool for customizing RTZen (an RTSJ-based implementation of real-time CORBA). This customization is achieved through modularizing the middleware so that features may be inserted or removed based on the DRE application requirements. This paper presents three main contributions: 1) it describes how real-time CORBA features can be modularized and configured in RTZen using components and aspects, 2) it provides a configuration strategy to customize real-time middleware to achieve low-footprint ORBs, and 3) it presents ZEN-kit, a graphical tool for composing customized real-time middleware.",2005,0, 2685,Customizing event ordering middleware for component-based systems,"The stringent performance requirements of distributed realtime embedded systems often require highly optimized implementations of middleware services. Performing such optimizations manually can be tedious and error-prone. This paper proposes a model-driven approach to generate customized implementations of event ordering services in the context of component based systems. Our approach is accompanied by a number of tools to automate the customization. Given an application App, an event ordering service Order and a middleware platform P, we provide tools to analyze high-level specifications of App to extract information relevant to event ordering and to use the extracted application information to obtain a customized service, Order(App), with respect to the application usage.",2005,0, 2686,Do SQA programs work - CMM works. a meta analysis,"Many software development professionals and managers of software development organizations are not fully convinced in the profitability of investments for the advancement of SQA systems. The results included in each of the articles we found, cannot lead to general conclusions on the impact of investments in upgrading an SQA system. Our meta analysis was based on CMM level transition (CMMLT) analysis of available publications and was for the seven most common performance metric. The CMMLT analysis is applicable for combined analysis of empirical data from many sources. Each record in our meta analysis database is calculated as ""after-before ratio"", which is nearly free of the studied organization's characteristics. Because the CMM guidelines and SQA requirement are similar, we claim that the results for CMM programs are also applicable to investments in SQA systems. 
The extensive database of over 1,800 projects from a variety of 19 information sources leading to the meta analysis results - proved that investments in CMM programs and similarly in SQA systems contribute to software development performance.",2005,0, 2687,Estimating the number of faults remaining in software code documents inspected with iterative code reviews,"Code review is considered an efficient method for detecting faults in a software code document. The number of faults not detected by the review should be small. Current methods for estimating this number assume reviews with several inspectors, but there are many cases where it is practical to employ only two inspectors. Sufficiently accurate estimates may be obtained by two inspectors employing an iterative code review (ICR) process. This paper introduces a new estimator for the number of undetected faults in an ICR process, so the process may be stopped when a satisfactory result is estimated. This technique employs the Kantorowitz estimator for N-fold inspections, where the N teams are replaced by N reviews. The estimator was tested for three years in an industrial project, where it produced satisfactory results. More experiments are needed in order to fully evaluate the approach.",2005,0, 2688,Multilevel-converter-based VSC transmission operating under fault AC conditions,"A study of a floating-capacitor (FC) multilevel-converter-based VSC transmission system operating under unbalanced AC conditions is presented. The control strategy is based on the use of two controllers, i.e. a main controller, which is implemented in the synchronous d-q frame without involving positive and negative sequence decomposition, and an auxiliary controller, which is implemented in the negative sequence d-q frame with the negative sequence current extracted. Automatic power balancing during AC fault is achieved without communication between the two converters by automatic power modulation on the detection of abnormal DC voltages. The impact of unbalanced floating capacitor voltages of the FC converter on power devices is described. A software-based method, which adds square waves whose amplitudes vary with the capacitor voltage errors to the nominal modulation signals for fast capacitor voltage balancing during faults, is proposed. Simulations on a 300 kV DC, 300 MW VSC transmission system based on a four-level FC converter show good performance of the proposed control strategy during unbalanced conditions caused by single-phase to ground fault.",2005,0, 2689,Theoretical consideration of adaptive fault tolerance architecture for open distributed object-oriented computing,"In distributed object-oriented computing, a complex set of requirements has to be considered for maintaining the required level of reliability of objects to give higher level of performance. The authors proposed a mechanism to analyze the complexity of these underlying environments and design a dynamically reconfigurable architecture according to the changes of the underlying environment. The replication should have a suitable architecture for the adaptable fault tolerance so that it can handle the different situations of the underlying system before the fault occurs. The system can provide the required level of reliability by measuring the reliability of the underlying environment and then either adjusting the replication degree or migrating the object appropriately. It is also possible to improve the reliability with shifting to a suitable replication protocol adaptively. 
However, there should be a mechanism to overcome the problems when ""replication protocol transition period"" exists. This will also find a client-server group communication protocol that communicates with objects under both different environmental conditions and different replication protocols.",2005,0, 2690,High-abstraction level complexity analysis and memory architecture simulations of multimedia algorithms,"An appropriate complexity analysis stage is the first and fundamental step for any methodology aiming at the implementation of today's (complex) multimedia algorithms. Such a stage may have different final implementation goals such as defining a new architecture dedicated to the specific multimedia standard under study, or defining an optimal instruction set for a selected processor architecture, or to guide the software optimization process in terms of control-flow and data-flow optimization targeting a specific architecture. The complexity of nowadays multimedia standards, in terms of number of lines of codes and cross-relations among processing algorithms that are activated by specific input signals, goes far beyond what the designer can reasonably grasp from the ""pencil and paper"" analysis of the (software) specifications. Moreover, depending on the implementation goal different measures and metrics are required at different steps of the implementation methodology or design flow. The process of extracting the desired measures needs to be supported by appropriate automatic tools, since code rewriting, at each design stage, may result resource consuming and error prone. This paper reviews the state of the art of complexity analysis methodologies oriented to the design of multimedia systems and presents an integrated tool for automatic analysis capable of producing complexity results based on rich and customizable metrics. The tool is based on a C virtual machine that allows extracting from any C program execution the operations and data-flow information, according to the defined metrics. The tool capabilities include the simulation of virtual memory architectures. This paper shows some examples of complexity analysis results that can be yielded with the tool and presents how the tools can be used at different stages of implementation methodologies.",2005,0, 2691,Opportunistic file transfer over a fading channel under energy and delay constraints,"We consider transmission control (rate and power) strategies for transferring a fixed-size file (finite number of bits) over fading channels under constraints on both transmit energy and transmission delay. The goal is to maximize the probability of successfully transferring the entire file over a time-varying wireless channel modeled as a finite-state Markov process. We study two implementations regarding the delay constraints: an average delay constraint and a strict delay constraint. We also investigate the performance degradation caused by the imperfect (delayed or erroneous) channel knowledge. The resulting optimal policies are shown to be a function of the channel-state information (CSI), the residual battery energy, and the number of residual information bits in the transmit buffer. It is observed that the probability of successful file transfer increases significantly when the CSI is exploited opportunistically. When the perfect instantaneous CSI is available at the transmitter, the faster channel variations increase the success probability under delay constraints. 
In addition, when considering the power expenditure in the pilot for channel estimation, the optimal policy shows that the transmitter should use the pilot only if there is sufficient energy left for packet transfer; otherwise, a channel-independent policy should be used.",2005,0, 2692,Design Aspects for Wide-Area Monitoring and Control Systems,"This paper discusses the basic design and special applications of wide-area monitoring and control systems, which complement classical protection systems and Supervisory Control and Data Acquisition/Energy Management System applications. Systemwide installed phasor measurement units send their measured data to a central computer, where snapshots of the dynamic system behavior are made available online. This new quality of system information opens up a wide range of new applications to assess and actively maintain system's stability in case of voltage, angle or frequency instability, thermal overload, and oscillations. Recently developed algorithms and their design for these application areas are introduced. With practical examples, the benefits in terms of system security are shown.",2005,0, 2693,Managing a relational database with intelligent agents,"A prototype relational database system was developed that has indexing capability, which threads into data acquisition and analysis programs used by a wide range of researchers. To streamline the user interface and table design, free-formatted table entries were used as descriptors for experiments. This approach potentially could increase data entry errors, compromising system index and retrieval capabilities. A methodology of integrating intelligent agents with the relational database was developed to cleanse and improve the data quality for search and retrieval. An intelligent agent was designed using JACK™ (Agent Oriented Software Group) and integrated with an Oracle-based relational database. The system was tested by triggering agent corrective measures and was found to improve the quality of the data entries. Wider testing protocols and metrics for assessing its performance are subjects for future studies. This methodology for designing intelligent-based database systems should be useful in developing robust large-scale database systems.",2005,0, 2694,Advanced suppression of stochastic pulse shaped partial discharge disturbances,"Digital partial discharge (PD) diagnosis and testing systems are state of the art for performing quality assurance or fault identification on high voltage apparatus as well as commissioning tests. However their on-site application is a rather difficult task as PD information is generally superimposed with electromagnetic disturbances. These disturbances affect negatively all known PD evaluation systems. Especially the detection and suppression of stochastically distributed pulse shaped disturbances is a major problem. To determine such disturbances fast machine intelligent recognition systems are being developed. Three different strategies based on digital signal processing are discussed in this paper, while focusing on time resolved neural signal recognition. The system investigated more closely is currently able to distinguish between PD pulses and disturbances with the disturbances values being ten times higher than the peak values of the PD pulses. Therefore a measuring system acquires the input data in the VHF range (20-100 MHz). The discrimination of the pulses is performed in real time in time domain using fast neural network hardware.
With that signal recognition system a noise reassessed phase resolved pulse sequence (PRPS) data set in the CIGRE data format is generated that can be the input source of most PD evaluation software. Optionally a noise reassessed analogue data stream can be generated that is suitable for any conventional PD measuring system.",2005,0, 2695,Pro-active Page Replacement for Scientific Applications: A Characterization,"Paging policies implemented by today's operating systems cause scientific applications to exhibit poor performance, when the application's working set does not fit in main memory. This has been typically attributed to the sub-optimal performance of LRU-like virtual-memory replacement algorithms. On one end of the spectrum, researchers in the past have proposed fully automated compiler-based techniques that provide crucial information on future access patterns (reuse-distances, release hints etc) of an application that can be exploited by the operating system to make intelligent prefetching and replacement decisions. Static techniques like the aforementioned can be quite accurate, but require that the source code be available and analyzable. At the other end of the spectrum, researchers have also proposed pure system-level algorithmic innovations to improve the performance of LRU-like algorithms, some of which are only interesting from the theoretical sense and may not really be implementable. Instead, in this paper we explore the possibility of tracking application's runtime behavior in the operating system, and find that there are several useful characteristics in the virtual memory behavior that can be anticipated and used to pro-actively manage physical memory usage. Specifically, we show that LRU-like replacement algorithms hold onto pages long after they outlive their usefulness and propose a new replacement algorithm that exploits the predictability of the application's page-fault patterns to reduce the number of page-faults. Our results demonstrate that such techniques can reduce page-faults by as much as 78% over both LRU and EELRU that is considered to be one of the state-of-the-art algorithms towards addressing the performance shortcomings of LRU. Further, we also present an implementable replacement algorithm within the operating system, that performs considerably better than the Linux kernel's replacement algorithm",2005,0, 2696,Load balancing of services with server initiated connections,"The growth of on-line Internet services using wireless mobile handsets has increased the demand for scalable and dependable wireless services. The systems hosting such services face high quality-of-service (QoS) requirements in terms of quick response time and high availability. With growing traffic and loads using wireless applications, the servers and networks hosting these applications need to handle larger loads. Hence, there is a need to host such services on multiple servers to distribute the processing and communications tasks and balance the load across various similar entities. Load balancing is especially important for servers facilitating hosting of various wireless applications where it is difficult to predict the load delivered to a server. A load-balancer (LB) is a node that accepts all requests from external entities and directs them to internal nodes for processing based on their processing capabilities and current load patterns. We present the problem of load balancing among multiple servers, each having server initiated connections with other network entities. 
Challenges involved in balancing loads arising from such connections are presented and some practical solutions are proposed. As a case study, the architectures of load balancing schemes on a set of SMS (short message service) gateway servers is presented, along with deployment strategies. Performance and scalability issues are also highlighted for different possible solutions.",2005,0, 2697,Design and evaluation of hybrid fault-detection systems,"As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that can affect program correctness. Up to now, system designers have primarily considered hardware-only and software-only fault-detection mechanisms to identify and mitigate the deleterious effects of transient faults. These two fault-detection systems, however, are extremes in the design space, representing sharp trade-offs between hardware cost, reliability, and performance. In this paper, we identify hybrid hardware/software fault-detection mechanisms as promising alternatives to hardware-only and software-only systems. These hybrid systems offer designers more options to fit their reliability needs within their hardware and performance budgets. We propose and evaluate CRAFT, a suite of three such hybrid techniques, to illustrate the potential of the hybrid approach. For fair, quantitative comparisons among hardware, software, and hybrid systems, we introduce a new metric, mean work to failure, which is able to compare systems for which machine instructions do not represent a constant unit of work. Additionally, we present a new simulation framework which rapidly assesses reliability and does not depend on manual identification of failure modes. Our evaluation illustrates that CRAFT, and hybrid techniques in general, offer attractive options in the fault-detection design space.",2005,0, 2698,Modeling and analysis of non-functional requirements as aspects in a UML based architecture design,"The problem of effectively designing and analyzing software system to meet its nonfunctional requirements such as performance, security, and adaptability is critical to the system's success. The significant benefits of such work include detecting and removing defects earlier, reducing development time and cost while improving the quality. The formal design analysis framework (FDAF) is an aspect-oriented approach that supports the design and analysis of non-functional requirements for distributed, real-time systems. In the FDAF, nonfunctional requirements are defined as reusable aspects in the repository and the conventional UML has been extended to support the design of these aspects. FDAF supports the automated translation of extended, aspect-oriented UML designs into existing formal notations, leveraging an extensive body of formal methods work. In this paper, the design and analysis of response time performance aspect is described. An example system, the ATM/banking system has been used to illustrate this process.",2005,0, 2699,Parallel processors and an approach to the development of inference engine,"The reliable and fault tolerant computers are key to the success to aerospace, and communication industries where failures of the system can cause a significant economic impact and loss of life. Designing a reliable digital system, and detecting and repairing the faults are challenging tasks in order for the digital system to operate without failures for a given period of time. 
The paper presents a new and systematic software engineering approach of performing fault diagnosis of digital systems, which have employed multiple processors. The fault diagnosis model is based on the classic PMC model to generate data obtained on the basis of test results performed by the processors. The PMC model poses a tremendous challenge to the user in doing fault analysis on the basis of test results performed by the processors. This paper will perform one fault model for developing software. The effort has been made to preserve the necessary and sufficient.",2005,0, 2700,Flexible Consistency for Wide Area Peer Replication,"The lack of a flexible consistency management solution hinders P2P implementation of applications involving updates, such as read-write file sharing, directory services, online auctions and wide area collaboration. Managing mutable shared data in a P2P setting requires a consistency solution that can operate efficiently over variable-quality failure-prone networks, support pervasive replication for scaling, and give peers autonomy to tune consistency to their sharing needs and resource constraints. Existing solutions lack one or more of these features. In this paper, we described a new consistency model for P2P sharing of mutable data called composable consistency, and outline its implementation in a wide area middleware file service called Swarm. Composable consistency lets applications compose consistency semantics appropriate for their sharing needs by combining a small set of primitive options. Swarm implements these options efficiently to support scalable, pervasive, failure-resilient, wide-area replication behind a simple yet flexible interface. Two applications was presented to demonstrate the expressive power and effectiveness of composable consistency: a wide area file system that outperforms Coda in providing close-to-open consistency over WANs, and a replicated BerkeleyDB database that reaps order-of-magnitude performance gains by relaxing consistency for queries and updates",2005,0, 2701,Magnetic field measurements for fast-changing magnetic fields,"Several recent applications for fast ramped magnets have been found that require rapid measurement of the field quality during the ramp. (In one instance, accelerator dipoles will be ramped at 1 T/sec, with measurements needed to the accuracy typically required for accelerators.) We have built and tested a new type of magnetic field measuring system to meet this need. The system consists of 16 stationary pickup windings mounted on a cylinder. The signals induced in the windings in a changing magnetic field are sampled and analyzed to obtain the field harmonics. To minimize costs, printed circuit boards were used for the pickup windings and a combination of amplifiers and ADC's used for the voltage readout system. New software was developed for the analysis. Magnetic field measurements of a model dipole developed for the SIS200 accelerator at GSI are presented. The measurements are needed to ensure that eddy currents induced by the fast ramps do not impact the field quality required for successful accelerator operation.",2005,0, 2702,Developing and assuring trustworthy Web services,"Web services are emerging technologies that are changing the way we develop and use computer systems and software. Current Web services testing techniques are unable to assure the desired level of trustworthiness, which presents a barrier to WS applications in mission and business critical environments. 
This paper presents a framework that assures the trustworthiness of Web services. New assurance techniques are developed within the framework, including specification verification via completeness and consistency checking, specification refinement, distributed Web services development, test case generation, and automated Web services testing. Traditional test case generation methods only generate positive test cases that verify the functionality of software. The Swiss cheese test case generation method proposed in this paper is designed to perform both positive and negative testing that also reveal the vulnerability of Web services. This integrated development process is implemented in a case study. The experimental evaluation demonstrates the effectiveness of this approach. It also reveals that the Swiss cheese negative testing detects even more faults than positive testing and thus significantly reduces the vulnerability of Web services.",2005,0, 2703,Effect of preventive rejuvenation in communication network system with burst arrival,"Long running software systems are known to experience an aging phenomenon called software aging, one in which the accumulation of errors during the execution of software leads to performance degradation and eventually results in failure. To counteract this phenomenon a proactive fault management approach, called software rejuvenation, is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. In this paper, we perform the dependability analysis of a client/server software system with rejuvenation under the assumption that the requests arrive according to the Markov modulated Poisson process. Three dependability measures, steady-state availability, loss probability of requests and mean response time on tasks, are derived through the hidden Markovian analysis based on the time-based software rejuvenation scheme. In numerical examples, we investigate the sensitivity of some model parameters to the dependability measures.",2005,0, 2704,Digital gas fields produce real time scheduling based on intelligent autonomous decentralized system,"In the paper, we focus on exploring to construct a novel digital gas fields produce real-time scheduling system, which is based on the theory of intelligent autonomous decentralized system (IADS). The system combines the practical demand and the characteristics of the gas fields, and it aims at dealing with the real-time property, dynamic and complexity during gas fields produce scheduling. Besides embodying on-line intelligent expansion, intelligent fault tolerance and on-line intelligent maintenance of IADS particular properties, the scheme adequately attaches importance to the flexibility. The model & method based on intelligent information pull push (IIPP) and intelligent management (IM) is applied to the system, thus it is helpful to improve the performance of the scheduling system. In according with the current requirement of gas fields produce scheduling, the concrete solvent is put forward. The related system architecture and software structure is presented. An effective method of solving the real-time property was developed for use in scheduling of the gas field produce. We simulated some experiments, and the result demonstrates the validity of the system. 
At Last, we predict its promising research trend in the future of gas fields practice application.",2005,0, 2705,A comprehensive model for software rejuvenation,"Recently, the phenomenon of software aging, one in which the state of the software system degrades with time, has been reported. This phenomenon, which may eventually lead to system performance degradation and/or crash/hang failure, is the result of exhaustion of operating system resources, data corruption, and numerical error accumulation. To counteract software aging, a technique called software rejuvenation has been proposed, which essentially involves occasionally terminating an application or a system, cleaning its internal state and/or its environment, and restarting it. Since rejuvenation incurs an overhead, an important research issue is to determine optimal times to initiate this action. In this paper, we first describe how to include faults attributed to software aging in the framework of Gray's software fault classification (deterministic and transient), and study the treatment and recovery strategies for each of the fault classes. We then construct a semi-Markov reward model based on workload and resource usage data collected from the UNIX operating system. We identify different workload states using statistical cluster analysis, estimate transition probabilities, and sojourn time distributions from the data. Corresponding to each resource, a reward function is then defined for the model based on the rate of resource depletion in each state. The model is then solved to obtain estimated times to exhaustion for each resource. The result from the semi-Markov reward model are then fed into a higher-level availability model that accounts for failure followed by reactive recovery, as well as proactive recovery. This comprehensive model is then used to derive optimal rejuvenation schedules that maximize availability or minimize downtime cost.",2005,0, 2706,Project overlapping and its influence on the product quality,"Time to market, quality and cost are the three most important factors when developing software. In order to achieve and retain a leading position in the market, developers are forced to produce more complex functionalities, much faster and more frequently. In such conditions, it is hard to keep a high quality level and low cost. The article focuses on the reliability aspect of quality as one of the most important quality factors to the customer. Special attention is devoted to the reliability of the software product being developed in project overlapping conditions. The Weibull reliability growth model for predicting the reliability of a product during the development process is adapted and applied to historical data of a sequence of overlapping projects. A few useful tips on reliability modeling for project management and planning in project overlapping conditions are presented",2005,0, 2707,"""De-Randomizing"" congestion losses to improve TCP performance over wired-wireless networks","Currently, a TCP sender considers all losses as congestion signals and reacts to them by throttling its sending rate. With Internet becoming more heterogeneous with more and more wireless error-prone links, a TCP connection may unduly throttle its sending rate and experience poor performance over paths experiencing random losses unrelated to congestion. The problem of distinguishing congestion losses from random losses is particularly hard when congestion is light: congestion losses themselves appear to be random. 
The key idea is to ""de-randomize"" congestion losses. This paper proposes a simple biased queue management scheme that ""de-randomizes"" congestion losses and enables a TCP receiver to diagnose accurately the cause of a loss and inform the TCP sender to react appropriately. Bounds on the accuracy of distinguishing wireless losses and congestion losses are analytically established and validated through simulations. Congestion losses are identified with an accuracy higher than 95% while wireless losses are identified with an accuracy higher than 75%. A closed form is derived for the achievable improvement by TCP endowed with a discriminator with a given accuracy. Simulations confirm this closed form. TCP-Casablanca, a TCP-Newreno endowed with the proposed discriminator at the receiver, yields through simulations an improvement of more than 100% on paths with low levels of congestion and about 1% random wireless packet loss rates. TCP-Ifrane, a sender-based TCP-Casablanca yields encouraging performance improvement.",2005,0, 2708,Fast multi-frame motion estimation for H.264 and its applications to complexity-aware streaming,"JVT/H.264 achieves higher compression efficiency than previous video coding standards such as MPEG4 and H.263. However, this comes at the cost of increased complexity due to the use of variable block-size, multiple reference frame motion compensation, and other advanced coding algorithms. In this paper, we present a fast motion estimation algorithm. This algorithm is based on an adaptive search strategy and a flexible multi-frame search scheme. Compared to the H.264 reference software JM 8.5, this algorithm achieves, on average, a 522% reduction of encoding time, with negligible PSNR drop and bit-rate increase. Performance comparison results are described, and the architecture of a complexity-aware streaming server that applies the algorithm to provide differentiated services is discussed.",2005,0, 2709,Implementation aspects of a novel speech packet loss concealment method,"A speech data packet loss concealment algorithm based on pitch period repetition is presented and a novel low complexity method to refine a pitch period estimate is introduced. Objective performance measurements show that this pitch refinement improves the quality of packet loss concealment. Hardware-software codesign techniques have been investigated to implement the algorithm. Using a coprocessor approach, a processing delay of 0.9 ms and a overall speedup of 3.3 was achieved.",2005,0, 2710,A locally adaptive subsampling algorithm for software based motion estimation,"Motion estimation is an important technology in video coding systems. Although many algorithms have been proposed to reduce its computational complexity, they are still insufficient for implementation of a software based system. The paper proposes several techniques for low-complexity motion estimation, such as a locally adaptive subsampling. Since they reduce the number of SAD (sum of absolute differences) operations, the computational complexity can be dramatically reduced while maintaining high image quality. Simulation results show that the proposed algorithm has about 20 times the speedup as TSS (three step search). This means that the proposed algorithm can attain enough processing performance for real-time implementation on embedded processors. 
Furthermore, PSNR of our method is about 0.2 dB superior to that of TSS.",2005,0, 2711,Algorithmic optimization of H.264/AVC encoder,"Several platform independent optimizations for a baseline profile H.264/AVC encoder are described. The optimizations include adaptive diamond pattern based motion estimation, fast sub-pel motion vector refinement and heuristic intra prediction. In addition, loop unrolling, early out thresholds and adaptive inverse transforms are used. An experimental complexity analysis is presented studying effect of optimizations on the encoding frame rate on the AMD Athlon processor. Trade-offs in rate-distortion performance are also measured. Compared to a public reference encoder, speed-ups of 4-8 have been obtained with 0.6-0.8 dB loss in image quality. In practice, our software only H.264 encoder achieves an encoding rate of 86 QCIF frames/s that is well above real-time limits.",2005,0, 2712,Digital system for detection and classification of electrical events,"The paper describes an algorithm to detect and classify electrical events related to power quality. The events detection is based on monitoring the statistical characteristics of the energy of the error signal, which is defined as the difference between the monitored waveform and a sinusoidal wave generated with the same magnitude, frequency, and phase as the fundamental sinusoidal component. The novel feature is event recognition based on a neural network that uses the error signal as input. Multi-rate techniques are also employed to improve the system operation for on-line applications. Software tests were performed showing the good performance of the system.",2005,0, 2713,FD-HGAC: a hybrid heuristic/genetic algorithm hardware/software co-synthesis framework with fault detection,"Embedded real-time systems are becoming increasingly complex. To combat the rising design cost of those systems, co-synthesis tools that map tasks to systems containing both software and specialized hardware have been developed. As system transient fault rates increase due to technology scaling, embedded systems must be designed in fault tolerant ways to maintain system reliability. This paper presents and analyzes FD-HGAC, a tool using a genetic algorithm and heuristics to design real-time systems with partial fault detection. Results of numerous trials of the tool are shown to produce systems with average 22% detection coverage that incurs no cost or performance penalty.",2005,0, 2714,Using loop invariants to fight soft errors in data caches,"Ever scaling process technology makes embedded systems more vulnerable to soft errors than in the past. One of the generic methods used to fight soft errors is based on duplicating instructions either in the spatial or temporal domain and then comparing the results to see whether they are different. This full duplication based scheme, though effective, is very expensive in terms of performance, power, and memory space. In this paper, we propose an alternate scheme based on loop invariants and present experimental results which show that our approach catches 62% of the errors caught by full duplication, when averaged over all benchmarks tested. In addition, it reduces the execution cycles and memory demand of the full duplication strategy by 80% and 4%, respectively.",2005,0, 2715,Bi-layer segmentation of binocular stereo video,"This paper describes two algorithms capable of real-time segmentation of foreground from background layers in stereo video sequences. 
Automatic separation of layers from colour/contrast or from stereo alone is known to be error-prone. Here, colour, contrast and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, layered dynamic programming (LDP), solves stereo in an extended 6-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive colour model that is learned on the fly, and stereo disparities are obtained by dynamic programming. The second algorithm, layered graph cut (LGC), does not directly solve stereo. Instead the stereo match likelihood is marginalised over foreground and background hypotheses, and fused with a contrast-sensitive colour model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than stereo or colour/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.",2005,0, 2716,Assessing the performance of erasure codes in the wide-area,"The problem of efficiently retrieving a file that has been broken into blocks and distributed across the wide-area pervades applications that utilize grid, peer-to-peer, and distributed file systems. While the use of erasure codes to improve the fault-tolerance and performance of wide-area file systems has been explored, there has been little work that assesses the performance and quantifies the impact of modifying various parameters. This paper performs such an assessment. We modify our previously defined framework for studying replication in the wide-area to include both Reed-Solomon and low-density parity-check (LDPC) erasure codes. We then use this framework to compare Reed-Solomon and LDPC erasure codes in three wide-area, distributed settings. We conclude that although LDPC codes have an advantage over Reed-Solomon codes in terms of decoding cost, this advantage does not always translate to the best overall performance in wide-area storage situations.",2005,0, 2717,On a method for mending time to failure distributions,"Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all non-homogeneous Poisson process (NHPP) models of the finite failures category share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite failures and infinite failures NHPP models.",2005,0, 2718,A cost-efficient server architecture for real-time credit-control,"The importance of mobile and electronic commerce results in much attention given to credit-control systems. There are high non-functional requirements on such systems, e.g.
high availability and reliability. These requirements can be met by changing the architecture of the credit-control system. In this paper the authors suggested a new architecture and a number of alternative implementations of credit-control server. By quantifying the availability, reliability, performance and cost the designers enabled to make better trade-off decisions. The new architecture was compared with the current, ""state-of-the-art"" solution. Finally, suggestions for the system developers concerning the choice of an appropriate architecture implementation variant were presented.",2005,0, 2719,A feasible schedulability analysis for fault-tolerant hard real-time systems,"Hard real-time systems require predictable performance despite the occurrence of failures. In this paper, the authors proposed a new fault-tolerant priority assignment algorithm based on worst-case response time schedulability analysis for fault-tolerant hard real-time system. This algorithm can be used, together with the schedulability analysis, to effectively improve system fault resilience when the two traditional fault-tolerant priority assignment policies cannot improve system fault resilience. Also, a fault-tolerant priority configuration search algorithm for the proposed analysis was presented.",2005,0, 2720,Comparing fault-proneness estimation models,"Over the last, years, software quality has become one of the most important requirements in the development of systems. Fault-proneness estimation could play a key role in quality control of software products. In this area, much effort has been spent in defining metrics and identifying models for system assessment. Using this metrics to assess which parts of the system are more fault-proneness is of primary importance. This paper reports a research study begun with the analysis of more than 100 metrics and aimed at producing suitable models for fault-proneness estimation and prediction of software modules/files. The objective has been to find a compromise between the fault-proneness estimation rate and the size of the estimation model in terms of number of metrics used in the model itself. To this end, two different methodologies have been used, compared, and some synergies exploited. The methodologies were the logistic regression and the discriminant analyses. The corresponding models produced for fault-proneness estimation and prediction have been based on metrics addressing different aspects of computer programming. The comparison has produced satisfactory results in terms of fault-proneness prediction. The produced models have been cross validated by using data sets derived from source codes provided by two application scenarios.",2005,0, 2721,Inconsistency measurement of software requirements specifications: an ontology-based approach,"Management of requirements inconsistency is key to the development of complex trustworthy software system, and precise measurement is precondition for the management of requirements inconsistency properly. But at present, although there are a lot of work on the detection of requirements inconsistency, most of them are limited in treating requirements inconsistency according to heuristic rules, we still lacks of promising method for handling requirements inconsistency properly. Based on an abstract requirements refinement process model, this paper takes domain ontology as infrastructure for the refinement of software requirements, the aim of which is to get requirements descriptions that are comparable. 
Thus we can measure requirements inconsistency based on the tangent plane of the requirements refinement tree, after we have detected inconsistent relations of leaf nodes at the semantic level.",2005,0, 2722,Modeling the effects of 1 MeV electron radiation in gallium-arsenide solar cells using SILVACO® virtual wafer fabrication software,The ATLAS device simulator from Silvaco International has the potential for predicting the effects of electron radiation in solar cells by modeling material defects. A GaAs solar cell was simulated in ATLAS and compared to an actual cell with radiation defects identified using deep level transient spectroscopy techniques (DLTS). The solar cells were compared for various fluence levels of 1 MeV electron radiation and showed an average of less than three percent difference between experimental and simulated cell output characteristics. These results demonstrate that ATLAS software can be a viable tool for predicting solar cell degradation due to electron radiation.,2005,0, 2723,Evaluation and performance comparison of power swing detection algorithms,"This paper presents performance comparisons and evaluations of several power swing detection algorithms. Any sudden change in the configuration or the loading of an electrical network causes power swing between the load concentrations of the network. In order to prevent the distance protection from tripping during such conditions, a power swing blocking is utilized. The decreasing impedance, the Vcosφ and the superimposed currents methods are conventional algorithms for power swing detection. In this paper, the behavior of these algorithms is evaluated. Different conditions have been generated by EMTDC software and voltage and current waveforms have been given to the power swing detectors. Operation of the power swing detectors has been investigated for different conditions and the operation of different algorithms is compared.",2005,0, 2724,Real-time energy market design and operations challenges at the ISO New England,"On March 1, 2003, ISO New England launched its LMP based energy market for both real-time and day-ahead. Under this SMD implementation, the day-ahead market is a financial forward market, which does not enforce obligation on the physical generating resources to provide energy, while the real-time market involves the dispatch of physical resources to serve the real-time load in a secure and reliable manner. In this paper an overview of the real-time energy market in ISO New England is presented. Also, some operations and design challenges of the energy market were discussed.",2005,0, 2725,Discerning user-perceived media stream quality through application-layer measurements,"The design of access networks for proper support of multimedia applications requires an understanding of how the conditions of the underlying network (packet loss and delays, for instance) affect the performance of a media stream. In particular, network congestion can affect the user-perceived quality of a media stream. By choosing metrics that indicate and/or predict the quality ranking that a user would assign to a media stream, we can deduce the performance of a media stream without polling users directly. We describe a measurement mechanism utilizing objective measurements taken from a media player application that strongly correlate with user rankings of stream quality.
Experimental results demonstrate the viability of the chosen metrics as predictors or indicators of user quality rankings, and suggest a new mechanism for evaluating the present and future quality of a media stream.",2005,0, 2726,Predicting the probability of change in object-oriented systems,"Of all merits of the object-oriented paradigm, flexibility is probably the most important in a world of constantly changing requirements and the most striking difference compared to previous approaches. However, it is rather difficult to quantify this aspect of quality: this paper describes a probabilistic approach to estimate the change proneness of an object-oriented design by evaluating the probability that each class of the system will be affected when new functionality is added or when existing functionality is modified. It is obvious that when a system exhibits a large sensitivity to changes, the corresponding design quality is questionable. The extracted probabilities of change can be used to assist maintenance and to observe the evolution of stability through successive generations and identify a possible ""saturation"" level beyond which any attempt to improve the design without major refactoring is impossible. The proposed model has been evaluated on two multiversion open source projects. The process has been fully automated by a Java program, while statistical analysis has proved improved correlation between the extracted probabilities and actual changes in each of the classes in comparison to a prediction model that relies simply on past data.",2005,0, 2727,Fast motion estimation algorithm based on predictive line diamond search technology,"Motion estimation (ME) is one of the most time-consuming parts in video encoding systems, and significantly affects the output quality of an encoded sequence. In this paper, a new algorithm is presented, referred to as the predictive line diamond search (PLDS), which is a gradient descent search combined with a new search strategy, namely, one-dimensional line search (1DLS). 1DLS assists a local search around an initial search center with a low computational complexity. After performing 1DLS, a small diamond search pattern and a more compact line search pattern are adaptively used according to the results of the previous search. Our experimental results show that, compared with the previous techniques, the proposed algorithm has lower computational complexity and provides better prediction performance, especially for fast or complex motion sequences.",2005,0, 2728,Some schedulers to achieve proportional junk rate differentiation,"The queueing delay or loss rate is usually used as the performance metric for the real-time multimedia applications. In this paper, to provide more suitable quality of services (QoS) requirements of some applications, we propose a new performance metric, junk rate, where junk is the packet which queueing delay exceeds its time threshold. The model of proportional junk rate differentiation is also constructed to provide the predictable and controllable ratio of junk rates for different classes. Three schedulers, namely, proportional junk rate scheduler with infinite memory, proportional junk rate scheduler with memory M, average junk distance scheduler, are proposed to achieve this model. 
Simulation results show that the three schedulers actually yield this model well.",2005,0, 2729,The role of traffic forecasting in QoS routing - a case study of time-dependent routing,"QoS routing solutions can be classified into two categories, state-dependent and time-dependent, according to their awareness of the future traffic demand in the network. Compared with representative state-dependent routing algorithms, a time-dependent variation of WSP - TDWSP - is proposed in this paper to study the role of traffic forecasting in QoS routing, by customizing itself for a range of traffic demands. Our simulation results confirm the feasibility of traffic forecasting in the context of QoS routing, which empowers TDWSP to achieve better routing performance and to overcome QoS routing difficulties, even though completely accurate traffic prediction is not required. The case study involving TDWSP further reveals that even a static forecast can remain effective over a large area in the solvable traffic demand space, if the network topology and the peak traffic value are given. Thus, the role of traffic forecasting in QoS routing becomes more prominent.",2005,0, 2730,Real-time detection and containment of network attacks using QoS regulation,"In this paper, we present a network measurement mechanism that can detect and mitigate attacks and anomalous traffic in real-time using QoS regulation. The detection method rapidly pursues the dynamics of the network on the basis of correlation properties of the network protocols. By observing the proportion occupied by each traffic protocol and correlating it to that of previous states of traffic, it is possible to determine whether the current traffic is behaving normally. When abnormalities are detected, our mechanism allows aggregated resource regulation of each protocol's traffic. The trace-driven results show that the rate-based regulation of traffic characterized by protocol classes is a feasible vehicle for mitigating the impact of network attacks on end servers.",2005,0, 2731,Fault prediction of boilers with fuzzy mathematics and RBF neural network,"How to predict potential faults of a boiler in an efficient and scientific way is very important. A lot of comprehensive research has been done, and promising results have been obtained, especially regarding the application of intelligent software. Still there are a lot of problems to be studied. This paper combines fuzzy mathematics with an RBF neural network in an intuitive and natural way. Thus a new method is proposed for the prediction of the potential faults of a coal-fired boiler. The new method traces the development trend of related operation and state variables. The new method has been tested on a simulation machine. And its predicted results were compared with those of traditional statistical results. It is found that the new method has a good performance.",2005,0, 2732,Multispectral fingerprint biometrics,"A novel fingerprint sensor is described that combines a multispectral imager (MSI) with a conventional optical fingerprint sensor. The goal of this combination is a fingerprint sensor with improved usability and security relative to standard technology. The conventional sensor that was used in this research is a commercially available system based on total internal reflectance (TIR). It was modified to accommodate an MSI sensor in such a way that both MSI and TIR images are able to be collected when a user places his/her finger on the sensor platen.
The MSI data were preprocessed to enhance fingerprint features. Both the preprocessed MSI images and the TIR images were then passed to a commercial fingerprint software package for minutiae detection and matching. A multiperson study was conducted to test the relative performance characteristics of the two types of finger data under typical office conditions. Results demonstrated that the TIR sensor performance was degraded by a large number of poor quality fingerprint images, likely due to a large percentage of samples taken on people with notably dry skin. The corresponding MSI data showed no such degradation and produced significantly better results. A selective combination of both modalities is shown to offer the potential of further performance improvements.",2005,0, 2733,A method for studying partial discharges location and propagation within power transformer winding based on the structural data,"Power transformer inner insulation system is a very critical component. Its degradation may cause the apparatus to fail while in service. On the other hand, experimental experiences prove that partial discharges are a major source of insulation failure in power transformers. If the deterioration of the insulation system caused by PD activity can be detected at an early stage, preventive maintenance measures may be taken. Because of the complex structure of the transformer, accurate PD location is difficult and is one of the challenges power utilities are faced with. This problem comes to be vital in open access systems. In this paper a theory for locating partial discharge and its propagation along the winding is proposed, which is based on structural data of a transformer. The lumped element winding model is constructed. Quasi-static condition is applied and each turn of the winding is considered as a segment. Then an algorithm is developed to use the constructed matrices for PD location. A software package in Visual Basic environment has been developed. This paper introduces the background theory and utilized techniques.",2005,0, 2734,TCAM-based distributed parallel packet classification algorithm with range-matching solution,"Packet classification (PC) has been a critical data path function for many emerging networking applications. An interesting approach is the use of TCAM to achieve deterministic, high speed PC. However, apart from high cost and power consumption, due to slow growing clock rate for memory technology in general, PC based on the traditional single TCAM solution has difficulty keeping up with fast growing line rates. Moreover, the TCAM storage efficiency is largely affected by the need to support rules with ranges, or range matching. In this paper, a distributed TCAM scheme that exploits chip-level-parallelism is proposed to greatly improve the PC throughput. This scheme seamlessly integrates with a range encoding scheme, which not only solves the range matching problem but also ensures a balanced high throughput performance. Using commercially available TCAM chips, the proposed scheme achieves PC performance of more than 100 million packets per second (Mpps), matching OC768 (40 Gbps) line rate.",2005,0, 2735,Efficient algorithms and software for detection of full-length LTR retrotransposons,"LTR retrotransposons constitute one of the most abundant classes of repetitive elements in eukaryotic genomes. In this paper, we present a new algorithm for detection of full-length LTR retrotransposons in genomic sequences.
The algorithm identifies regions in a genomic sequence that show structural characteristics of LTR retrotransposons. Three key components distinguish our algorithm from that of current software: (i) a novel method that preprocesses the entire genomic sequence in linear time and produces high quality pairs of LTR candidates in running time that is constant per pair, (ii) a thorough alignment-based evaluation of candidate pairs to ensure high quality prediction, and (iii) a robust parameter set encompassing both structural constraints and quality controls providing users with a high degree of flexibility. Validation of both our serial and parallel implementations of the algorithm against the yeast genome indicates both superior quality and performance results when compared to existing software.",2005,0, 2736,Towards Autonomic Virtual Applications in the In-VIGO System,"Grid environments enable users to share nondedicated resources that lack performance guarantees. This paper describes the design of application-centric middleware components to automatically recover from failures and dynamically adapt to grid environments with changing resource availabilities, improving fault-tolerance and performance. The key components of the application-centric approach are a global per-application execution history and an autonomic component that tracks the performance of a job on a grid resource against predictions based on the application execution history, to guide rescheduling decisions. Performance models of unmodified applications built using their execution history are used to predict failure as well as poor performance. A prototype of the proposed approach, an autonomic virtual application manager (AVAM), has been implemented in the context of the In-VIGO grid environment and its effectiveness has been evaluated for applications that generate CPU-intensive jobs with relatively short execution times (ranging from tens of seconds to less than an hour) on resources with highly variable loads - a workload generated by typical educational usage scenarios of In-VIGO-like grid environments. A memory-based learning algorithm is used to build the performance models for CPU-intensive applications that are used to predict the need for rescheduling. Results show that In-VIGO jobs managed by the AVAM consistently meet their execution deadlines under varying load conditions and gracefully recover from unexpected failures",2005,0, 2737,Combining Visualization and Statistical Analysis to Improve Operator Confidence and Efficiency for Failure Detection and Localization,"Web applications suffer from software and configuration faults that lower their availability. Recovering from failure is dominated by the time interval between when these faults appear and when they are detected by site operators. We introduce a set of tools that augment the ability of operators to perceive the presence of failure: an automatic anomaly detector scours HTTP access logs to find changes in user behavior that are indicative of site failures, and a visualizer helps operators rapidly detect and diagnose problems. Visualization addresses a key question of autonomic computing of how to win operators' confidence so that new tools will be embraced. Evaluation performed using HTTP logs from Ebates.com demonstrates that these tools can enhance the detection of failure as well as shorten detection time.
Our approach is application-generic and can be applied to any Web application without the need for instrumentation",2005,0, 2738,Mining Logs Files for Computing System Management,"With advancement in science and technology, computing systems become increasingly more difficult to monitor, manage and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition process to translate domain knowledge into operating rules and policies. This has been experienced as a cumbersome, labor intensive, and error prone process. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing systems. A popular approach to system management is based on analyzing system log files. However, several new aspects of the system log data have been less emphasized in existing analysis methods and posed several challenges. The aspects include disparate formats and relatively short text messages in data reporting, asynchronous data collection, and temporal characteristics in data representation. First, a typical computing system contains different devices with different software components, possibly from different providers. These various components have multiple ways to report events, conditions, errors and alerts. The heterogeneity and inconsistency of log formats make it difficult to automate problem determination. To perform automated analysis, we need to categorize the text messages with disparate formats into common situations. Second, text messages in the log files are relatively short with a large vocabulary size. Third, each text message usually contains a timestamp. The temporal characteristics provide additional context information of the messages and can be used to facilitate data analysis. In this paper, we apply text mining to automatically categorize the messages into a set of common categories, and propose two approaches of incorporating temporal information to improve the categorization performance",2005,0, 2739,Pattern recognition based tools enabling autonomic computing.,"Fault detection is one of the important constituents of fault tolerance, which in turn defines the dependability of autonomic computing. In presented work several pattern recognition tools were investigated in application to early fault detection. The optimal margin classifier technique was utilized to detect the abnormal behavior of software processes. The comparison with the performance of the quadratic classifiers is reported. The optimal margin classifiers were also implemented to the fault detection in hardware components. The impulse parameter probing technique was introduced to mitigate intermittent and transient fault problems. The pattern recognition framework of analysis of responses to a controlled component perturbation yielded promising results",2005,0, 2740,A 32-bit COTS-based fault-tolerant embedded system,"This paper presents a 32-bit fault-tolerant (FT) embedded system based on commercial off-the-shelf (COTS) processors. This embedded system uses two 32-bit Pentium® processors with master/checker (M/C) configuration and an external watchdog processor (WDP) for implementing a behavioral-based error detection scheme called committed instructions counting (CIC). The experimental evaluation was performed using both power-supply disturbance (PSD) and software-implemented fault injection (SWIFI) methods. 
A total of 9000 faults have been injected into the embedded system to measure the coverage of error detection mechanisms, i.e., the checker processor and the CIC scheme. The results show that the M/C configuration is not enough for this system and the CIC scheme could cover the limitation of the M/C configuration.",2005,0, 2741,Software based in-system memory test for highly available systems,"In this paper we describe a software based in-system memory test that is capable of testing system memory in both offline and online environments. A technique to transparently ""steal"" a chunk of memory from the system for running tests and then inserting it back for normal application's use is proposed. Factors like system memory architecture that needs to be considered while adapting any conventional memory testing algorithm for in-system testing are also discussed. Implementation of the proposed techniques can significantly improve the system's ability to proactively detect and manage functional faults in memory. An extension of the methodology described is expected to be applicable for in-system testing of other system components (like processor) as well.",2005,0, 2742,Comparing high-change modules and modules with the highest measurement values in two large-scale open-source products,"Identifying change-prone modules can enable software developers to take focused preventive actions that can reduce maintenance costs and improve quality. Some researchers observed a correlation between change proneness and structural measures, such as size, coupling, cohesion, and inheritance measures. However, the modules with the highest measurement values were not found to be the most troublesome modules by some of our colleagues in industry, which was confirmed by our previous study of six large-scale industrial products. To obtain additional evidence, we identified and compared high-change modules and modules with the highest measurement values in two large-scale open-source products, Mozilla and OpenOffice, and we characterized the relationship between them. Contrary to common intuition, we found through formal hypothesis testing that the top modules in change-count rankings and the modules with the highest measurement values were different. In addition, we observed that high-change modules had fairly high places in measurement rankings, but not the highest places. The accumulated findings from these two open-source products, together with our previous similar findings for six closed-source products, should provide practitioners with additional guidance in identifying the change-prone modules.",2005,0, 2743,The impact of institutional forces on software metrics programs,"Software metrics programs are an important part of a software organization's productivity and quality initiatives as precursors to process-based improvement programs. Like other innovative practices, the implementation of metrics programs is prone to influences from the greater institutional environment the organization exists in. In this paper, we study the influence of both external and internal institutional forces on the assimilation of metrics programs in software organizations. We use previous case-based research in software metrics programs as well as prior work in institutional theory in proposing a model of metrics implementation. The theoretical model is tested on data collected through a survey from 214 metrics managers in defense-related and commercial software organizations. 
Our results show that external institutions, such as customers and competitors, and internal institutions, such as managers, directly influence the extent to which organizations change their internal work-processes around metrics programs. Additionally, the adaptation of work-processes leads to increased use of metrics programs in decision-making within the organization. Our research informs managers about the importance of management support and institutions in metrics programs adaptation. In addition, managers may note that the continued use of metrics information in decision-making is contingent on adapting the organization's work-processes around the metrics program. Without these investments in metrics program adaptation, the true business value in implementing metrics and software process improvement is not realized.",2005,0, 2744,A game model for selection of purchasing bids in consideration of fuzzy values,"A number of efficiency-based vendor selection and negotiation models have been developed to deal with multiple attributes including price, quality and delivery performance, which are treated as important bid attributes. But some alternative vendors' preferences for attributes are difficult to analyze quantitatively, such as trust, reliability, and courtesy of the vendor, which are considered to be crucial issues of recent research in vendor evaluation. This paper proposes a buyer-seller game model that has distinct advantages over existing methods for bid selection and negotiation. The fuzzy indexes are used to evaluate those attributes which are difficult to analyze quantitatively. We propose a new method that expands Talluri (2002) and Joe Zhu (2004)'s method and allows the elements composing the problems to be given by fuzzy numerical values. An important outcome of assessing relative efficiencies within a group of decision making units (DMUs) in fuzzy data envelopment analysis is a set of virtual multipliers or weights accorded to each (input or output) factor taken into account. In this paper, by assessing upper bounds on factor weights and compacting the resulting intervals, a CSW is determined. Since the efficiencies resulting from the proposed CSW are fuzzy numbers rather than crisp values, it is more informative for the decision maker.",2005,0, 2745,Optimized distributed delivery of continuous-media documents over unreliable communication links,"Video-on-demand (VoD) applications place very high requirements on the delivery medium. High-quality services should provide for a timely delivery of the data-stream to the clients plus a minimum of playback disturbances. The major contributions of this paper are that it proposes a multiserver, multi-installment (MSMI) solution approach (sending the document in several installments from each server) to the delivery problem and achieves a minimization of the client waiting time, also referred to as the access time (AT) or start-up latency in the literature. By using multiple spatially distributed servers, we are able to exploit slow connections that would otherwise prevent the deployment of video-on-demand-like services, to offer such services in an optimal manner. Additionally, the delivery and playback schedule that is computed by our approach is loss-aware in the sense that it is flexible enough to accommodate packet losses without interrupts. The mathematical framework presented covers both computation and optimization problems associated with the delivery schedule, offering a complete set of guidelines for designing MSMI VoD services.
The optimizations presented include the ordering of the servers and determining the number of installments based on the packet-loss probabilities of the communication links. Our analysis guarantees the validity of a delivery schedule recommended by the system by providing a percentage of confidence for an uninterrupted playback at the client site. This, in a way, quantifies the degree of quality of service rendered by the system and the MSMI strategy proposed. The paper is concluded by a rigorous simulation study that showcases the substantial advantages of the proposed approach and explores how optimization of the schedule parameters affects performance.",2005,0, 2746,P-RnaPredict-a parallel evolutionary algorithm for RNA folding: effects of pseudorandom number quality,"This paper presents a fully parallel version of RnaPredict, a genetic algorithm (GA) for RNA secondary structure prediction. The research presented here builds on previous work and examines the impact of three different pseudorandom number generators (PRNGs) on the GA's performance. The three generators tested are the C standard library PRNG RAND, a parallelized multiplicative congruential generator (MCG), and a parallelized Mersenne Twister (MT). A fully parallel version of RnaPredict using the Message Passing Interface (MPI) was implemented on a 128-node Beowulf cluster. The PRNG comparison tests were performed with known structures whose sequences are 118, 122, 468, 543, and 556 nucleotides in length. The effects of the PRNGs are investigated and the predicted structures are compared to known structures. Results indicate that P-RnaPredict demonstrated good prediction accuracy, particularly so for shorter sequences.",2005,0, 2747,Application-specific worst case corners using response surfaces and statistical models,"Integrated circuits (ICs) must be robust to manufacturing variations. Circuit simulation at a set of worst case corners is a computationally efficient method for verifying the robustness of a design. This paper presents a new statistical methodology to determine the worst case corners for a set of circuit performances. The proposed methodology first estimates response surfaces for circuit performances as quadratic functions of the process parameters with known statistical distributions. These response surfaces are then used to extract the worst case corners in the process parameter space as the points where the circuit performances are at their minimum/maximum values corresponding to a specified tolerance level. Corners in the process parameter space close to each other are clustered to reduce their number, which reduces the number of simulations required for design verification. The novel concept of a relaxation coefficient to ensure that the corners capture the minimum/maximum values of all the circuit performances at the desired tolerance level is also introduced. The corners are realistic since they are derived from the multivariate statistical distribution of the process parameters at the desired tolerance level. The methodology is demonstrated with examples showing extraction of corners from digital standard cells and also the corners for analog/radio frequency (RF) blocks found in typical communication ICs.",2005,0, 2748,A study on the application of digital cameras to derive audio information from the TVs on trains,"Digital camera can help people take videos anytime with ease. For this reason, application of digital video camera's technology is the hottest topic right now. 
Due to the fact that the TV on the train is in a public place, it can't give audio information. We were trying to find a way to get the audio information by second time (ST) detection video only. ""Second time"" (ST) means the user uses their digital camera to take video from the original video. We propose a new way to use digital watermarking by embedding some pointer symbols into the source video, so passengers can use this kind of video to detect high quality audio in ST situations. In order to show other possible applications for this process in the future, in this study we also use VB programming to design simulation software ""DDAS"" (directly detection audio system). The software was designed to handle uncompressed AVI files. We tested this software using ST video. In the experiments, this software successfully detected audio information and audio file types covering AVI, MPEG, MIDI and WAV. According to the questionnaires from the users, the DDAS system's output audio file was given a 3.8 MOS level, which shows the system has sufficient audio embedding ability. From the questionnaires, we also found that the ST video's screen size affects the detected audio's MOS quality. We found the best MOS quality was located in the 1:1 to 1:4/3 screen size range. We also use this kind of technology for study and we found that using multimedia improves the students' grades by about 6 grades on average.",2005,0, 2749,How to produce better quality test software,"LabWindows/CVI is a popular C compiler for writing automated test equipment (ATE) test software. Since C was designed as a portable assembly language, it uses many low-level machine operations that tend to be error prone, even for the professional programmer. Test equipment engineers also tend to underestimate the effort required to write high-quality software. Quality software has very few defects and is easy to write and maintain. The examples used in this article are for the C programming language, but the principles also apply to most other programming languages. Most of the tools mentioned work with both C and C++ software.",2005,0, 2750,Software reliability prediction and analysis during operational use,"In reality, the fault detection (or correction) phenomenon and software reliability estimation in the operational phase are different from that in the testing phase. The fault removal continues at a slower rate during the operational phase. In this paper, we will investigate some techniques for software reliability prediction and measurement in the operational phase. We first review how some software reliability growth models (SRGMs) based on non-homogeneous Poisson processes (NHPPs) can be readily derived based on a unified theory for NHPP models. Under this general framework, we can not only verify some conventional SRGMs but also derive some new SRGMs that can be used for software reliability measurement in the operational phase. That is, based on the unified theory, we can incorporate the concept of multiple changepoints into software reliability modeling. We can formularize and simulate the fault detection process in the operational phase. Some numerical illustrations based on real software failure data are also presented.
The experimental results show that the proposed models can easily reflect the possible changes of fault detection process and offer quantitative analysis on software failure behavior in field operation.",2005,0, 2751,Analyzing gene expression time-courses,"Measuring gene expression over time can provide important insights into basic cellular processes. Identifying groups of genes with similar expression time-courses is a crucial first step in the analysis. As biologically relevant groups frequently overlap, due to genes having several distinct roles in those cellular processes, this is a difficult problem for classical clustering methods. We use a mixture model to circumvent this principal problem, with hidden Markov models (HMMs) as effective and flexible components. We show that the ensuing estimation problem can be addressed with additional labeled data partially supervised learning of mixtures - through a modification of the expectation-maximization (EM) algorithm. Good starting points for the mixture estimation are obtained through a modification to Bayesian model merging, which allows us to learn a collection of initial HMMs. We infer groups from mixtures with a simple information-theoretic decoding heuristic, which quantifies the level of ambiguity in group assignment. The effectiveness is shown with high-quality annotation data. As the HMMs we propose capture asynchronous behavior by design, the groups we find are also asynchronous. Synchronous subgroups are obtained from a novel algorithm based on Viterbi paths. We show the suitability of our HMM mixture approach on biological and simulated data and through the favorable comparison with previous approaches. A software implementing the method is freely available under the GPL from http://ghmm.org/gql.",2005,0, 2752,Development of ANN-based virtual fault detector for Wheatstone bridge-oriented transducers,This paper reports on the development of a new artificial neural network-based virtual fault detector (VFD) for detection and identification of faults in DAS-connected Wheatstone bridge-oriented transducers of a computer-based measurement system. Experimental results show that the implemented VFD is convenient for fusing intelligence into such systems in a user-interactive manner. The performance of the proposed VFD is examined experimentally to detect seven frequently occurring faults automatically in such transducers. The presented technique used an artificial neural network-based two-class pattern classification network with hard-limit perceptrons to fulfill the function of an efficient residual generator component of the proposed VFD. The proposed soft residual generator detects and identifies various transducer faults in collaboration with a virtual instrument software-based inbuilt algorithm. An example application is also presented to demonstrate the use of implemented VFD practically for detecting and diagnosing faults in a pressure transducer having semiconductor strain gauges connected in a Wheatstone bridge configuration. The results obtained in the example application with this strategy are promising.,2005,0, 2753,"Hydratools, a MATLAB® based data processing package for Sontek Hydra data","The U.S. Geological Survey (USGS) has developed a set of MATLAB tools to process and convert data collected by Sontek Hydra instruments to netCDF, which is a format used by the USGS to process and archive oceanographic time-series data. The USGS makes high-resolution current measurements within 1.5 meters of the bottom. 
These data are used in combination with other instrument data from sediment transport studies to develop sediment transport models. Instrument manufacturers provide software which outputs unique binary data formats. Multiple data formats are cumbersome. The USGS solution is to translate data streams into a common data format: netCDF. The Hydratools toolbox is written to create netCDF format files following EPIC conventions, complete with embedded metadata. Data are accepted from both the ADV and the PCADP. The toolbox will detect and remove bad data, substitute other sources of heading and tilt measurements if necessary, apply ambiguity corrections, calculate statistics, return information about data quality, and organize metadata. Standardized processing and archiving makes these data more easily and routinely accessible locally and over the Internet. In addition, documentation of the techniques used in the toolbox provides a baseline reference for others utilizing the data.",2005,0, 2754,A comparative study on software reuse metrics and economic models from a traceability perspective,"A fundamental task when employing software reuse is evaluating its impacts by measuring the relation of reused and developed software, the cost for obtaining reuse and the cost avoided by reusing software during development and maintenance. Different reuse related metrics exist in the literature, varying from strictly code-based metrics, aiming to measure the amount of code reused in a product, to more elaborate cost-based metrics and models, aiming to measure the costs involved in reuse programs and to evaluate the impacts of reuse in software development. Although reuse is commonly claimed to benefit maintenance, the traceability problem is still neglected on the reuse metrics arena, despite its great impacts on software maintenance. Reuse metrics may be used as important support tools for dealing with the traceability between reused assets and their clients. The goal of this work is to evaluate the current state of the art on the reuse metrics area with special emphasis on code-based metrics, building on previous surveys with further analysis and considerations on the applicability of such metrics to reuse traceability.",2005,0, 2755,Empirical case studies in attribute noise detection,"The problem of determining the noisiest attribute(s) from a set of domain-specific attributes is of practical importance to domain experts and the data mining community. Data noise is generally of two types: attribute noise and mislabeling errors (class noise). For a given domain-specific dataset, attributes that contain a significant amount of noise can have a detrimental impact on the success of a data mining initiative, e.g., reducing the predictive ability of a classifier in a supervised learning task. Techniques that provide information about the noise quality of an attribute are useful tools for a data mining practitioner when performing analysis on a dataset or scrutinizing the data collection processes. Our technique for detecting noisy attributes uses an algorithm that we recently proposed for the detection of instances with attribute noise. This paper presents case studies that confirm our recent work done on detecting noisy attributes and further validates that our technique is indeed able to detect attributes that contain noise.",2005,0, 2756,2005 IEEE International Conference on Automation Science and Engineering (CASE) (IEEE Cat. 
No.05EX1181),The following topics were dealt with: laboratory automation; yeast cell automated lifetime analysis; rapid prototyping systems; production process and its quality measurement; production management; fish breeding optimization; patient treatment; control theory; robots; dynamic shipment planning; machining; collaborative diagnostics; industrial process; manufacturing systems; multi-agent systems; supply chain management; impulse resistance measurement instrument; and production fault detection.,2005,0, 2757,Survivability of a distributed multi-agent application - a performance control perspective,"Distributed multi-agent systems (DMAS) such as supply chains functioning in highly dynamic environments need to achieve maximum overall utility during operation. The utility from maintaining performance is an important component of their survivability. This utility is often met by identifying trade-offs between quality of service and performance. To adaptively choose the operational settings for better utility, we propose an autonomous and scalable queueing theory based methodology to control the performance of a hierarchical network of distributed agents. By formulating the MAS as an open queueing network with multiple classes of traffic we evaluate the performance and subsequently the utility, from which we identify the control alternative for a localized, multi-tier zone. When the problem scales, another larger queueing network could be composed using zones as building blocks. This method advocates the systematic specification of the DMAS's attributes to aid realtime translation of the DMAS into a queueing network. We prototype our framework in Cougaar and verify our results.",2005,0, 2758,Measuring the functionality of online stores,"This paper shows the need of a framework which can be used to measure the functionality delivered by electronic commerce (e-commerce) systems. Such a framework would be helpful in areas such as cost prediction, effort estimation, and so on. The paper goes on to propose such a framework, based on the established methods of function points and object points.",2005,0, 2759,Business performance management system for CRM and sales execution,"In 2004 the IBM Telesales organization launched a new customer segmentation process to improve profits, revenue growth and customer satisfaction. The challenges were to automatically monitor customer segment status to ensure results are in line with segment targets, and to automatically generate high-quality predictive analytical models to improve customer segmentation rules and management over time. This paper describes a software solution that combines business performance management with data mining techniques to provide a powerful combination of performance monitoring and proactive customer management in support of the new telesales business processes.",2005,0, 2760,Scheduling tasks with Markov-chain based constraints,"Markov-chain (MC) based constraints have been shown to be an effective QoS measure for a class of real-time systems, particularly those arising from control applications. Scheduling tasks with MC constraints introduces new challenges because these constraints require not only specific task finishing patterns but also certain task completion probability. Multiple tasks with different MC constraints competing for the same resource further complicates the problem. In this paper, we study the problem of scheduling multiple tasks with different MC constraints. 
We present two scheduling approaches which (i) lead to improvements in ""overall"" system performance, and (ii) allow the system to achieve graceful degradation as system load increases. The two scheduling approaches differ in their complexities and performances. We have implemented our scheduling algorithms in the QNX real-time operating system environment and used the setup for several realistic control tasks. Data collected from the experiments as well as simulation all show that our new scheduling algorithms outperform algorithms designed for window-based constraints as well as previous algorithms designed for handling MC constraints.",2005,0, 2761,A BIST approach for testing FPGAs using JBITS,"This paper explores the built-in self test (BIST) concepts to test the configurable logic blocks (CLBs) of static RAM (SRAM) based FPGAs using Java Bits (JBits). The proposed technique detects and diagnoses single and multiple stuck-at faults in the CLBs while significantly reducing the time taken to perform the testing. Previous BIST approaches for testing FPGAs use traditional CAD tools which lack control over configurable resources, resulting in the design being placed on the hardware in a different way than intended by the designer. In this paper, the design of the logic BIST architecture is done using JBits 2.8 software for Xilinx Virtex family of devices. The test requires seven configurations and two test sessions to test the CLBs. The time taken to generate the entire BIST logic in both the sessions is approximately 77 seconds as compared with several minutes to hours in traditional design flow.",2005,0, 2762,11th IEEE International Software Metrics Symposium - Cover,The following topics are dealt with: software metrics; cost estimation; software architectures; measurement-based decision making; quality assurance; product assessment; process improvement; education; open source software projects; data mining; software repositories; defect analysis and testing; and function points,2005,0, 2763,"Software, performance and resource utilisation metrics for context-aware mobile applications","As mobile applications become more pervasive, the need for assessing their quality, particularly in terms of efficiency (i.e., performance and resource utilisation), increases. Although there is a rich body of research and practice in developing metrics for traditional software, there has been little study on how these relate to mobile context-aware applications. Therefore, this paper defines and empirically evaluates metrics to capture software, resource utilisation and performance attributes, for the purpose of modelling their impact in context-aware mobile applications. To begin, a critical analysis of the problem domain identifies a number of specific software, resource utilisation and performance attributes. For each attribute, a concrete metric and technique of measurement is defined. A series of hypotheses are then proposed, and tested empirically using linear correlation analysis. The results support the hypotheses thus demonstrating the impact of software code attributes on the efficiency of mobile applications. As such, a more formal model in the form of mathematical equations is proposed in order to facilitate runtime decisions regarding the efficient placement of mobile objects in a context-aware mobile application framework.
Finally, a preliminary empirical evaluation of the model is carried out using a typical application and an existing mobile application framework",2005,0, 2764,Assessing the impact of coupling on the understandability and modifiability of OCL expressions within UML/OCL combined models,"Diagram-based UML notation is limited in its expressiveness thus producing a model that would be severely underspecified. The flaws in the limitation of the UML diagrams are solved by specifying UML/OCL combined models, OCL being an essential add-on to the UML diagrams. Aware of the importance of building precise models, the main goal of this paper is to carefully describe a family of experiments we have undertaken to ascertain whether any relationship exists between object coupling (defined through metrics related to navigations and collection operations) and two maintainability sub-characteristics: understandability and modifiability of OCL expressions. If such a relationship exists, we will have found early indicators of the understandability and modifiability of OCL expressions. Even though the results obtained show empirical evidence that such a relationship exists, they must be considered as preliminaries. Further validation is needed to be performed to strengthen the conclusions and external validity",2005,0, 2765,An industrial case study of implementing and validating defect classification for process improvement and quality management,"Defect measurement plays a crucial role when assessing quality assurance processes such as inspections and testing. To systematically combine these processes in the context of an integrated quality assurance strategy, measurement must provide empirical evidence on how effective these processes are and which types of defects are detected by which quality assurance process. Typically, defect classification schemes, such as ODC or the Hewlett-Packard scheme, are used to measure defects for this purpose. However, we found it difficult to transfer existing schemes to an embedded software context, where specific document- and defect types have to be considered. This paper presents an approach to define, introduce, and validate a customized defect classification scheme that considers the specifics of an industrial environment. The core of the approach is to combine the software engineering know-how of measurement experts and the domain know-how of developers. In addition to the approach, we present the results and experiences of using the approach in an industrial setting. The results indicate that our approach results in a defect classification scheme that allows classifying defects with good reliability, that allows identifying process improvement actions, and that can serve as a baseline for evaluating the impact of process improvements",2005,0, 2766,Measurement-driven dashboards enable leading indicators for requirements and design of large-scale systems,"Measurement-driven dashboards provide a unifying mechanism for understanding, evaluating, and predicting the development, management, and economics of large-scale systems and processes. Dashboards enable interactive graphical displays of complex information and support flexible analytic capabilities for user customizability and extensibility. Dashboards commonly include software requirements and design metrics because they provide leading indicators for project size, growth, and stability. 
This paper focuses on dashboards that have been used on actual large-scale projects as well as example empirical relationships revealed by the dashboards. The empirical results focus on leading indicators for requirements and design of large-scale systems. In the first set of 14 projects focusing on requirements metrics, the ratio of software requirements to source-lines-of code averaged 1:46. Projects that far exceeded the 1:46 requirements-to-code ratio tended to be more effort-intensive and fault-prone during verification. In the second set of 16 projects focusing on design metrics, the components in the top quartile of the number of component internal states had 6.2 times more faults on average than did the components in the bottom quartile, after normalization by size. The components in the top quartile of the number of component interactions had 4.3 times more faults on average than did the components in the bottom quartile, after normalization by size. When the number of component internal states was in the bottom quartile, the component fault-proneness was low even when the number of component interactions was in the upper quartiles, regardless of size normalization. Measurement-driven dashboards reveal insights that increase visibility into large-scale systems and provide feedback to organizations and projects",2005,0, 2767,Experiences from conducting semi-structured interviews in empirical software engineering research,"Many phenomena related to software development are qualitative in nature. Relevant measures of such phenomena are often collected using semi-structured interviews. Such interviews involve high costs, and the quality of the collected data is related to how the interviews are conducted. Careful planning and conducting of the interviews are therefore necessary, and experiences from interview studies in software engineering should consequently be collected and analyzed to provide advice to other researchers. We have brought together experiences from 12 software engineering studies, in which a total of 280 interviews were conducted. Four areas were particularly challenging when planning and conducting these interviews; estimating the necessary effort, ensuring that the interviewer had the needed skills, ensuring good interaction between interviewer and interviewees, and using the appropriate tools and project artifacts. The paper gives advice on how to handle these areas and suggests what information about the interviews should be included when reporting studies where interviews have been used in data collection. Knowledge from other disciplines is included. By sharing experience, knowledge about the accomplishments of software engineering interviews is increased and hence, measures of high quality can be achieved",2005,0, 2768,Ensemble imputation methods for missing software engineering data,"One primary concern of software engineering is prediction accuracy. We use datasets to build and validate prediction systems of software development effort, for example. However it is not uncommon for datasets to contain missing values. When using machine learning techniques to build such prediction systems, handling of incomplete data is an important issue for classifier learning since missing values in either training or test set or in both sets can affect prediction accuracy. Many works in machine learning and statistics have shown that combining (ensemble) individual classifiers is an effective technique for improving accuracy of classification. 
The ensemble strategy is investigated in the context of incomplete data and software prediction. An ensemble Bayesian multiple imputation and nearest neighbour single imputation method, BAMINNSI, is proposed that constructs ensembles based on two imputation methods. Strong results on two benchmark industrial datasets using decision trees support the method",2005,0, 2769,Understanding of estimation accuracy in software development projects,"Over the past decades large investments in software engineering research and development have been made by academia and the software industry. These efforts have produced considerably insight in the complex domain of software development, and have paid off in the shape of the improved tools, languages, methodologies and techniques. However, a recent review of estimation survey documents that less progress has been made in the area of estimation performance. This is a major concern for the software industry, as lack of estimation performance often causes budget overruns, delays, lost contracts or poor quality software. Because of these rather dramatic consequences, there is a high demand for more research on the topic of effort estimation in software development projects. That demand motivated this PhD. The thesis is written at the Estimation Group at Simula Research Laboratory in Oslo, Norway. The work is supervised by professor Magne Jorgensen, and is part of the SPIKE (Software Process Improvement based on Knowledge and Experience) project which is funded by the Norwegian Research Council",2005,0, 2770,A seamless handoff approach of mobile IP based on dual-link,"Mobile IP protocol solves the problem of mobility support for hosts connected to Internet anytime and anywhere, and makes the mobility transparent to the higher layer applications. But the handoff latency in Mobile IP affects the quality of communication. This paper proposes a seamless handoff approach based on dual-link and link layer trigger, using information from link layer to predict and trigger the dual-link handoff. During the handoff, MN keeps one link connected with the current network while it handoff another link to the new network. In this paper we develop a model system based on this approach and provide experiments to evaluate the performance of this approach. The experimental results show that this approach can ensure the seamless handoff of Mobile IP.",2005,0, 2771,Iterative Metamorphic Testing,"An enhanced version of metamorphic testing, namely n-iterative metamorphic testing, is proposed to systematically exploit more information out of metamorphic tests by applying metamorphic relations in a chain style. A contrastive case study, conducted within an integrated testing environment MTest, shows that n-iterative metamorphic testing exceeds metamorphic testing and special case testing in terms of their fault detection capabilities. Another advantage of n-iterative metamorphic testing is its high efficiency in test case generation.",2005,0, 2772,Software Test Selection Patterns and Elusive Bugs,"Traditional white and black box testing methods are effective in revealing many kinds of defects, but the more elusive bugs slip past them. Model-based testing incorporates additional application concepts in the selection of tests, which may provide more refined bug detection, but does not go far enough. Test selection patterns identify defect-oriented contexts in a program. They also identify suggested tests for risks associated with a specified context. 
A context and its risks form a kind of conceptual trap designed to corner a bug. The suggested tests will find the bug if it has been caught in the trap.",2005,0, 2773,Power transmission control using distributed max-flow,Existing maximum flow algorithms use one processor for all calculations or one processor per vertex in a graph to calculate the maximum possible flow through a graph's vertices. This is not suitable for practical implementation. We extend the max-flow work of Goldberg and Tarjan to a distributed algorithm to calculate maximum flow where the number of processors is less than the number of vertices in a graph. Our algorithm is applied to maximizing electrical flow within a power network where the power grid is modeled as a graph. Error detection measures are included to detect problems in a simulated power network. We show that our algorithm is successful in executing quickly enough to prevent catastrophic power outages.,2005,0, 2774,Software reliability growth model considering testing profile and operation profile,"The testing and operation environments may be essentially different, thus the fault detection rate (FDR) of the testing phase is different from that of the operation phase. In this paper, based on the representative model, G-O model, of nonhomogeneous Poisson process (NHPP), a transformation is performed from the FDR of the testing phase to that of the operation phase considering the profile differences of the two phases, and then a software reliability growth model (SRGM) called TO-SRGM describing the differences of the FDR between the testing phase and the operation phase is proposed. Finally, the parameters of the model are estimated using the least squares estimate (LSE) based on normalized failure data. Experiment results show that the goodness-of-fit of the TO-SRGM is better than that of the G-O model and the PZ-SRGM on the normalized failure data set.",2005,0, 2775,A comparison of network level fault injection with code insertion,"This paper describes our research into the application of fault injection to Simple Object Access Protocol (SOAP) based service-oriented architectures (SOA). We show that our previously devised WS-FIT method, when combined with parameter perturbation, gives comparable performance to code insertion techniques with the benefit that it is less invasive. Finally we demonstrate that this technique can be used to complement certification testing of a production system by strategic instrumentation of selected servers in a system.",2005,0, 2776,The conceptual cohesion of classes,"While often defined in informal ways, software cohesion reflects important properties of modules in a software system. Cohesion measurement has been used for quality assessment, fault proneness prediction, software modularization, etc. Existing approaches to cohesion measurement in object-oriented software are largely based on the structural information of the source code, such as attribute references in methods. These measures reflect particular interpretations of cohesion and try to capture different aspects of cohesion and no single cohesion metric or suite is accepted as standard measurement for cohesion. The paper proposes a new set of measures for the cohesion of individual classes within an OO software system, based on the analysis of the semantic information embedded in the source code, such as comments and identifiers. A case study on open source software is presented, which compares the new measures with an extensive set of existing metrics.
The differences and similarities among the approaches and results are discussed and analyzed.",2005,0, 2777,The top ten list: dynamic fault prediction,"To remain competitive in the fast-paced world of software development, managers must optimize the usage of their limited resources to deliver quality products on time and within budget. In this paper, we present an approach (the top ten list) which highlights to managers the ten subsystems (directories) most susceptible to having a fault. Managers can focus testing resources on the subsystems suggested by the list. The list is updated dynamically as the development of the system progresses. We present heuristics to create the top ten list and develop techniques to measure the performance of these heuristics. To validate our work, we apply our presented approach to six large open source projects (three operating systems: NetBSD, FreeBSD, OpenBSD; a window manager: KDE; an office productivity suite: KOffice; and a database management system: Postgres). Furthermore, we examine the benefits of increasing the size of the top ten list and study its performance.",2005,0, 2778,A controlled experiment assessing test case prioritization techniques via mutation faults,"Regression testing is an important part of software maintenance, but it can also be very expensive. To reduce this expense, software testers may prioritize their test cases so that those that are more important are run earlier in the regression testing process. Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited to hand-seeded faults, primarily due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults. We have therefore designed and performed a controlled experiment to assess the ability of prioritization techniques to improve the rate of fault detection techniques, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. We also compare our results to those collected earlier with respect to the relationship between hand-seeded faults and mutation faults, and the implications this has for researchers performing empirical studies of prioritization.",2005,0, 2779,Optimizing test to reduce maintenance,"A software package evolves in time through various maintenance release steps whose effectiveness depends mainly on the number of faults left in the modules. Software testing is one of the most demanding and crucial phases to discover and reduce faults. In a real environment, time available to test a software release is a given finite quantity. The purpose of this paper is to identify a criterion to estimate an efficient time repartition among software modules to enhance fault location in the testing phase and to reduce corrective maintenance. The fundamental idea is to relate testing time to predicted risk level of the modules in the release under test. In our previous work we analyzed several kinds of risk prediction factors and their relationship with faults; moreover, we thoroughly investigated the behavior of faults on each module through releases to find significant fault proneness tendencies.
Starting from these two lines of analysis, in this paper we propose a new approach to optimize the use of available testing time in a software release. We tuned and tested our hypotheses in a large industrial environment.",2005,0, 2780,Call stack coverage for test suite reduction,"Test suite reduction is an important test maintenance activity that attempts to reduce the size of a test suite with respect to some criteria. Emerging trends in software development such as component reuse, multi-language implementations, and stringent performance requirements present new challenges for existing reduction techniques that may limit their applicability. A test suite reduction technique that is not affected by these challenges is presented; it is based on dynamically generated language-independent information that can be collected with little run-time overhead. Specifically, test cases from the suite being reduced are executed on the application under test and the call stacks produced during execution are recorded. These call stacks are then used as a coverage requirement in a test suite reduction algorithm. Results of experiments on test suites for the space antenna-steering application show significant reduction in test suite size at the cost of a moderate loss in fault detection effectiveness.",2005,0, 2781,An empirical comparison of test suite reduction techniques for user-session-based testing of Web applications,"Automated cost-effective test strategies are needed to provide reliable, secure, and usable Web applications. As a software maintainer updates an application, test cases must accurately reflect usage to expose faults that users are most likely to encounter. User-session-based testing is an automated approach to enhancing an initial test suite with real user data, enabling additional testing during maintenance as well as adding test data that represents usage as operational profiles evolve. Test suite reduction techniques are critical to the cost effectiveness of user-session-based testing because a key issue is the cost of collecting, analyzing, and replaying the large number of test cases generated from user-session data. We performed an empirical study comparing the test suite size, program coverage, fault detection capability, and costs of three requirements-based reduction techniques and three variations of concept analysis reduction applied to two Web applications. The statistical analysis of our results indicates that concept analysis-based reduction is a cost-effective alternative to requirements-based approaches.",2005,0, 2782,Facilitating the implementation and evolution of business rules,"Many software systems implement, amongst other things, a collection of business rules. However, the process of evolving the business rules associated with a system is both time-consuming and error-prone. In this paper, we propose a novel approach to facilitating business rule evolution through capturing information to assist with the evolution of rules at the point of implementation. We analyse the process of rule evolution, in order to determine the information that must be captured. Our approach allows programmers to implement rules by embedding them into application programs (giving the required performance and genericity), while still easing the problems of evolution.",2005,0, 2783,A software reliability growth model from testing to operation,"This paper presents a SRGM (software reliability growth model) from testing to operation based on NHPP (nonhomogeneous Poisson process).
Although a few research projects have been devoted to the differences between the testing environment and the operational environment, consideration of the variation of environmental influence factors along testing time in the existing models is limited. The model in this paper is the first scheme of a few NHPP models which take environmental factors derived from actual failure data as a function of testing time. FDR (fault detection rate) is usually used to measure the effectiveness of fault detection by test techniques and test cases. A bell-shaped FDR function is proposed which integrates both environmental factors and inherent FDR per fault. A NHPP SRGM from testing to operation that incorporates environmental factors called TOE-SRGM is built which integrates the FDR of the testing phase and the proposed FDR function of the operational phase. TOE-SRGM is evaluated using a set of software failure data. The results show that TOE-SRGM fits the failure data better than the G-O model and the PZ-SRGM.",2005,0, 2784,Ontology-based structured cosine similarity in document summarization: with applications to mobile audio-based knowledge management,"Development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most of the text-clustering methods were grounded in the term-based measurement of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling on document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (of no rich document features) attained from the wireless experience oral sharing conducted by mobile workforce of enterprises, fulfilling audio-based knowledge management. In other words, this problem aims to facilitate knowledge acquisition and sharing by speech. The evaluations also show fairly promising results on our method of structured cosine similarity.",2005,0, 2785,Applications of Global Positioning System (GPS) in geodynamics: with three examples from Turkey,"Global Positioning System (GPS) has been a very useful tool for the last two decades in the area of geodynamics because of the validation of the GPS results by the Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) observations. The modest budget requirement and the high accuracy relative positioning availability of GPS increased its use in the determination of crustal and/or regional deformations. Since the civilian use of GPS began in 1980, the development of receiver and antenna technology with easy-to-use software packages has reached a well-known state, which may be regarded as a revolution in the Earth Sciences among other application fields. Analysis of a GPS network can also give unknown information about the fault lines that cannot be seen from the ground surface. Having information about the strain accumulation along the fault line may allow us to evaluate future probabilities of regional earthquake hazards and develop earthquake scenarios for specific faults. In this study, the use of GPS in geodynamical studies will be outlined through the instrumentation, the measurements, and the methods utilized.
The preliminary results of three projects, sponsored by the Scientific & Technical Research Council of Turkey (TUBITAK) and Istanbul Technical University (ITU), which have been carried out in Turkey using GPS, will be summarized. The projects are mainly aimed at determining the movements along the fault zones. Two of the projects have been implemented along the North Anatolian Fault Zone (NAFZ), one is in the Mid-Anatolia region, and the other is in the Western Marmara region. The third project has been carried out in the Fethiye-Burdur region. The collected GPS data were processed by the GAMIT/GLOBK software. The results are represented as velocity vectors obtained using the yearly combinations of the daily measured GPS data.",2005,0, 2786,Analysis of a method for improving video quality in the Internet with inclusion of redundant essential video data,"This paper presents a variant of a method for combating the burst-loss problem of multicast video packets. This variant uses redundant essential video data to offer biased protection towards all critical video information from loss, especially from contiguous loss when a burst loss of multiple consecutive packets occurs. Simulation results indicate that inclusion of redundant critical video data improves performance of our previously proposed method for solving the burst-loss problem. Furthermore, probability models, created for evaluating the effectiveness of the proposed method and its variant, give results which are consistent with those obtained from simulations.",2005,0, 2787,Best ANN structures for fault location in single- and double-circuit transmission lines,"The great development in computing power has allowed the implementation of artificial neural networks (ANNs) in the most diverse fields of technology. This paper shows how diverse ANN structures can be applied to the processes of fault classification and fault location in overhead two-terminal transmission lines, with single and double circuit. The existence of a large group of valid ANN structures guarantees the applicability of ANNs in the fault classification and location processes. The selection of the best ANN structures for each process has been carried out by means of a software tool called SARENEUR.",2005,0, 2788,A reflective practice of automated and manual code reviews for a studio project,"In this paper, the target of code review is a project management system (PMS), developed by a studio project in a software engineering master's program, and the focus is on finding defects not only in view of development standards, i.e., design rule and naming rule, but also in view of quality attributes of PMS, i.e., performance and security. From the review results, a few lessons are learned. First, defects which had not been found in the test stage of PMS development could be detected in this code review. These are hidden defects that affect system quality and that are difficult to find in the test. If the defects found in this code review had been fixed before the test stage of PMS development, productivity and quality enhancement of the project would have been improved. Second, manual review takes much longer than an automated one. In this code review, general check items were checked by an automation tool, while project-specific ones were checked by a manual method. If project-specific check items could also be checked by an automation tool, code review and verification work after fixing the defects would be conducted very efficiently.
Reflecting on this idea, an evolution model of code review is studied, which eventually seeks fully automated review as an optimized code review.",2005,0, 2789,Two approaches for the improvement in testability of communication protocols,"Protocols have grown larger and more complex with the advent of computer and communication technologies. As a result, the task of conformance testing of protocol implementations has also become more complex. The study of design for testability (DFT) is a research area in which researchers investigate design principles that will help to overcome the ever-increasing complexity of testing distributed systems. Testability metrics are essential for evaluating and comparing designs. In a previous paper, we introduced a new metric for testability of communication protocols, based on the detection probability of a fault. We demonstrated the usefulness of the metric for identifying faults that are more difficult to detect. In this paper, we present two approaches for improved testing of a protocol implementation once those faults that are difficult to detect are identified.",2005,0, 2790,Performance analysis of random access channel in OFDMA systems,"The random access channel (RACH) in OFDMA systems is an uplink contention-based transport channel that is mainly used for subscriber stations to make a resource request to base stations. In this paper we focus on analyzing the performance of the RACH in OFDMA systems; in particular, the successful transmission probability, correctly detectable probability and throughput of the RACH are analyzed. We also choose an access mechanism with a binary exponential backoff delay procedure similar to that in IEEE 802.11. Based on the mechanism, we derive the delay and the blocking probability of the RACH in OFDMA systems.",2005,0, 2791,Fault tolerant XGFT network on chip for multi processor system on chip circuits,"This paper presents a fault-tolerant eXtended Generalized Fat Tree (XGFT) Network-On-Chip (NOC) implemented with a new fault-diagnosis-and-repair (FDAR) system. The FDAR system is able to locate faults and reconfigure switch nodes in such a way that the network can route packets correctly despite the faults. This paper presents how the FDAR finds the faults and reconfigures the switches. Simulation results are used for showing that faulty XGFTs could also achieve good performance, if the FDAR is used. This is possible if deterministic routing is used in faulty parts of the XGFTs and adaptive Turn-Back (TB) routing is used in faultless parts of the network for ensuring good performance and Quality-of-Service (QoS). The XGFT is also equipped with parity bit checks for detecting bit errors from the packets.",2005,0, 2792,Accelerating molecular dynamics simulations with configurable circuits,"Molecular dynamics (MD) is of central importance to computational chemistry. Here we show that MD can be implemented efficiently on a COTS FPGA board, and that speed-ups from 31× to 88× over a PC implementation can be obtained. Although the amount of speed-up depends on the stability required, 46× can be obtained with virtually no detriment, and the upper end of the range is apparently viable in many cases. We sketch our FPGA implementations and describe the effects of precision on the trade-off between performance and quality of the MD simulation.",2005,0, 2793,Snort offloader: a reconfigurable hardware NIDS filter,Software-based network intrusion detection systems (NIDS) often fail to keep up with high-speed network links.
In this paper an FPGA-based pre-filter is presented that reduces the amount of traffic sent to a software-based NIDS for inspection. Simulations using real network traces and the Snort rule set show that a pre-filter can reduce up to 90% of network traffic that would have otherwise been processed by Snort software. The projected performance enables a computer to perform real-time intrusion detection of malicious content passing over a 10 Gbps network using FPGA hardware that operates with 10 Gbps of throughput and software that needs only to operate with 1 Gbps of throughput.,2005,0, 2794,An iterative hardware Gaussian noise generator,"The quality of generated Gaussian noise samples plays a crucial role when evaluating the bit error rate performance of communication systems. This paper presents a new approach for the field-programmable gate array (FPGA) realization of a high-quality Gaussian noise generator (GNG). The datapath of the GNG can be configured differently based on the required accuracy of the Gaussian probability density function (PDF). Since the GNG is often most conveniently implemented on the same FPGA as the design under evaluation, the area efficiency of the proposed GNG is important. For a particular configuration, the proposed design utilizes only 3% of the configurable slices and two on-chip block memories of a Virtex XC2V4000-6 FPGA to generate Gaussian samples within up to ±6.55δ, where δ is the standard deviation, and can operate at up to 132 MHz.",2005,0, 2795,Composition assessment metrics for CBSE,"The objective of this paper is the formal definition of composition assessment metrics for CBSE, using an extension of the CORBA component model metamodel as the ontology for describing component assemblies. The method used is the representation of a component assembly as an instantiation of the extended CORBA component model metamodel. The resulting meta-objects diagram can then be traversed using object constraint language clauses. These clauses are a formal and executable definition of the metrics that can be used to assess quality attributes from the assembly and its constituent components. The result is the formal definition of context-dependent metrics that cover the different composition mechanisms provided by the CORBA component model and can be used to compare alternative component assemblies; a metamodel extension to capture the topology of component assemblies. The conclusion is that providing a formal and executable definition of metrics for CORBA component assemblies is an enabling precondition to allow for independent scrutiny of such metrics, which is, in turn, essential to increase practitioners' confidence in predictable quality attributes.",2005,0, 2796,Automated detection of performance regressions: the mono experience,"Engineering a large software project involves tracking the impact of development and maintenance changes on the software performance. An approach for tracking the impact is regression benchmarking, which involves automated benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent to contemporary computer systems and execution environments and the impact of the nondeterminism on the results.
Using the example of a fully automated regression benchmarking environment for the mono open-source project, we show how the problems associated with nondeterminism can be tackled using statistical methods.",2005,0, 2797,Code Normal Forms,"Because of their strong economic impact, complexity and maintainability are among the most widely used terms in software engineering. But, they are also among the most weakly understood. A multitude of software metrics attempts to analyze complexity and a proliferation of different definitions of maintainability can be found in textbooks and corporate quality guidelines. The trouble is that none of these approaches provides a reliable basis for objectively assessing the ability of a software system to absorb future changes. In contrast to this, relational database theory has successfully solved very similar difficulties through normal forms. In this paper, we transfer the idea of normal forms to code. The approach taken is to introduce semantic dependencies as a foundation for the definition of code normal form criteria",2005,0, 2798,The Quantitative Safety Assessment for Safety-Critical Software,"The software fault failure rate bound is discussed and generalized for different reliability growth models. The fault introduction during testing and the fault removal efficiency are modeled to relax the two common assumptions made in software reliability models. Three approaches are introduced for the fault content estimation, and thus they are applied to software coverage estimation. A three-state non-homogeneous Markov model is constructed for software safety assessment. The two most important metrics for safety assessment, steady state safety and MTTUF, are estimated using the three-state Markov model. A case study is conducted to verify the theory proposed in the paper",2005,0, 2799,Towards Software Quality Economics for Defect-Detection Techniques,"There are various ways to evaluate defect-detection techniques. However, for a comprehensive evaluation the only possibility is to reduce all influencing factors to costs. There are already some models and metrics for the cost of quality that can be used in that context. The existing metrics for the effectiveness and efficiency of defect-detection techniques and experiences with them are combined with cost metrics to allow a more fine-grained estimation of costs and a comprehensive evaluation of defect-detection techniques. The current model is most suitable for directly comparing concrete applications of different techniques",2005,0, 2800,Motion analysis of the international and national rank squash players,"In this paper, we present a study on squash player work-rate during squash matches at two different quality levels. To assess work-rate, the measurement of certain parameters of player motion is needed. A computer vision-based software application was used to automatically obtain player motion data from the digitized video recordings of 22 squash matches. The matches were played at two quality levels - international and Slovene national players. We present the results of work-rate comparison between these two groups of players based on game duration and distance covered by the players.
We found that the players on the international quality level on average cover significantly larger distances, which is partially caused by longer average game durations.",2005,0, 2801,Prototype stopping rules in software development projects,"Custom software development projects under product specification uncertainty can be subject to wide variations in performance (duration, cost, and quality) depending on how the uncertainty is resolved. A prototyping strategy has been chosen in many cases to mitigate specification risk and improve performance. The study reported here sought to illuminate the case where duration was the highest-priority project constraint, a feature that has often called for a concurrent development process. Unlike concurrent engineering, however, the process in this paper was a sequential, three-phase approach including an optional, up-front prototyping phase, a nominal-duration construction phase, and a variable-length rework phase that grew with the arrival of specification modifications. The source of uncertainty was the modification arrival time; the management control point was the amount of time spent engaged in prototyping activities. Results showed that in situations where the modification arrival rate was sufficiently faster during prototyping than during construction, a minimal-duration choice was available. The model also returned the solution to perform no prototyping in cases where arrival rates were nearly equivalent between phases, or when the rework cost associated with modifications was consistently low.",2005,0, 2802,Pre-layout physical connectivity prediction with application in clustering-based placement,"In this paper, we introduce a structural metric, logic contraction, for pre-layout physical connectivity prediction. For a given set of nodes forming a cluster in a netlist, we can predict their proximity in the final layout based on the logic contraction value of the cluster. We demonstrate a very good correlation of our pre-layout measure with the post-layout physical distances between those nodes. We show an application of the logic contraction to circuit clustering. We compare our seed-growth clustering algorithm with the existing efficient clustering techniques. Experimental results demonstrate the effectiveness of our new clustering method.",2005,0, 2803,Performance Evaluation for Self-Healing Distributed Services,"Distributed applications, based on internetworked services, provide users with more flexible and varied services and developers with the ability to incorporate a vast array of services into their applications. Such applications are difficult to develop and manage due to their inherent dynamics and heterogeneity. One desirable characteristic of distributed applications is self-healing, or the ability to reconfigure themselves ""on the fly"" to circumvent failure. In this paper, we discuss our middleware for developing self-healing distributed applications. We present the model we adopted for self-healing behaviour and show as a case study the reconfiguration of an application that uses networked sorting services and an application for networked home appliances. We discuss the performance benefits of the self-healing property by analyzing the elapsed time for automatic reconfiguration without user intervention.
Our results show that a distributed application developed with our self-healing middleware is able to perform smoothly by quickly reconfiguring its services upon detection of failure",2005,0, 2804,An Efficient and Practical Defense Method Against DDoS Attack at the Source-End,"A Distributed Denial-of-Service (DDoS) attack is one of the most serious threats to the Internet. Detecting DDoS at the source-end has many advantages over defense at the victim-end and intermediate-network. One of the main problems for source-end methods is the performance degradation brought by these methods, which discourages Internet service providers (ISPs) from deploying the defense system. We propose an efficient detection approach, which only requires limited fixed-length memory and low computation overhead but provides satisfying detection results. The low cost of defense is expected to attract more ISPs to join the defense. The experimental results show our approach is efficient and feasible for defense at the source-end",2005,0, 2805,Guest Editor's Introduction: The Promise of Public Software Engineering Data Repositories,"Scientific discovery related to software is based on a centuries-old paradigm common to all fields of science: setting up hypotheses and testing them through experiments. Repeatedly confirmed hypotheses become models that can describe and predict real-world phenomena. The best-known models in software engineering describe relationships between development processes, cost and schedule, defects, and numerous software ""-ilities"" such as reliability, maintainability, and availability. But, compared to other disciplines, the science of software is relatively new. It's not surprising that most software models have proponents and opponents among software engineers. This introduction to the special issue discusses the power of modeling, the promise of data repositories, and the workshop devoted to this topic.",2005,0, 2806,Building effective defect-prediction models in practice,"Defective software modules cause software failures, increase development and maintenance costs, and decrease customer satisfaction. Effective defect prediction models can help developers focus quality assurance activities on defect-prone modules and thus improve software quality by using resources more efficiently. These models often use static measures obtained from source code, mainly size, coupling, cohesion, inheritance, and complexity measures, which have been associated with risk factors, such as defects and changes.",2005,0, 2807,Improving after-the-fact tracing and mapping: supporting software quality predictions,"The software engineering industry undertakes many activities that require generating and using mappings. Companies develop knowledge bases to capture corporate expertise and possibly proprietary information. Software developers build traceability matrices to demonstrate that their designs satisfy the requirements. Proposal managers map customers' statements of work to individual sections of companies' proposals to prove compliance. Systems engineers authoring interface specifications record design rationales as they make relevant decisions.
We developed an approach to tracing and mapping that aims to use fully automated information retrieval techniques, and we implemented our approach in a tool called RETRO (requirements tracing on target).",2005,0, 2808,A fast and efficient partial distortion search algorithm for block motion estimation,"Under the prerequisite of assuring image quality close to the full search algorithm (FS), a fast and efficient partial distortion search algorithm for block motion estimation based on predictive motion vector field (PMVPDS) is proposed in order to reduce the computational complexity of existing partial distortion algorithms. PMVPDS takes advantage of the predictive motion vector field technique, the half-stop technique and a spiral scanning path to find the matching block quickly. In addition, a controllable partial distortion criterion (CPDC) is used in the calculation of block distortion to speed up further. Experiment results show that, compared with the normalized partial distortion search algorithm (NPDS) and the progressive partial distortion search algorithm (PPDS), PMVPDS provides a significant speedup and achieves better PSNR performance.",2005,0, 2809,Availability modeling for reliable routing software,"With the explosion in the use of the Internet, there is an obvious trend that IP-based critical applications (E-Transaction, etc.) are increasing dramatically. Therefore, it is required to build high performance routers (HPRs) with high availability (HA). Though research on HA router hardware is fairly mature, investigations on HA routing software are still at the preliminary stage. This paper systematically presents possible implementation approaches to HA routing software and classifies them into three categories: protocol extension, state synchronization and packet duplication. This paper innovatively models and analyzes these HA routing software schemes in continuous-time Markov chains (CTMC). Based on the modeling, this paper also introduces numerical results and evaluates these different HA approaches.",2005,0, 2810,QoS-aware replanning of composite Web services,"Run-time service discovery and late-binding constitute some of the most challenging issues of service-oriented software engineering. For late-binding to be effective in the case of composite services, a QoS-aware composition mechanism is needed. This means determining the set of services that, once composed, not only will perform the required functionality, but also will best contribute to achieving the level of QoS promised in service level agreements (SLAs). However, QoS-aware composition relies on estimated QoS values and workflow execution paths previously obtained using a monitoring mechanism. At run-time, the actual QoS values may deviate from the estimations, or the execution path may not be the one foreseen. These changes could increase the risk of breaking SLAs and obtaining a poor QoS. Such a risk could be avoided by replanning the service bindings of the workflow slice still to be executed. This paper proposes an approach to trigger and perform composite service replanning during execution.
An evaluation has been performed simulating execution and replanning on a set of composite service workflows.",2005,0, 2811,High level extraction of SoC architectural information from generic C algorithmic descriptions,"The complexity of nowadays algorithms, in terms of number of lines of code and cross-relations among processing algorithms that are activated by specific input signals, goes far beyond what the designer can reasonably grasp from the ""pencil and paper"" analysis of the (software) specifications. Moreover, depending on the implementation goal, different measures and metrics are required at different steps of the implementation methodology or design flow of SoC. The process of extracting the desired measures needs to be supported by appropriate automatic tools, since code rewriting, at each design stage, may prove resource-consuming and error-prone. This paper presents an integrated tool for automatic analysis capable of producing complexity results based on rich and customizable metrics. The tool is based on a C virtual machine that allows extracting from any C program execution the operations and data-flow information, according to the defined metrics. The tool capabilities include the simulation of virtual memory architectures.",2005,0, 2812,Auxiliary voltage sag ride-through system for adjustable-speed drives,"Voltage sags and harmonics are two very important power quality problems. Adjustable-speed drives (ASD) trip due to voltage sags, interfering with production and resulting in financial losses. Harmonics can cause, among others, overheating and noise in the motors. In order to improve performance and reliability of ASD, this paper presents an auxiliary system that combines two sub-circuits that provide ride-through capabilities to critical ASD loads during balanced and unbalanced voltage sags and line current total harmonic distortion (THD) improvement using a third harmonic injection method. A supervised Adaline algorithm is used for voltage sag detection to activate a thyristor-controlled rectifier to keep the DC-bus voltage within acceptable limits. A detailed description of the operating principle is presented. The theoretical analysis is verified by digital simulation using PSIM software. Implementation details and experimental results on a 12.5 kW prototype are also presented",2005,0, 2813,Interlaced to progressive scan conversion Using fuzzy edge-based line average algorithm,"De-interlacing methods realize the interlaced to progressive conversion required in many applications. Among them, intra-field methods are widely used for their good trade-off between performance and computational cost. In particular, the ELA algorithm is well-known for its advantages in reconstructing the edges of the images, although it degrades the image quality where the edges are not clear. The algorithm proposed in this paper uses a simple fuzzy system which models heuristic rules to improve the ELA rules. It can be implemented easily in software and hardware since the increase in computational cost is very low. Simulation results are included to illustrate the advantages of the proposed fuzzy ELA algorithm in de-interlacing non-noisy and noisy images.",2005,0, 2814,Spectral analysis for the rotor defects diagnosis of an induction machine,"The main goal of this paper is to develop a method that permits determining the nature of the rotor defects thanks to spectral analysis.
The growing development of measurement equipment (spectral analyzers) and digital signal processing software has made the diagnosis of electric machine defects possible. Motor current signature analysis (MCSA) is used for the detection of the electrical and the mechanical faults of an induction machine to identify rotor bar faults. Also, the calculation of machine inductances (with and without rotor defects) is carried out with the MATLAB software tools before the beginning of the simulation under SIMULINK. Simulation and experimental results are presented to confirm the validity of the proposed approach.",2005,0, 2815,Patterns and tools for achieving predictability and performance with real time Java,"The real-time specification for Java (RTSJ) offers the predictable memory management needed for real-time applications, while maintaining Java's advantages of portability and ease of use. RTSJ's scoped memory allows object lifetimes to be controlled in groups, rather than individually as in C++. While easier than individual object lifetime management, scoped memory adds programming complexity from strict rules governing memory access across scopes. Moreover, memory leaks can potentially create jitter and reduce performance. To manage the complexities of RTSJ's scoped memory, we developed patterns and tools for RTZen, a real-time CORBA Object Request Broker (ORB). We describe four new patterns that enable communication and coordination across scope boundaries, an otherwise difficult task in RTSJ. We then present IsoLeak, a runtime debugging tool that visualizes the scoped hierarchies of complex applications and locates memory leaks. Our empirical results show that RTZen is highly predictable and has acceptable performance. RTZen therefore demonstrates that the use of patterns and tools like IsoLeak can help applications meet the stringent QoS requirements of DRE applications, while supporting safer, easier, cheaper and faster development in real-time Java.",2005,0, 2816,2005 International Symposium on Empirical Software Engineering (IEEE Cat. No. 05EX1213),The following topics are dealt with: software project management; software maintenance; software testing; software metrics; software quality; software process improvement; software requirements; software performance evaluation; software reusability; software cost estimation; empirical analysis.,2005,0, 2817,Exploratory testing: a multiple case study,"Exploratory testing (ET) - simultaneous learning, test design, and test execution - is an applied practice in industry but lacks research. We present the current knowledge of ET based on existing literature and interviews with seven practitioners in three companies. Our interview data shows that the main reasons for using ET in the companies were the difficulties in designing test cases for complicated functionality and the need for testing from the end user's viewpoint. The perceived benefits of ET include the versatility of testing and the ability to quickly form an overall picture of system quality. We found some support for the claimed high defect detection efficiency of ET. The biggest shortcoming of ET was managing test coverage. Further quantitative research on the efficiency and effectiveness of ET is needed.
To help focus ET efforts and help control test coverage, we must study planning, controlling and tracking ET.",2005,0, 2818,Comparison of various methods for handling incomplete data in software engineering databases,"Increasing the awareness of how missing data affects software predictive accuracy has led to increasing numbers of missing data techniques (MDTs). This paper investigates the robustness and accuracy of eight popular techniques for tolerating incomplete training and test data using tree-based models. MDTs were compared by artificially simulating different proportions, patterns, and mechanisms of missing data. A 4-way repeated measures design was employed to analyze the data. The simulation results suggest important differences. Listwise deletion is substantially inferior while multiple imputation (MI) represents a superior approach to handling missing data. Decision tree single imputation and surrogate variable splitting are more severely impacted by missing values distributed among all attributes. MI should be used if the data contain many missing values. If few values are missing, any of the MDTs might be considered. Choice of technique should be guided by pattern and mechanisms of missing data.",2005,0, 2819,Quality vs. quantity: comparing evaluation methods in a usability-focused software architecture modification task,"A controlled experiment was performed to assess the usefulness of portions of a usability-supporting architectural pattern (USAP) in modifying the design of software architectures to support a specific usability concern. Results showed that participants using a complete USAP produced modified designs of significantly higher quality than participants using only a usability scenario. Comparison of solution quality ratings with a quantitative measure of responsibilities considered in the solution showed positive correlation between the measures. Implications for software development are that usability concerns can be included at architecture design time, and that USAPs can significantly help software architects to produce better designs to address usability concerns. Implications for empirical software engineering are that validated quantitative measures of software architecture quality may potentially be substituted for costly and often elusive expert assessment.",2005,0, 2820,Identification of test process improvements by combining fault trigger classification and faults-slip-through measurement,"Successful software process improvement depends on the ability to analyze past projects and determine which parts of the process could become more efficient. One source of such an analysis is the faults that are reported during development. This paper proposes how a combination of two existing techniques for fault analysis can be used to identify where in the test process improvements are needed, i.e. to pinpoint which activities in which phases should be improved. This was achieved by classifying faults according to which test activities triggered them and which phase each fault should have been found in, i.e. through a combination of orthogonal defect classification (ODC) and faults-slip-through measurement. As a part of the method, the paper proposes a refined classification scheme due to identified problems when trying to apply ODC classification schemes in practice. The feasibility of the proposed method was demonstrated by applying it to an industrial software development project at Ericsson AB.
The obtained measures resulted in a set of quantified and prioritized improvement areas to address in consecutive projects.",2005,0, 2821,Determining how much software assurance is enough? A value-based approach,"A classical problem facing many software projects is how to determine when to stop testing and release the product for use. On the one hand, we have found that risk analysis helps to address such ""how much is enough?"" questions, by balancing the risk exposure of doing too little with the risk exposure of doing too much. In some cases, it is difficult to quantify the relative probabilities and sizes of loss in order to provide practical approaches for determining a risk-balanced ""sweet spot"" operating point. However, we have found some particular project situations in which tradeoff analysis helps to address such questions. In this paper, we provide a quantitative approach based on the COCOMO II cost estimation model and the COQUALMO quality estimation model. We also provide examples of its use under the differing value profiles characterizing early startups, routine business operations, and high-finance operations in marketplace competition situations. We also show how the model and approach can assess the relative payoff of value-based testing compared to value-neutral testing based on some empirical results. Furthermore, we propose a way to perform cost/schedule/reliability tradeoff analysis using COCOMO II to determine the appropriate software assurance level in order to finish the project on time or within budget.",2005,0, 2822,An experiment on subjective evolvability evaluation of object-oriented software: explaining factors and interrater agreement,"Recent trends in software development have emphasized the importance of refactoring in preserving software evolvability. We performed two experiments on software evolvability evaluation, i.e. evaluating the existence of certain code problems called code smells and the refactoring decision. We studied the agreement of the evaluators. Interrater agreement was high for simple code smells and low for the refactoring decision. Furthermore, we analyzed evaluators' demographics and source code metrics as factors explaining the evaluations. The code metrics explained over 70% of the variation regarding the simple code smell evaluations, but only about 30% of the refactoring decision. Surprisingly, the demographics were not useful predictors, either for evaluating code smells or for the refactoring decision. The low agreement for the refactoring decisions may indicate difficulty in building tool support simulating real-life subjective refactoring decisions. However, code metrics tools should be effective in highlighting straightforward problems, e.g. simple code smells.",2005,0, 2823,A multiple-case study of software effort estimation based on use case points,"Through industry collaboration we have experienced an increasing interest in software effort estimation based on use cases. We therefore investigated one promising method, the use case points method, which is inspired by function points analysis. Four companies developed equivalent functionality, but their development processes varied, ranging from a light, code-and-fix process with limited emphasis on code quality, to a heavy process with considerable emphasis on analysis, design and code quality.
Our effort estimate, which was based on the use case points method, was close to the actual effort of the company with the lightest development process; the estimate was 413 hours while the actual effort of the four companies ranged from 431 to 943 hours. These results show that the use case points method needs modification to better handle effort related to the development process and the quality of the code.",2005,0, 2824,A friend in need is a friend indeed [software metrics and friend functions],"Previous research has highlighted the extensive use of the C++ friend construct in both library-based and application-based systems. However, existing software metrics do not concentrate on measuring friendship accurately, a surprising omission given the debate friendship has caused in the object-oriented community. In this paper, a number of software metrics, which measure the extent to which friend class relationships are actually used in systems, are defined. These metrics are based on the interactions for which the friend construct is necessary, as well as the direction of this association between classes. Our results, in applying these metrics to the top 100 downloaded systems from sourceforge.net, indicate that up to 66% of friend class relationships in systems are redundant. Elsewhere, friend function declarations would have been more appropriate in many cases. In addition, it has been shown that friendship-based coupling contributes significantly to the high coupling of friend classes for only 25% of the systems studied.",2005,0, 2825,OBDD-based evaluation of reliability and importance measures for multistate systems subject to imperfect fault coverage,"Algorithms for evaluating the reliability of a complex system such as a multistate fault-tolerant computer system have become more important. They are designed to obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation, and common-cause failures. This paper presents an efficient method based on ordered binary decision diagram (OBDD) for evaluating the multistate system reliability and the Griffith's importance measures which can be regarded as the importance of a system-component state of a multistate system subject to imperfect fault-coverage with various performance requirements. This method combined with the conditional probability methods can handle the dependencies among the combinatorial performance requirements of system modules and find solutions for the multistate imperfect coverage model. The main advantage of the method is that its time complexity is equivalent to that of the methods for the perfect coverage model and it is very helpful for the optimal design of a multistate fault-tolerant system.",2005,0, 2826,Refactoring the aspectizable interfaces: an empirical assessment,"Aspect-oriented programming aims at addressing the problem of the crosscutting concerns, i.e., those functionalities that are scattered among several modules in a given system. Aspects can be defined to modularize such concerns. In this work, we focus on a specific kind of crosscutting concerns, the scattered implementation of methods declared by interfaces that do not belong to the principal decomposition. We call such interfaces aspectizable. All the aspectizable interfaces identified within a large number of classes from the Java Standard Library and from three Java applications have been automatically migrated to aspects.
To assess the effects of the migration on the internal and external quality attributes of these systems, we collected a set of metrics and we conducted an empirical study, in which some maintenance tasks were executed on the two alternative versions (with and without aspects) of the same system. In this paper, we report the results of such a comparison.",2005,0, 2827,A predictive QoS control strategy for wireless sensor networks,"The number of active sensors in a wireless sensor network has been proposed as a measure, albeit limited, for quality of service (QoS), for it dictates the spatial resolution of the sensed parameters. In very large sensor network applications, the number of sensor nodes deployed may exceed the number required to provide the desired resolution. Herein we propose a method, dubbed predictive QoS control (PQC), to manage the number of active sensors in such an over-deployed network. The strategy is shown to obtain near lifetime and variance performance in comparison to a Bernoulli benchmark, with the added benefit of not requiring the network to know the total number of sensors available. This benefit is especially relevant in networks where sensors are prone to failure due to not only energy exhaustion but also environmental factors and/or those networks where nodes are replenished over time. The method also has advantages in that only transmitting sensors need to listen for QoS control information and thus enabling inactive sensors to operate at extremely low power levels",2005,0, 2828,Energy-efficient self-adapting online linear forecasting for wireless sensor network applications,"New energy-efficient linear forecasting methods are proposed for various sensor network applications, including in-network data aggregation and mining. The proposed methods are designed to minimize the number of trend changes for a given application-specified forecast quality metric. They also self-adjust the model parameters, the slope and the intercept, based on the forecast errors observed via measurements. As a result, they incur O(1) space and time overheads, a critical advantage for resource-limited wireless sensors. An extensive simulation study based on real-world and synthetic time-series data shows that the proposed methods reduce the number of trend changes by 20%~50% over the existing well-known methods for a given forecast quality metric. That is, they are more predictive than the others with the same forecast quality metric",2005,0, 2829,Modeling and analysis of high-availability routing software,"With the explosion in the use of the Internet, there is an obvious trend that critical applications based on IP network (E-transaction, etc.) are increasing dramatically. Therefore, it is required to build high performance routers (HPRs) with high availability (HA). Though research on HA router hardware is fairly mature, investigations on HA routing software are still on-going. This paper systematically presents possible implementation approaches to HA routing software and classifies them into three categories: protocol extension, state synchronization and packet duplication. This paper innovatively models and analyzes these HA routing software schemes in continuous-time Markov chains (CTMC).
Based on the modeling, this paper also introduces numerical results and evaluates these different HA approaches.",2005,0, 2830,A software-based concurrent error detection technique for power PC processor-based embedded systems,"This paper presents a behavior-based error detection technique called control flow checking using branch trace exceptions for powerPC processors family (CFCBTE). This technique is based on the branch trace exception feature available in the powerPC processors family for debugging purposes. This technique traces the target addresses of program branches at run-time and compares them with reference target addresses to detect possible violations caused by transient faults. The reference target addresses are derived by a preprocessor from the source program. The proposed technique is experimentally evaluated on a 32-bit powerPC microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 91% of the injected control flow errors. The memory overhead is 39.16% on average, and the performance overhead varies between 110% and 304% depending on the workload used. This technique does not modify the program source code.",2005,0, 2831,Safety analysis of software product lines using state-based modeling,"The analysis and management of variations (such as optional features) are central to the development of safety-critical, software product lines. However, the difficulty of managing variations, and the potential interactions among them, across an entire product line currently hinders safety analysis in such systems. The work described here contributes to a solution by integrating safety analysis of a product line with model-based development. This approach provides a structured way to construct a state-based model of a product line having significant, safety-related variations. The process described here uses and extends previous work on product-line software fault tree analysis to explore hazard-prone variation points. The process then uses scenario-guided executions to exercise the state model over the variations as a means of validating the product-line safety properties. Using an available tool, relationships between behavioral variations and potentially hazardous states are systematically explored and mitigation steps are identified. The paper uses a product line of embedded medical devices to demonstrate and evaluate the process and results",2005,0, 2832,Providing test quality feedback using static source code and automatic test suite metrics,"A classic question in software development is ""How much testing is enough?"" Aside from dynamic coverage-based metrics, there are few measures that can be used to provide guidance on the quality of an automatic test suite as development proceeds. This paper utilizes the software testing and reliability early warning (STREW) static metric suite to provide a developer with indications of changes and additions to their automated unit test suite and code for added confidence that product quality will be high. Retrospective case studies to assess the utility of using the STREW metrics as a feedback mechanism were performed in academic, open source and industrial environments. 
The results indicate at statistically significant levels the ability of the STREW metrics to provide feedback on important attributes of an automatic test suite and corresponding code",2005,0, 2833,Assessing the crash-failure assumption of group communication protocols,"Designing and correctly implementing group communication systems (GCSs) is notoriously difficult. Assuming that processes fail only by crashing provides a powerful means to simplify the theoretical development of these systems. When making this assumption, however, one should not forget that clean crash failures provide only a coarse approximation of the effects that errors can have in distributed systems. Ignoring such a discrepancy can lead to complex GCS-based applications that pay a large price in terms of performance overhead yet fail to deliver the promised level of dependability. This paper provides a thorough study of error effects in real systems by demonstrating an error-injection-driven design methodology, where error injection is integrated in the core steps of the design process of a robust fault-tolerant system. The methodology is demonstrated for the Fortika toolkit, a Java-based GCS. Error injection enables us to uncover subtle reliability bottlenecks both in the design of Fortika and in the implementation of Java. Based on the obtained insights, we enhance Fortika's design to reduce the identified bottlenecks. Finally, a comparison of the results obtained for Fortika with the results obtained for the OCAML-based Ensemble system in a previous work allows us to investigate the reliability implications that the choice of the development platform (Java versus OCAML) can have",2005,0, 2834,Forecasting field defect rates using a combined time-based and metrics-based approach: a case study of OpenBSD,"Open source software systems are critical infrastructure for many applications; however, little has been precisely measured about their quality. Forecasting the field defect-occurrence rate over the entire lifespan of a release before deployment for open source software systems may enable informed decision-making. In this paper, we present an empirical case study of ten releases of OpenBSD. We use the novel approach of predicting model parameters of software reliability growth models (SRGMs) using metrics-based modeling methods. We consider three SRGMs, seven metrics-based prediction methods, and two different sets of predictors. Our results show that accurate field defect-occurrence rate forecasts are possible for OpenBSD, as measured by the Theil forecasting statistic. We identify the SRGM that produces the most accurate forecasts and subjectively determine the preferred metrics-based prediction method and set of predictors. Our findings are steps towards managing the risks associated with field defects",2005,0, 2835,A novel method for early software quality prediction based on support vector machine,"The software development process has major impacts on the quality of software at every development stage; therefore, a common goal of each software development phase concerns how to improve software quality. Software quality prediction thus aims to evaluate the software quality level periodically and to indicate software quality problems early. In this paper, we propose a novel technique to predict software quality by adopting a support vector machine (SVM) in the classification of software modules based on complexity metrics. 
Because only limited information of software complexity metrics is available in early software life cycle, ordinary software quality models cannot make good predictions generally. It is well known that SVM generalizes well even in high dimensional spaces under small training sample conditions. We consequently propose a SVM-based software classification model, whose characteristic is appropriate for early software quality predictions when only a small number of sample data are available. Experimental results with a medical imaging system software metrics data show that our SVM prediction model achieves better software quality prediction than some commonly used software quality prediction models",2005,0, 2836,Empirical assessment of machine learning based software defect prediction techniques,"The wide-variety of real-time software systems, including telecontrol/telepresence systems, robotic systems, and mission planning systems, can entail dynamic code synthesis based on runtime mission-specific requirements and operating conditions. This necessitates the need for dynamic dependability assessment to ensure that these systems perform as specified and not fail in catastrophic ways. One approach in achieving this is to dynamically assess the modules in the synthesized code using software defect prediction techniques. Statistical models; such as stepwise multi-linear regression models and multivariate models, and machine learning approaches, such as artificial neural networks, instance-based reasoning, Bayesian-belief networks, decision trees, and rule inductions, have been investigated for predicting software quality. However, there is still no consensus about the best predictor model for software defects. In this paper; we evaluate different predictor models on four different real-time software defect data sets. The results show that a combination of IR and instance-based learning along with the consistency-based subset evaluation technique provides a relatively better consistency in accuracy prediction compared to other models. The results also show that ""size"" and ""complexity"" metrics are not sufficient for accurately predicting real-time software defects.",2005,0, 2837,Efficiently registering video into panoramic mosaics,"We present an automatic and efficient method to register and stitch thousands of video frames into a large panoramic mosaic. Our method preserves the robustness and accuracy of image stitchers that match all pairs of images while utilizing the ordering information provided by video. We reduce the cost of searching for matches between video frames by adaptively identifying key frames based on the amount of image-to-image overlap. Key frames are matched to all other key frames, but intermediate video frames are only matched to temporally neighboring key frames and intermediate frames. Image orientations can be estimated from this sparse set of matches in time quadratic to cubic in the number of key frames but only linear in the number of intermediate frames. Additionally, the matches between pairs of images are compressed by replacing measurements within small windows in the image with a single representative measurement. We show that this approach substantially reduces the time required to estimate the image orientations with minimal loss of accuracy. 
Finally, we demonstrate both the efficiency and quality of our results by registering several long video sequences",2005,0, 2838,Hierarchical behavior organization,"In most behavior-based approaches, implementing a broad set of different behavioral skills and coordinating them to achieve coherent complex behavior is an error-prone and very tedious task. Concepts for organizing reactive behavior in a hierarchical manner are rarely found in behavior-based approaches, and there is no widely accepted approach for creating such behavior hierarchies. Most applications of behavior-based concepts use only few behaviors and do not seem to scale well. Reuse of behaviors for different application scenarios or even on different robots is very rare, and the integration of behavior-based approaches with planning is unsolved. This paper discusses the design, implementation, and performance of a behavior framework that addresses some of these issues within the context of behavior-based and hybrid robot control architectures. The approach presents a step towards more systematic software engineering of behavior-based robot systems.",2005,0, 2839,Condition monitoring and fault diagnosis of electrical motors-a review,"Recently, research has picked up a fervent pace in the area of fault diagnosis of electrical machines. The manufacturers and users of these drives are now keen to include diagnostic features in the software to improve salability and reliability. Apart from locating specific harmonic components in the line current (popularly known as motor current signature analysis), other signals, such as speed, torque, noise, vibration etc., are also explored for their frequency contents. Sometimes, altogether different techniques, such as thermal measurements, chemical analysis, etc., are also employed to find out the nature and the degree of the fault. In addition, human involvement in the actual fault detection decision making is slowly being replaced by automated tools, such as expert systems, neural networks, fuzzy-logic-based systems; to name a few. It is indeed evident that this area is vast in scope. Hence, keeping in mind the need for future research, a review paper describing different types of faults and the signatures they generate and their diagnostics' schemes will not be entirely out of place. In particular, such a review helps to avoid repetition of past work and gives a bird's eye view to a new researcher in this area.",2005,0, 2840,Development of a portable digital radiographic system based on FOP-coupled CMOS image sensor and its performance evaluation,"As a continuation of our digital X-ray imaging sensor R&D, we have developed a cost-effective, portable, digital radiographic system based on a CMOS image sensor coupled with a fiber optic plate (FOP) and selected conventional scintillators. The imaging system consists of a commercially available CMOS image sensor of 48 μm × 48 μm pixel size and 49.2 mm × 49.3 mm active area, a FOP bundled with several millions of glass fibers of about 6 μm in diameter and 3 mm in thickness, phosphor screens such as Min-R or Lanex series, a readout IC board, a GUI software, and a battery-operated X-ray generator (20-60 kVp; up to 1 mA). Here the FOP was incorporated into the imaging system to reduce the performance degradation of the CMOS sensor module caused by irradiation and also to improve image quality. 
In this paper, we describe each imaging component of the fully-integrated portable digital radiographic system in detail, and also present its performance analysis with experimental measurements and acquired X-ray images in terms of system response with exposure, contrast-to-noise ratio (CNR), modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE).",2005,0,2633 2841,Technology of detecting GIC in power grids & its monitoring device,"Magnetic storms result in geomagnetically induced currents (GIC) in transmission lines. GIC occurs randomly, has a frequency between 0.001 Hz~0.1 Hz, and lasts from several minutes to several hours. After elaborating the mechanism and characteristics of GIC in grids and its influence on China's power grids, the article presents research on GIC monitoring technology and investigates the method of sampling GIC data, the survey algorithm and a new monitoring device. The simulated test shows that the monitoring device can effectively measure GIC, which is a quasi-direct-current and random signal, and has the advantages of handling little data and needing little memory space, etc",2005,0, 2842,Research on the Intelligence High-Voltage Circuit Breaker Based on DSP,"In this paper an intelligent recloser with fault-breaking and synchronized-making functions based on a DSP is presented. When breaking the short-circuit current, the current change rate is used to judge whether a short circuit is taking place. The four reclosing possibilities are analyzed and simulated in order to find the optimum voltage input angle; at the same time, short-circuit faults are analyzed and simulated in order to diagnose and distinguish the kinds of fault. Furthermore, digital filtering based on the improved Fourier transformation (IFFT) is used to estimate the optimal reclosing angle so as to weaken the instantaneous current when reclosing the circuit. Based on DSP technology, the hardware circuit has been set up and the software has been programmed. The experiments show that it effectively reduces the making current and obviously improves the circuit breaker's performance",2005,0, 2843,Experiences of PD Diagnosis on MV Cables using Oscillating Voltages (OWTS),"Detecting, locating and evaluating partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for quality control after installation and preventive detection of arising service interruptions. A sophisticated evaluation is necessary to distinguish between PD in several insulating materials and also in different types of terminations and joints. For the most precise evaluation of the degree and risk caused by PD, it is suggested to use a test voltage shape that is preferably similar to the one under service conditions. Only under these requirements do the typical PD parameters like inception and extinction voltage, PD level and PD pattern correspond to significant operational values. On the other hand, the stress on the insulation should be limited during the diagnosis so as not to create irreversible damage and thereby worsen the condition of the test object. The paper introduces an oscillating wave test system (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application. Field data and experience reports will be presented and discussed. 
These field data also serve as a good guide for the level of danger to the different insulating systems due to partial discharges",2005,0, 2844,Development of an on-line data quality monitor for the relativistic heavy-ion experiment ALICE,"The on-line data monitoring tool developed for the coming ALICE experiment at the LHC, CERN, is presented. This monitoring tool, which is a part of the ALICE-DAQ software framework and is written entirely in the C++ language, uses standard Linux tools in conjunction with the data display and analysis package ROOT, developed at CERN. It allows checking the consistency and quality of the data and the correct functioning of the various sub-detectors either at run time or offline by playing back the recorded raw data. After discussing the functionality and performance of this package, the experience gained during the test beam periods is also summarized",2005,0, 2845,The GNAM monitoring system and the OHP histogram presenter for ATLAS,"ATLAS is one of the four experiments under construction along the Large Hadron Collider at CERN. During the 2004 combined test beam, the GNAM monitoring system and the OHP histogram presenter were widely used to assess both the hardware setup and the data quality. GNAM is a modular framework where detector-specific code can be easily plugged in to obtain online low-level monitoring applications. It is based on the monitoring tools provided by the ATLAS trigger and data acquisition (TDAQ) software. OHP is a histogram presenter, capable of performing both as a configurable display and as a browser. From OHP, requests to execute simple interactive operations (such as reset, rebin or update) on histograms can be sent to GNAM",2005,0, 2846,The thin gap chambers database experience in test beam and preparations for ATLAS,"Thin gap chambers (TGCs) are used for the muon trigger system in the forward region of the LHC experiment ATLAS. The TGCs are expected to provide a trigger signal within 25 ns of the bunch spacing. About 3,600 ATLAS TGCs have been produced in Israel, Japan and China. The chambers go through a vigorous quality control program before installation. An extensive system test of the ATLAS muon spectrometer has been performed in the H8 beam line at the CERN SPS during the last few years. Three TGC stations were employed there for triggering muons in the endcap. A relational database was used for storing the conditions of the tests as well as the configuration of the system. This database has provided the detector control system with the information needed for its operation and configuration. The database is used to assist the online operation and maintenance. The same database stores the non-event condition and configuration parameters needed later for the offline and reconstruction software. A larger-scale version of the database has been produced to support the whole TGC system. It integrates all the production, tests and assembly information. A 1/12th model of the whole TGC system is currently in use for testing the performance of this database in configuring and condition tracking of the system. A mockup of the database was first implemented during the H8 test beams. This paper describes the database structure, its interface to other systems and its operational performance",2005,0, 2847,Benchmarks and implementation of the ALICE high level trigger,"The ALICE high level trigger combines and processes the full information from all major detectors in a large computer cluster. 
Data rate reduction is achieved by reducing the event rate by selecting interesting events (software trigger) and by reducing the event size by selecting sub-events and by advanced data compression. Reconstruction chains for the barrel detectors and the forward muon spectrometer have been benchmarked. The HLT receives a replica of the raw data via the standard ALICE DDL link into a custom PCI receiver card (HLT-RORC). These boards also provide a FPGA co-processor for data-intensive tasks of pattern recognition. Some of the pattern recognition algorithms (cluster finder, Hough transformation) have been re-designed in VHDL to be executed in the Virtex-4 FPGA on the HLT-RORC. HLT prototypes were operated during the beam tests of the TPC and TRD detectors. The input and output interfaces to DAQ and the data flow inside of HLT were successfully tested. A full-scale prototype of the dimuon-HLT achieved the expected data flow performance. This system was finally embedded in a GRID-like system of several distributed clusters demonstrating the scalability and fault-tolerance of the HLT",2005,0,3033 2848,Performance of the ATLAS SCT readout system,"The ATLAS semiconductor tracker (SCT) together with the pixel and the transition radiation detectors form the tracking system of the ATLAS experiment at LHC. It consists of 20,000 single-sided silicon microstrip sensors assembled back-to-back into modules mounted on four concentric barrels and two end-cap detectors formed by nine disks each. The SCT module production and testing has finished while the macro-assembly is well under way. After an overview of the layout and the operating environment of the SCT, a description of the readout electronics design and operation requirements is given. The quality control procedure and the DAQ software for assuring the electrical functionality of hybrids and modules are discussed. The focus is on the electrical performance results obtained during the assembly and testing of the end-cap SCT modules",2005,0, 2849,FUMSâ„?artificial intelligence technologies including fuzzy logic for automatic decision making,"Advances in sensing technologies and aircraft data acquisition systems have resulted in generating huge aircraft data sets, which can potentially offer significant improvements in aircraft management, affordability, availability, airworthiness and performance (MAAAP). In order to realise these potential benefits, there is a growing need for automatically trending/mining these data and fusing the data into information and decisions that can lead to MAAAP improvements. Smiths has worked closely with the UK Ministry of Defence (MOD) to evolve Flight and Usage Management Software (FUMSâ„? to address this need. FUMSâ„?provides a single fusion and decision support platform for helicopters, aeroplanes and engines. FUMSâ„?tools have operated on existing aircraft data to provide an affordable framework for developing and verifying diagnostic, prognostic and life management approaches. Whilst FUMSâ„?provides automatic analysis and trend capabilities, it fuses the condition indicators (CIs) generated by aircraft health and usage monitoring systems (HUMS) into decisions that can increase fault detection rates and reduce false alarm rates. This paper reports on a number of decision-making processes including logic, Bayesian belief networks and fuzzy logic. The investigation presented in this paper has indicated that decision-making based on logic and fuzzy logic can offer verifiable techniques. 
The paper also shows how Smiths has successfully applied fuzzy logic to the Chinook HUMS CIs. Fuzzy logic has also been applied to detect sensor problems causing long-term data corruptions.",2005,0, 2850,Metrics for ontologies,"The success of the semantic Web has been linked with the use of ontologies on the semantic Web. Given the important role of ontologies on the semantic Web, the need for domain ontology development and management are increasingly more and more important to most kinds of knowledge-driven applications. More and more these ontologies are being used for information exchange. Information exchange technology should foster knowledge exchange by providing tools to automatically assess the characteristics and quality of an ontology. The scarcity of theoretically and empirically validated measures for ontologies has motivated our investigation. From this investigation a suite of quality metrics have been developed and implemented as a plug-in to the ontology editor Protege so that any ontology specified in a standard Web ontology language such as RDFS or OWL may have a quality assessment analysis performed.",2005,0, 2851,A QoS provision multipolling mechanism for IEEE 802.11e standard,"With flexibility and mobility, wireless local area network (WLAN) has rapidly become one of the most emergent computer research fields. It attracts significant interests both in academic and industry communities. For applying a higher quality of service (QoS) to network applications, the 802.11e Task Group has deployed hybrid coordination function (HCF) to improve the original IEEE 802.11 medium access control (MAC) protocol. Nevertheless, how to choose the right MAC parameters and QoS mechanism so as to achieve a predictable performance still remains unsolved. In this paper, we propose a polling access control scheme, which in its PCF mode applies non-preemptive priority in order to transfer voice packets more efficiently. The voice traffic characterized by packet rate of voice source and the maximum tolerable jitter (packet delay variation) is forecasted. We record the scheduling results in a queue, with which AP (access point) can poll and then enable mobile users to communicate with their opposite sites. This occurrence also solves the problem that some voice packets do not suit QoS in IEEE 802.11e standard with multipolling. During the time-gap while transmitting no voice packets, the scheme changes to DCF mode to transfer data packets. Furthermore we simulate and analyze the performance of the scheme in a WLAN environment. The experimental results show that our approach can dramatically improve the quality of network service.",2005,0, 2852,Outage probability lower bound in CDMA systems with lognormal-shadowed multipath Rayleigh-faded and noise-corrupted links,"The outage probability is one of the common metrics used in performance evaluation of cellular networks. In this paper, we derive a lower bound on the outage probability in CDMA systems where the communication links are disturbed by co-channel interference as well as additive noise. Each link is assumed to be faded according to both a lognormal distribution and a multipath Rayleigh distribution where the former represents the effect of shadowing while the latter represents the effect of short-term fading. The obtained lower bound is given in terms of a single-fold integral that can be easily computed using any modern software package. 
We present numerical results for the derived bound and compare them with the outage probability obtained by means of Monte Carlo simulations. Based on our results, we conclude that the proposed bound is relatively tight in a wide range of situations, particularly in the case of a small to moderate number of interferers and small to moderate shadowing standard deviation values.",2005,0, 2853,Delay-Centric Link Quality Aware OLSR,"This paper introduces a delay-centric link quality aware routing protocol, LQOLSR (link quality aware optimized link state routing). The LQOLSR finds fast and high quality routes in mobile ad hoc networks (MANET). LQOLSR predicts a packet transmission delay according to multiple transmission rates in IEEE 802.11 and selects the fastest route from source to destination by estimating the relative transmission delay between nodes. We implement the LQOLSR protocol by modifying the basic OLSR (optimized link state routing) protocol. We evaluate and analyze the performance in a real testbed established in an office building",2005,0, 2854,A novel tuneable low-intensity adversarial attack,"Currently, denial of service (DoS) attacks remain amongst the most critical threats to Internet applications. The goal of the attacker in a DoS attack is to overwhelm a shared resource by sending a large amount of traffic, thus rendering the resource unavailable to other legitimate users. In this paper, we expose a novel contrasting category of attacks that is aimed at exploiting the adaptive behavior exhibited by several network and system protocols such as TCP. The goal of the attacker in this case is not to entirely disable the service but to inflict sufficient degradation of the service quality experienced by legitimate users. An important property of these attacks is the fact that the desired adversarial impact can be achieved by using a non-suspicious low-rate attack stream, which can easily evade detection. Further, by tuning various parameters of the attack traffic stream, the attacker can inflict varying degrees of service degradation while at the same time making it extremely difficult for the victim to detect the attacker's presence. Our simulation-based experiments validate our observations and demonstrate that an attacker can significantly degrade the performance of the TCP flows by inducing low-rate attack traffic which is co-ordinated to exploit the congestion control behavior of TCP",2005,0, 2855,Numerical software quality control in object oriented development,"This paper proposes a new method to predict the number of the remaining bugs at the delivery inspection applied to every iteration of OOD, object oriented development. Our method consists of two parts. The first one estimates the number of the remaining bugs by applying the Gompertz curve. The second one uses the interval estimation called OOQP, object oriented quality probe. The basic idea of OOQP is to randomly extract a relatively small number of test cases, usually 10 to 20% of the entire test cases, and to execute them in the actual operation environment. From the test result of OOQP, we can efficiently predict the number of the remaining bugs by the interval estimation. The primary problem of OOQP is that OOD is forced to use the system design specification document, whose contents, like UML, tend to be ambiguous. 
Our estimation method works well in a matrix-type organization where a QA team and a development team collaborate to improve the software quality.",2005,0, 2856,How software can help or hinder human decision making (and vice-versa),"Summary form only given. Developments in computing offer experts in many fields specialised support for decision making under uncertainty. However, the impact of these technologies remains controversial. In particular, it is not clear how advice of variable quality from a computer may affect human decision makers. Here the author reviews research showing strikingly diverse effects of computer support on expert decision-making. Decision support can both systematically improve or damage the performance of decision makers in subtle ways depending on the decision maker's skills, variation in the difficulty of individual decisions and the reliability of advice from the support tool. In clinical trials, decision support technologies are often assessed in terms of their average effects. However, this methodology overlooks the possibility of differential effects on decisions of varying difficulty, on decision makers of varying competence, of computer advice of varying accuracy and of possible interactions among these variables. Research that has teased apart aggregated clinical trial data to investigate these possibilities has discovered that computer support was less useful for - and sometimes hindered - professional experts who were relatively good at difficult decisions without support; at the same time, the same computer support tool helped those experts who were less good at relatively easy decisions without support. Moreover, inappropriate advice from the support tool could bias decision makers' decisions and, predictably, depending on the type of case, improve or harm the decisions.",2005,0, 2857,Predictors of customer perceived software quality,"Predicting software quality as perceived by a customer may allow an organization to adjust deployment to meet the quality expectations of its customers, to allocate the appropriate amount of maintenance resources, and to direct quality improvement efforts to maximize the return on investment. However, customer perceived quality may be affected not simply by the software content and the development process, but also by a number of other factors including deployment issues, amount of usage, software platform, and hardware configurations. We predict customer perceived quality as measured by various service interactions, including software defect reports, requests for assistance, and field technician dispatches, using the aforementioned and other factors for a large telecommunications software system. We employ the non-intrusive data gathering technique of using existing data captured in automated project monitoring and tracking systems as well as customer support and tracking systems. We find that the effects of deployment schedule, hardware configurations, and software platform can increase the probability of observing a software failure by more than 20 times. Furthermore, we find that the factors affect all quality measures in a similar fashion. 
Our approach can be applied at other organizations, and we suggest methods to independently validate and replicate our results.",2005,0, 2858,Main effects screening: a distributed continuous quality assurance process for monitoring performance degradation in evolving software systems,"Developers of highly configurable performance-intensive software systems often use a type of in-house performance-oriented ""regression testing"" to ensure that their modifications have not adversely affected their software's performance across its large configuration space. Unfortunately, time and resource constraints often limit developers to in-house testing of a small number of configurations and unreliable extrapolation from these results to the entire configuration space, which allows many performance bottlenecks and sources of QoS degradation to escape detection until systems are fielded. To improve performance assessment of evolving systems across large configuration spaces, we have developed a distributed continuous quality assurance (DCQA) process called main effects screening that uses in-the-field resources to execute formally designed experiments to help reduce the configuration space, thereby allowing developers to perform more targeted in-house QA. We have evaluated this process via several feasibility studies on several large, widely-used performance-intensive software systems. Our results indicate that main effects screening can detect key sources of performance degradation in large-scale systems with significantly less effort than conventional techniques.",2005,0, 2859,Is mutation an appropriate tool for testing experiments? [software testing],"The empirical assessment of test techniques plays an important role in software testing research. One common practice is to instrument faults, either manually or by using mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. This paper investigates this important question based on a number of programs with comprehensive pools of test cases and known faults. It is concluded that, based on the data available thus far, the use of mutation operators is yielding trustworthy results (generated mutants are similar to real faults). Mutants appear however to be different from hand-seeded faults that seem to be harder to detect than real faults.",2005,0, 2860,Predictor models in software engineering (PROMISE),"
",2005,0, 2861,Testing decision systems with classification components,"Many decision tools and complex decision systems require components that use learning technology to improve the quality of the decisions, based on observations (such as sensor data). In order to employ these tools and systems in high- or medium-risk applications, the design, implementation, and deployment process needs to follow principled verification, validation, and testing procedures that assure a reliable operation. This task is far from being trivial because of the very nature of learning - a technology that provides tools for making decisions under uncertainty. Only little research efforts have been dedicated so far to validating and testing learning-based systems. This paper describes a novel tool for the testing and the validation of learning systems and a set of statistical tests that are employed by this tool for the assessment of learned classification decisions. We also describe some aspects of the underlying theoretical and experimental framework for the validation and testing of systems that learn.",2005,0, 2862,Design-level performance prediction of component-based applications,"Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select suitable component technology platform and application architecture to provide the required performance. This is challenging as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform.",2005,0, 2863,Strategy for mutation testing using genetic algorithms,"In this paper, we propose a model to reveal faults and kill mutant using genetic algorithms. The model first instruments the source and mutant program and divides in small units. Instead of checking the entire program, it tries to find fault in each unit or kills each mutant unit. If any unit survives, the new test data is generated using genetic algorithm with special fitness function. The output of each test for each unit is recorded to detect the faulty unit. In this strategy, the source program and the mutant are instrumented in such a way that the input and output behavior of each unit can be traced. A checker module is used to compare and trace the output of each unit. 
A complete architecture of the model is proposed in the paper",2005,0, 2864,A rejuvenation methodology of cluster recovery,"While traditional security mechanisms rely on preventive controls and those are very limited in surviving malicious attacks, we propose a novel approach to the security issue of cluster recovery. In this paper, we present the cluster recovery model with a software rejuvenation methodology, which is applicable in the security field. We propose two formal approaches, stochastic and Markov decision process, and we estimate the possibility of surviving under unknown attacks. The basic idea of rejuvenation is to investigate the consequences for the exact response time in the face of attacks and to rejuvenate the running software/service, refresh its internal state, and resume or restart it. These actions deter the intruder's progress, prevent more serious damage and provide time to perform more detailed analysis.",2005,0, 2865,Comparison of class test integration ordering strategies,"This paper discusses the approaches used for integration testing of object oriented applications that have been modeled in the Unified Modeling Language (UML). It explains integration testing and the test design patterns used for integration testing. Integration problems arise due to lack of interoperability among different components. Strategies for class test order for integration testing have been discussed in this paper, as integration test order specifies the estimated cost for developing stubs and drivers. Semantics for UML diagrams enable the dependency analysis of components for integration testing. Techniques for replacing classes by stubs to break cycles among them have been discussed to enable stepwise integration testing.",2005,0, 2866,Complexity signatures for system health monitoring,"The ability to assess risk in complex systems is one of the fundamental challenges facing the aerospace industry in general, and NASA in particular. First, such an ability allows for quantifiable trade-offs during the design stage of a mission. Second, it allows the monitoring of the health of the system while in operation. Because many of the difficulties in complex systems arise from the interactions among the subsystems, system health monitoring cannot solely focus on the health of those subsystems. Instead, system-level signatures that encapsulate the complex system interactions are needed. In this work, we present the entropy-scale (ES) and entropy-resolution (ER) system-level signatures that are both computationally tractable and encapsulate many of the salient characteristics of a system. These signatures are based on the change of entropy as a system is observed across different resolutions and scales. We demonstrate the use of the ES and ER signatures on artificial data streams and simple dynamical systems and show that they allow the unambiguous clustering of many types of systems, and therefore are good indicators of system health. We then show how these signatures can be applied to graphical data as well as data strings by using a simple ""graph-walking"" method. This method extracts a data stream from a graphical system representation (e.g., fault tree, software call graph) that conserves the properties of the graph. Finally, we apply these signatures to the analysis of software packages, and show that they provide significantly better correlation with risk markers than many standard metrics. 
These results indicate that proper system level signatures, coupled with detailed component-level analysis enable the automatic detection of potentially hazardous subsystem interactions in complex systems before they lead to system deterioration or failures",2005,0, 2867,Validation and verification of prognostic and health management technologies,"Impact Technologies and the Georgia Institute of Technology are developing a Web-based software application that will provide JSF (F-35) system suppliers with a comprehensive set of PHM verification and validation (V&V) resources which will include: standards and definitions, V&V metrics for detection, diagnosis, and prognosis, access to costly seeded fault data sets and example implementations, a collaborative user forum for the exchange of information, and an automated tool for impartially evaluating the performance and effectiveness of PHM technologies. This paper presents the development of the prototype software product to illustrate the feasibility of the techniques, methodologies, and approaches needed to verify and validate PHM capabilities. A team of JSF system suppliers has been assembled to contribute, provide feedback and make recommendations to the product under development. The approach being pursued for assessing the overall PHM system accuracy is to quantify the associated uncertainties at each of the individual levels of a PHM system, and build up the accumulated inaccuracies as information is processed through the PHM architecture",2005,0, 2868,Capturing processor architectures from protocol processing applications: a case study,"We present a case study in finding optimized processor architectures for a given protocol processing application. The process involves application analysis, hardware/software partitioning and optimization, and evaluation of design quality through simulations, estimations and synthesis. The case study was targeted at processing key IPv6 routing functions at 200 MHz using 0.18 μm CMOS technology. A comparison to an implementation on a commercial processor revealed that the captured architectures provided similar or better performance. Especially checksum calculation was efficient in the captured architectures.",2005,0, 2869,Application of General Perception-Based QoS Model to Find Providers' Responsibilities. Case Study: User Perceived Web Service Performance.,"This paper presents a comprehensive model intended to analyze quality of service in telecommunications services and its causes. Although many works have been published in this area, both from a technical viewpoint as well as taking into consideration subjective concerns, they have not resulted in a unique methodology to assess the experienced quality. While most of the studies consider the quality of service only from a technical and end-to-end point of view, we try to analyze quality of service as a general gauge of final users' satisfaction. The proposed model allows us to estimate the quality experienced by end users, while offering detailed results regarding the responsibility of the different agents involved in the service provision. Once we overview the most significant elements of the model, an in-depth analytical study is detailed. 
Finally, we illustrate a practical study for Web browsing service in order to validate the theoretical model",2005,0, 2870,The improvement on the measuring precision of detecting fault composite insulators by using electric field mapping,"According to the statistics of the fault composite insulators, most of the defects take place at the high voltage (HV) end of the insulators. At present, the minimum defect length detected possibly at HV end of the insulators is about 7 cm by using electric field mapping device, which could not indicate the defects located between the last shed and the HV electrode. Therefore, it is important to improve the measuring precision that is suitable for indicating the defects less than 7 cm in inspecting the fault composite insulators based on electric field mapping device. In order to enhance the measuring precision of the device, we analyzed the electric field distribution along with an insulator by using the commercial software ANSYS. We found that a 5 cm defect can be found if we collect two to three electric field data between the two sheds. Therefore, we added a photoelectric cell array to trigger the device for collecting more data between the two sheds. The tests were conducted in our laboratory by using our new device. The results from our experiments show that the sensitivity of detecting the defects is increased and our new device can indicate the defects less than 5 cm at the HV end without grading rings.",2005,0, 2871,Generalized performance management of multi-class real-time imprecise data services,"The intricacy of real-time data service management increases mainly due to the emergence of applications operating in open and unpredictable environments, increases in software complexity, and need for performance guarantees. In this paper we propose an approach for managing the quality of service of real-time databases that provide imprecise and differentiated services, and that operate in unpredictable environments. Transactions are classified into service classes according to their level of importance. Transactions within each service class are further classified into subclasses based on their quality of service requirements. In this way transactions are explicitly differentiated according to their importance and quality of service requests. The performance evaluation shows that during overloads the most important transactions are guaranteed to meet their deadlines and that reliable quality of service is provided even in the face of varying load and execution time estimation errors",2005,0, 2872,Software reuse metrics for object-oriented systems,"The importance of software measurement is increasing leading to development of new measurement techniques. Reusing existing software components is a key feature in increasing software productivity. It is one of the key elements in object-oriented programming, which reduces the cost and increases the quality of the software. An important feature of C++ called templates support generic programming, which allows the programmer to develop reusable software modules such as functions, classes, etc. The need for software reusability metrics is particularly acute for an organization in order to measure the degree of generic programming included in the form of templates in code. This research addresses this need and introduces a new set of metrics for object-oriented software. 
Two metrics are proposed for measuring the amount of genericity included in the code and then analytically evaluated against Weyuker's set of nine axioms. This set of metrics is then applied to standard projects and accordingly ways in which project managers can use these metrics are suggested.",2005,0, 2873,A quantitative supplement to the definition of software quality,"This paper proposes a new quantitative definition for software quality. The definition is based on the Taguchi philosophy for assessing and improving the quality of manufacturing processes. The Taguchi approach, originally developed for manufacturing processes, defines quality in terms of ""loss imparted to society"" by a product after delivery of the product to the end user. To facilitate the use of the Taguchi definition, several ""loss functions"" have been developed. These loss functions allow quality to be quantitatively measured in monetary values (e.g. US dollars). To illustrate the application of the Taguchi definition to a software product, examples that utilize some of the loss functions are presented. The proposed definition of software quality shows good correlation to other popular qualitative and quantitative definitions for software quality.",2005,0, 2874,Design of opportunity tree framework for effective process improvement based on quantitative project performance,"Nowadays the IT industry strives to improve the software process for marketing and financial benefits. For efficient process improvement, work performance should be enhanced in line with the organization's vision by identifying weaknesses for improvement and risks with process assessment results and then mapping them in the software development environment. According to the organization's vision, plans should be developed for marketing and financial strategic objectives. For each plan, improvement strategies should be developed for each work performance unit such as quality, delivery, cycle time, and waste. Process attributes in each unit should be identified and improvement methods shall be determined for them. In order to suggest a PPM (project performing measure) model to quantitatively measure an organization's project performing capability and make an optimal decision for process improvement, this paper statistically analyzes SPICE assessment results of 2,392 weaknesses for improvement by process for 49 appraisals and 476 processes, which were assessed through KASPA (Korea Association of Software Process Assessors) from 1999 to 2004, and then builds an SEF (SPICE experience factory). It also presents scores on project performing capability and improvement effects by level, and presents weaknesses for improvement by priority in the performance unit by level. Finally, this paper suggests an OTF (opportunity tree framework) model to show optimal process improvement strategies.",2005,0, 2875,Design of SPICE experience factory model for accumulation and utilization of process assessment experience,"With growing interest in software process improvement (SPI), many companies are introducing international process models and standards. SPICE is the most widely used process assessment model in SPI work today. In the process of introducing and applying SPICE, practical experience contributes to enhancing project performance. The experience helps people to make decisions under uncertainty, and to find better compromises. This paper suggests a SPICE experience factory (SEF) model to use SPICE assessment experience. 
For this, we collected SPICE assessment results which were conducted in Korea from 1999 to 2004. The collected data does not only contain rating information but also specifies strengths and improvement point for each assessed company and its process. To use this assessment result more efficiently, root words were derived from each result items. And root words were classified into four: 1) measurement, 2) work product, 3) process performance, and 4) process definition and deployment. Database was designed and constructed to save all analyzed data in forms of root words. Database also was designed to efficiently search information the organization needs by strength/improvement point, or root word for each level. This paper describes procedures of SEF model and presents methods to utilize it. By using the proposed SEF model, even organizations which plan to undergo SPICE assessment for the first time can establish the optimal improvement strategies.",2005,0, 2876,Automated testing of multiple redundant electrohydraulic servo-actuators and their test sets,"Testing is an important activity in the design and development of any aircraft. Testing processes can be automated to reduce the diagnostic times and to improve accuracy, which results in faster detection of faults. Thus, higher throughput for testing the unit under test (UUT) is achieved. This paper is developed for automated testing of high performance, multiple redundant electro-hydraulic servo-actuators and their test sets through software with the help of data acquisition cards. Actuator test set (ATS) is special to type test equipment (STTE) used for testing the servo-actuators of a typical fly-by-wire (FBW) aircraft. At first, the testing procedure of ATS is automated before testing the servo-actuators by interfacing personal computer (PC) with ATS through digital to analog (D/A) and analog to digital (A/D) converters and then implementing pre-flight built-in-test (PBIT) algorithm on the servo-actuators through ATS which makes the evaluation and analysis of the performance of the servo-actuators easy and simple.",2005,0, 2877,Process diagnosis via electrical-wafer-sorting maps classification,"The commonality analysis is a proven tool for fault detection in semiconductor manufacturing. This methodology extracts subsets of production lots from all the available data. Then, data mining techniques are used only on the selected data. This approach loses part of the available information and does not discriminate among the lots. The new methodology performance the automatic classification of the electrical wafer test maps in order to identify the classes of failure present in the production lots. Subsequently, the proposed procedure uses the process history of each wafer to create a list of the root cause candidates. This methodology is the core of the software tool ACID which is currently used for process diagnosis at the Agrate site of the ST Microelectronics. A real analysis is presented.",2005,0, 2878,Congestion targeted reduction of quality of service DDoS attacking and defense scheme in mobile ad hoc networks,"In the multimedia applications over the mobile ad hoc networks, the goodput and delay performance of UDP packets is very sensitive to the congestion targeted DDoS attacking. In this paper, we analyze this type of attacking in details for the first time to our knowledge. 
We figure out the principles of the attacking from the analysis of network capacity and classify the attacking into four categories: pulsing attacking, round robin attacking, self-whisper attacking and flooding attacking. The defense schemes are also proposed, which includes the detection of RTS/CTS packets, signal interference frequency and retransmission time and response stage with ECN marking mechanism. Through the ns2 simulation, the basic attacking scenario is discussed and the results show that the attacking leads to the great jitter of the goodput and delay in pulsing attacking mode. The delay increasing (up to 110 times in five attacking flows) and goodput decreasing (to 77.42% in 5 attacking flows) is obvious with the addition of attacking flows.",2005,0, 2879,Estimating the costs of a reengineering project,"Accurate estimation of project costs is an essential prerequisite to making a reengineering project. Existing systems are usually reengineered because it is cheaper to reengineer them than to redevelop or to replace them. However, to make this decision, management must know what the reengineering will cost. This contribution describes an eight step tool supported process for calculating the time and the costs required to reengineer an existing system. The process is derived from the author's 20 year experience in estimating reengineering projects and has been validated by several real life field experiments in which it has been refined and calibrated",2005,0, 2880,An experimental study of soft errors in microprocessors,"The issue of soft errors is an important emerging concern in the design and implementation of future microprocessors. The authors examine the impact of soft errors on two different microarchitectures: a DLX processor for embedded applications and a high-performance alpha processor. The results contrast impact of soft errors on combinational and sequential logic, identify the most vulnerable units, and assess soft error impact on the application.",2005,0, 2881,"TRUSS: a reliable, scalable server architecture","Traditional techniques that mainframes use to increase reliability -special hardware or custom software - are incompatible with commodity server requirements. The Total Reliability Using Scalable Servers (TRUSS) architecture, developed at Carnegie Mellon, aims to bring reliability to commodity servers. TRUSS features a distributed shared-memory (DSM) multiprocessor that incorporates computation and memory storage redundancy to detect and recover from any single point of transient or permanent failure. Because its underlying DSM architecture presents the familiar shared-memory programming model, TRUSS requires no changes to existing applications and only minor modifications to the operating system to support error recovery.",2005,0, 2882,Determining inspection cost-effectiveness by combining project data and expert opinion,"There is a general agreement among software engineering practitioners that software inspections are an important technique to achieve high software quality at a reasonable cost. However, there are many ways to perform such inspections and many factors that affect their cost-effectiveness. It is therefore important to be able to estimate this cost-effectiveness in order to monitor it, improve it, and convince developers and management that the technology and related investments are worth while. This work proposes a rigorous but practical way to do so. 
In particular, a meaningful model to measure cost-effectiveness is proposed and a method to determine cost-effectiveness by combining project data and expert opinion is described. To demonstrate the feasibility of the proposed approach, the results of a large-scale industrial case study are presented and an initial validation is performed.",2005,0, 2883,The physical topology discovery for switched Ethernet based on connections reasoning technique,"Accurate and up-to-date knowledge of topology serves as the base of a number of network management functions, such as performance monitoring and evaluation, fault detection and location, resource allocation, etc. First, the main achievements in topology discovery for LANs are introduced and the defects of those methods are pointed out in this paper. Then the basic theories for the connections reasoning technique (CRT), based on predicate logic, are proposed. This technique translates topology discovery into a mathematical problem of logical reasoning, so that the topology of a LAN can be studied with mathematical tools. A new algorithm for topology discovery based on this mechanism is proposed in this paper. Compared with current discovery algorithms, this method excels in: 1) making use of AFTs more effectively in topology discovery, so the whole topology can be built up from just a part of the AFTs; and 2) naturally resolving the problem of topology discovery for a multiple-subnet switched domain.",2005,0, 2884,A quality of service mechanism for IEEE 802.11 wireless network based on service differentiation,"This paper introduces an analytical model for wireless local area networks with priority-scheme-based service differentiation. This model can predict station performance from the number of access stations and the traffic type before the wireless channel condition changes. Then a new algorithm, DTCWF (dynamic tuning of contention window with fairness), is proposed to modify protocol options to limit the end-to-end delay and loss rate of high priority traffic and maximize the throughput of other traffic. Simulations validate this model and the comparison between DTCWF, DCF, and EDCA shows that our algorithm can improve quality of service for real-time traffic.",2005,0, 2885,SVM-based approach for instrument fault accomodation in automotive systems,"The paper deals with the use of support vector machines (SVMs) in software-based instrument fault accommodation schemes. A performance comparison between SVMs and artificial neural networks (ANNs) is also reported. As an example, a real case study on an automotive system is presented. The ANN and SVM regression capabilities are employed to accommodate faults that could occur in the main sensors involved in engine operation. The obtained results prove the good behaviour of both tools. Similar performances have been achieved in terms of accuracy.",2005,0, 2886,Performance evaluation of agent-based material handling systems using simulation techniques,"The increasing influence of the global economy is changing the conventional approach to managing manufacturing companies. Real-time reaction to changes in shop-floor operations, quick and quality response in satisfying customer requests, and reconfigurability in both hardware equipment and software modules, are already viewed as essential characteristics for next generation manufacturing systems.
Part of a larger research that employs agent-based modeling techniques in manufacturing planning and control, this work proposes an agent-based material handling system and contrasts the centralized and decentralized scheduling approaches for allocation of material handling operations to the available resources in the system. To justify the use of the decentralized agent-based approach and assess its performance compared to conventional scheduling systems, a series of validation tests and a simulation study are carried out. As illustrated by the preliminary results obtained in the simulation study the decentralized agent-based approach can give good feasible solutions in a short amount of time.",2005,0, 2887,Robust resource allocations in parallel computing systems: model and heuristics,"The resources in parallel computer systems (including heterogeneous clusters) should be allocated to the computational applications in a way that maximizes some system performance measure. However, allocation decisions and associated performance prediction are often based on estimated values of application and system parameters. The actual values of these parameters may differ from the estimates; for example, the estimates may represent only average values, the models used to generate the estimates may have limited accuracy, and there may be changes in the environment. Thus, an important research problem is the development of resource management strategies that can guarantee a particular system performance given such uncertainties. To address this problem, we have designed a model for deriving the degree of robustness of a resource allocation-the maximum amount of collective uncertainty in system parameters within which a user-specified level of system performance (QoS) can be guaranteed. The model is presented and we demonstrate its ability to select the most robust resource allocation from among those that otherwise perform similarly (based oh the primary performance criterion). The model's use in allocation heuristics is also demonstrated. This model is applicable to different types of computing and communication environments, including parallel, distributed, cluster; grid, Internet, embedded, and wireless.",2005,0, 2888,Design of a software distributed shared memory system using an MPI communication layer,"We designed and implemented a software distributed shared memory (DSM) system, SCASH-MPI, by using MPI as the communication layer of the SCASH DSM. With MPI as the communication layer, we could use high-speed networks with several clusters and high portability. Furthermore, SCASH-MPI can use high-speed networks with MPI, which is the most commonly available communication library. On the other hand, existing software DSM systems usually use a dedicated communication layer, TCP, or UDP-Ethernet. SCASH-MPI avoids the need for a large amount of pin-down memory for shared memory use that has limited the applications of the original SCASH. In SCASH-MPI, a thread is created to support remote memory communication using MPI. An experiment on a 4-node Itanium cluster showed that the Laplace Solver benchmark using SCASH-MPI achieves a performance comparable to the original SCASH. Performance degradation is only 6.3% in the NPB BT benchmark Class B test. In SCASH-MPI, page transfer does not start until a page fault is detected. To hide the latency of page transmission, we implemented a prefetch function. 
The latency in BT Class B was reduced by 64% when the prefetch function was used.",2005,0, 2889,Formal verification of dead code elimination in Isabelle/HOL,"Correct compilers are a vital precondition to ensure software correctness. Optimizations are the most error-prone phases in compilers. In this paper, we formally verify dead code elimination (DCE) within the theorem prover Isabelle/HOL. DCE is a popular optimization in compilers which is typically performed on the intermediate representation. In our work, we reformulate the algorithm for DCE so that it is applicable to static single assignment (SSA) form, which is a state-of-the-art intermediate representation in modern compilers, thereby showing that DCE is significantly simpler on SSA form than on classical intermediate representations. Moreover, we formally prove our algorithm correct within the theorem prover Isabelle/HOL. Our program equivalence criterion used in this proof is based on bisimulation and, hence, also captures the case of non-termination adequately. Finally we report on our implementation of this verified DCE algorithm in the industrial-strength scale compiler system.",2005,0, 2890,A Distributed Architecture for Network Performance Measurement and Evaluation System,"Quality of Service (QoS) has now become a central issue in network design and application. This article describes a distributed architecture for linking geographically distributed network performance measurement and evaluation system (NPMES) based on in-depth research on the key techniques of network QoS. The prototype implementation of NPMES and the performance evaluation criteria based on performance aggregation are carefully introduced. Our contribution is presenting a new performance evaluation criterion based on performance aggregation to assess QoS. Experiment results indicate the scalable NPMES can perform real-time detection of the network performance of ISPs from the end user’s perspective. The performance aggregation approach is a brand-new idea, and of definite practicability.",2005,0, 2891,Cost and response time simulation for Web-based applications on mobile channels,"When considering the addition of a mobile presentation channel to an existing Web-based application, a key question that has to be answered even before development begins is how the mobile channel's characteristics will impact the user experience and the cost of using the application. If either of these factors is outside acceptable limits, economic considerations may forbid adding the channel, even if it would be feasible from a purely technical perspective. Both of these factors depend considerably on two metrics: the time required to transmit data over the mobile network, and the volume transmitted. The PETTICOAT method presented in this paper uses the dialog flow model and Web server log files of an existing application to identify typical interaction sequences and to compile volume statistics, which are then run through a tool that simulates the volume and time that would be incurred by executing the interaction sequences on a mobile channel. From the simulated volume and time data, we can then calculate the cost of accessing the application on a mobile channel.",2005,0, 2892,A method of generating massive virtual clients and model-based performance test,"Testing the performance of a server that handles massive connections requires generating massive virtual client connections and modeling realistic traffic.
In this paper, we propose a novel approach to generate massive virtual clients and realistic traffic. Our approach exploits the Windows I/O completion port (IOCP), which is the Windows NT operating system support for developing a scalable, high-throughput server, and model-based testing scenarios. We describe implementation details of the proposed approach. Through analysis and experiments, we prove that the proposed method can predict and evaluate performance data more accurately in a cost-effective way.",2005,0, 2893,Principles of timing anomalies in superscalar processors,"The counter-intuitive timing behavior of certain features in superscalar processors that cause severe problems for existing worst-case execution time analysis (WCET) methods is called timing anomalies. In this paper, we identify structural sources potentially causing timing anomalies in superscalar pipelines. We provide examples for cases where timing anomalies can arise in much simpler hardware architectures than commonly supposed (i.e., even in hardware containing only in-order functional units). We elaborate the general principle behind timing anomalies and propose a general criterion (resource allocation criterion) that provides a necessary (but not sufficient) condition for the occurrence of timing anomalies in a processor. This principle allows one to state the absence of timing anomalies for a specific combination of hardware and software and thus forms a solid theoretic foundation for the time-predictable execution of real-time software on complex processor hardware.",2005,0, 2894,A data mining-based framework for grid workflow management,"In this paper we investigate the exploitation of data mining techniques to analyze data coming from the enactment of workflow-based processes in a service-oriented grid infrastructure. The extracted knowledge allows users to better comprehend the behavior of the enacted processes, and can be profitably exploited to provide advanced support to several phases in the life-cycle of workflow processes, including (re-)design, matchmaking, scheduling and performance monitoring. For this purpose, we focus on recent data mining techniques specifically aimed at enabling refined analyses of workflow executions. Moreover, we introduce a comprehensive system architecture that supports the management of grid workflows by fully taking advantage of such mining techniques.",2005,0, 2895,Towards a metamorphic testing methodology for service-oriented software applications,"Testing applications in service-oriented architecture (SOA) environments needs to deal with issues such as communication partners that remain unknown until service discovery, imprecise black-box information about software components, and the potential existence of non-identical implementations of the same service. In this paper, we exploit the benefits of the SOA environments and metamorphic testing (MT) to alleviate the issues. We propose an MT-oriented testing methodology in this paper. It formulates metamorphic services to encapsulate services as well as the implementations of metamorphic relations. Test cases for the unit test phase are proposed to generate follow-up test cases for the integration test phase. The metamorphic services invoke relevant services to execute test cases and use their metamorphic relations to detect failures.
It has potentials to shift the testing effort from the construction of the integration test sets to the development of metamorphic relations.",2005,0, 2896,Energy/power estimation for LDPC decoders in software radio systems,"Low-density parity-check (LDPC) codes are advanced error-correcting codes with performance approaching the Shannon limit. Although many LDPC code decoding algorithms have been proposed, no detailed comparison in energy/power consumption had ever been reported. This paper presents the performance and energy/power consumption trade-off for different LDPC decoding algorithms and proposes an efficient method to estimate energy consumption. We also propose a joint power management scheme for transmitter and receiver to save receiver energy while maintaining same communication quality by properly delivering more transmit power.",2005,0, 2897,Analyzing software quality with limited fault-proneness defect data,"Assuring whether the desired software quality and reliability is met for a project is as important as delivering it within scheduled budget and time. This is especially vital for high-assurance software systems where software failures can have severe consequences. To achieve the desired software quality, practitioners utilize software quality models to identify high-risk program modules: e.g., software quality classification models are built using training data consisting of software measurements and fault-proneness data from previous development experiences similar to the project currently under-development. However, various practical issues can limit availability of fault-proneness data for all modules in the training data, leading to the data consisting of many modules with no fault-proneness data, i.e., unlabeled data. To address this problem, we propose a novel semi-supervised clustering scheme for software quality analysis with limited fault-proneness data. It is a constraint-based semi-supervised clustering scheme based on the k-means algorithm. The proposed approach is investigated with software measurement data of two NASA software projects, JM1 and KC2. Empirical results validate the promise of our semi-supervised clustering technique for software quality modeling and analysis in the presence of limited defect data. Additionally, the approach provides some valuable insight into the characteristics of certain program modules that remain unlabeled subsequent to our semi-supervised clustering analysis.",2005,0, 2898,Improved Predictive Current Controlled PWM for Single-Phase Grid-Connected Voltage Source Inverters,"Inverter-based distributed generators (DG) must meet the power quality requirements set by interconnection standards such as IEEE Standard 1547 before DGs are allowed to interconnect with existing electric power systems. The power quality is highly dependent on the control strategies of the DG inverters. Traditional predictive current controller can precisely control the load current with low distortions, however, has a poor performance under component parameter variations. An improved predictive current controller has been developed by the authors for single-phase grid-connected voltage source inverters (VSI). Aiming to overcoming the drawbacks of the traditional predictive controller, a scheme for improving the robustness of inverter system is proposed along with a dual-timer control strategy and a software phase-lock-loop (PLL). 
The controller is designed not only to minimize the control error introduced by the control delay but also to provide a faster response for over-current protection. The simulation and experiment results show that the improved predictive controller has a superior performance to the traditional predictive controller, particularly under parameter variations. The single-phase grid-connected VSI implemented with the proposed predictive controller has shown very low current THD in both laboratory tests and in field operation of a small wind turbine system",2005,0, 2899,Frequency-Adjustable Positive Sequence Detector for Power Conditioning Applications,"This paper proposes a novel and simple positive sequence detector (PSD), which is inherently self-adjustable to fundamental frequency deviations by means of a software-based PLL (phase locked loop). Since the proposed positive sequence detector is not based on Fortescue's classical decomposition and no special input filtering is needed, its dynamic response may be as fast as one fundamental cycle. The digital PLL ensures that the positive sequence components can be calculated even under distorted waveform conditions and fundamental frequency deviations. For the purpose of validating the proposed models, the positive sequence detector has been implemented in a PC-based power quality monitor and experimental results illustrate its good performance. The PSD algorithm has also been evaluated in the control loop of a series active filter and simulation results demonstrate its effectiveness in a closed-loop system. Moreover, considering single-phase applications, this paper also proposes a general single-phase PLL and a fundamental wave detector (FWD) immune to frequency variations and waveform distortions",2005,0, 2900,Improved Current Controller Based on SVPWM for Three-phase Grid-connected Voltage Source Inverters,"Space vector pulse-width modulation (SVPWM) has been widely employed for the current control of three-phase voltage source inverters (VSI). However, in grid-connected distributed generation (DG) systems, SVPWM also brings drawbacks to current controllers, such as the compromised output current due to the grid harmonic disturbance and nonlinearity of the system, the lack of inherent over-current protection, etc. In this paper, an improved current controller has been developed for three-phase grid-connected VSIs. A grid harmonic feed-forward compensation method is employed to depress the grid disturbance. Based on a dual-timer sampling scheme, a software predictor and filter is proposed for both the current feedback loop and grid voltage feed-forward loop to eliminate the system nonlinearity due to the control delay. Both simulation and experimental results have verified the superior performance of the proposed current controller",2005,0, 2901,Asymptotic Performance of a Multichart CUSUM Test Under False Alarm Probability Constraint,"Traditionally the false alarm rate in change point detection problems is measured by the mean time to false detection (or between false alarms). The large values of the mean time to false alarm, however, do not generally guarantee small values of the false alarm probability in a fixed time interval for any possible location of this interval. In this paper we consider a multichannel (multi-population) change point detection problem under a non-traditional false alarm probability constraint, which is desirable for a variety of applications. 
It is shown that in the multichart CUSUM test this constraint is easy to control. Furthermore, the proposed multichart CUSUM test is shown to be uniformly asymptotically optimal when the false alarm probability is small: it minimizes an average detection delay, or more generally, any positive moment of the stopping time distribution for any point of change.",2005,0, 2902,JTAG-based vector and chain management for system test,"We present an embedded boundary-scan test vector management solution that ensures the correct version of test vectors is applied to the unit under test (UUT). This new vector management approach leverages the system-level boundary-scan multi-drop architecture employed in some high availability electronic systems. Compared to previous methods that do not use system-level boundary-scan resources for vector management, this new approach ensures that the correct boundary-scan vectors are retrieved from the UUT without affecting its operation and requires minimal UUT functionality to execute. The performance and effectiveness of this technique are shown using an actual design example that has been realized in a new product",2005,0, 2903,Computational intelligence based testing for semiconductor measurement systems,"This paper describes a computational intelligence-based software configuration implemented on semiconductor automatic test equipment (ATE) and how we can improve our design (e.g. memory test chip) based on such a method. The purpose of this unique software configuration incorporating neural network, genetic-algorithm and other artificial intelligence technologies is to enhance ATE capability and efficiency by providing an intelligent interface for a variety of functions that are controlled or monitored by the software. This includes automated and user-directed control of the ATE and a diagnostic strategy to streamline test sequences and specific combinations of test conditions through the use of advanced diagnostic strategies. Such methods can achieve greater accuracy in failure diagnosis and fault prediction, and improve confidence in circuit performance testing that results in the determination of a DUT (device under test) status",2005,0, 2904,The case for outsourcing DFT,"The author discusses outsourcing analog/mixed-signal DFT. At present we still lack a ""SAF"" metric for measuring analog IC fault coverage, as most analog faults that are found by testing are of a parametric variety, and cannot be measured or scored (as in the SAF coverage grade) by using Boolean techniques. To analyze analog and mixed-signal (A/MS) logic for testability, one has to know what the analog failures are that need to be detected, what the capability of the test equipment will be for these measurements, what the error or repeatability will be, and what the trade-off is going to be between increased test accuracy and test time",2005,0, 2905,Application of a Robust and Efficient ICP Algorithm for Fitting a Deformable 3D Human Torso Model to Noisy Data,"We investigate the use of an iterative closest point (ICP) algorithm in the alignment of a point distribution model (PDM) of 3D human female torsos to sample female torso data. An approximate k-d tree procedure for efficient ICP is tested to assess whether it improves the speed of the alignment process. The use of different error norms, namely L1 and L2,
are compared to ascertain if either offers an advantage in terms of convergence and in the quality of the final fit when the sample data is clean, noisy or has some data missing. It is found that the performance of the ICP algorithm used is improved in both speed of convergence and accuracy of fit through the combined use of an approximate and exact k-d tree search procedure and with the minimisation of the L1 norm, even when up to 50% of the data is noisy or up to 25% is missing. We demonstrate the use of this algorithm in providing, via a fitted torso PDM, smooth surfaces for noisy torso data and valid data points for torsos with missing data.",2005,0, 2906,Performance Model Building of Pervasive Computing,"Performance model building is essential to predict the ability of an application to satisfy given levels of performance or to support the search for viable alternatives. Using automated methods of model building is becoming of increasing interest to software developers who have neither the skills nor the time to do it manually. This is particularly relevant in pervasive computing, where the large number of software and hardware components requires models of so large a size that using traditional manual methods of model building would be error prone and time consuming. This paper deals with an automated method to build performance models of pervasive computing applications, which require the integration of multiple technologies, including software layers, hardware platforms and wired/wireless networks. The considered performance models are of extended queueing network (EQN) type. The method is based on a procedure that receives as input the UML model of the application to yield as output the complete EQN model, which can then be evaluated by use of any evaluation tool.",2005,0, 2907,New approach for selfish nodes detection in mobile ad hoc networks,"A mobile ad hoc network (MANET) is a temporary infrastructureless network, formed by a set of mobile hosts that dynamically establish their own network on the fly without relying on any central administration. Mobile hosts used in a MANET have to ensure the services that were ensured by the powerful fixed infrastructure in traditional networks; packet forwarding is one of these services. The resource limitation of nodes used in a MANET, particularly in energy supply, along with the multi-hop nature of this network, may cause new phenomena which do not exist in traditional networks. To save its energy a node may behave selfishly and use the forwarding service of other nodes without correctly forwarding packets for them. This deviation from the correct behavior represents a potential threat against the quality of service (QoS), as well as the service availability, one of the most important security requirements. Some solutions have been recently proposed, but almost all these solutions rely on the watchdog technique, as stated in S. Marti et al. (2000), in their monitoring components, which suffers from many problems. In this paper we propose an approach to mitigate some of these problems, and we assess its performance by simulation.",2005,0, 2908,Spectral gamma detectors for hand-held radioisotope identification devices (RIDs) for nuclear security applications,"The in-situ identification of the isotope causing a radiation alarm at a border crossing is essential for a quick resolution of the case. Commonly, hand-held radioisotope identification devices (RIDs) are used to address this task.
Although a number of such devices are commercially available, there is still a gap between the requirements of the users and what is actually available. Two components are essential for the optimal performance of such devices: i) The quality of the raw data-the gamma spectra, and ii) isotope identification software matching the specifics of a certain gamma detector. In this paper, we investigate new spectral gamma detectors for potential use in hand-held radioisotope identification devices. The standard NaI detector is compared with new scintillation and room temperature semiconductor detectors. The following spectral gamma detectors were included into the study: NaI(Tl), LaCl3(Ce), 6LiI(Eu), CdWO4, hemispheric and coplanar CdZnTe detectors. In the paper basic spectrometric properties are measured and compared in view of their use in RIDs",2005,0, 2909,VRM: a failure-aware grid resource management system,"For resource management in grid environments, advance reservations turned out to be very useful and hence are supported by a variety of grid toolkits. However, failure recovery for such systems has not yet received the attention it deserves. In this paper, we address the problem of remapping reservations to other resources, when the originally selected resource fails. Instead of dealing with jobs already running, which usually means checkpointing and migration, our focus is on jobs that are scheduled on the failed resource for a specific future period of time but not started yet. The most critical factor when solving this problem is the estimation of the downtime. We avoid the drawbacks of under- or overestimating the downtime by a dynamic load-based approach that is evaluated by extensive simulations in a grid environment and shows superior performance compared to estimation-based approaches.",2005,0, 2910,Implementation of a digital receiver for DS-CDMA communication systems using HW/SW codesign,"An optimized partitioning algorithm for HW/SW codesign is described and applied to the implementation of a digital RAKE receiver for DS-CDMA on an heterogeneous (hardware/software) platform. The development of this partitioning algorithm aims to gather quality features that have already proved effective when applied separately, but had not been used together yet. The partitioning algorithm is applied to a flexible RAKE architecture able to adapt its number of demodulation fingers to each propagation environment. This RAKE detector uses a serial and iterative computation of the required processing in order to jointly optimize performance and computational power",2005,0, 2911,Novel usage of wavelets as basis in RDNN for telecommunication applications,The paper explores the use of wavelet basis recurrent dynamic neural network (RDNN) to improve the estimation of reliability growth of communication network's software. The presented RDNN handles noise contaminated data and provides enhanced speed and performance of the system as compared with alternate approaches. Nonlinearity of the system is represented by proper selection of the wavelet function. The integrated defect tracking model parameters and the data are fed to the RDNN and the network is trained for optimizing the defect tracking model performance. The designed system will assist the service release management in obtaining more effective risk reduction. Using wavelet basis for RDNN requires only 10 iterations compared to 200 iterations with hump as basis function. 
It also reduces the maximum percentage error from 88% to 7.69% in the expected outputs. This also improves the telecommunication system defect tracking and network deployment",2005,0, 2912,Uncertainty in optical measurement applications: a case study,"The uncertainty related to a measurement is at least as important as the measurement itself. Apart from being able to determine intervals of confidence around the final result within which the true measurement value is expected to lie at a certain level of confidence, the rigorous treatment of uncertainty throughout an algorithm allows one to increase its robustness against disturbing influences and to judge its applicability to a given task. This paper addresses the propagation of uncertainty within a quality control application using image-based sensors. Simulations and real-world results are provided to show the applicability of the proposed application",2005,0, 2913,The use of optimal filters to track parameters of performance models,"Autonomic computer systems react to changes in the system, including failures, load changes, and changed user behaviour. Autonomic control may be based on a performance model of the system and the software, which implies that the model should track changes in the system. A substantial theory of optimal tracking filters has a successful history of application to track parameters while integrating data from a variety of sources, an issue which is also relevant in performance modeling. This work applies extended Kalman filtering to track the parameters of a simple queueing network model, in response to a step change in the parameters. The response of the filter is affected by the way performance measurements are taken, and by the observability of the parameters.",2005,0, 2914,The detector control system of the LHCb RICH detector,"The LHCb experiment at the Large Hadron Collider (LHC) is dedicated to the study of b-quark properties. A key element of the LHCb detector is particle identification, a task performed by the ring imaging Cherenkov (RICH) subsystem. Efficient particle identification over the full momentum range of 1 to 100 GeV/c requires an extensive system of detector control and monitoring. The RICH detector control system (DCS) monitors environmental parameters which directly affect RICH performance, such as temperature, pressure and humidity. Quality of the RICH radiator gas is monitored through the DCS using ultrasound and Fabry-Perot techniques.
The procedures used for their quality assurance are described and the main results presented, showing a low loss rate during assembly of only 7%, half the allowed contingency. The 2004 combined test beam is then used to illustrate how the DAQ and offline software are being prepared for ATLAS operation",2005,0, 2916,Reduced parallel anode readout for 256 ch flat panel PMT,"SPECT and PET need good pixel identification to obtain high quality images. This is why new generations of PSPMTs with anode arrays of smaller step have been developed, up to the recent flat panel PMT H9500 with a 2"" square area, a 256-anode array, and a 3 mm individual pitch. The realization of electronic chains with so high a number of channels demands a dedicated electronics readout with high cost and high management difficulty. In this work we propose a new method of anode number reduction in parallel readout, limiting the total chain number to 64. Starting from the evidence that the event charge distribution is always contained inside a portion of the FP PMT anodic plane, we assume that only a quarter of the anodes are involved in the detection. Our approach consists of virtually dividing the 16×16 anodes into 4 arrays of 64 anodes each. Once the charge distribution is collected by the 4 anodic planes, each individual anodic charge is projected on one plane. Physically, each i,j-element of one quadrant is associated with the corresponding i,j-element of the other three matrices. In terms of hardware, this is simply realized by connecting to each other the set of four i,j-anodes of the 4 quadrants. In this work image reconstruction software has been developed and tested using measured charge distributions collected by the 256-anode Hamamatsu FP PMT. The final results, in terms of spatial resolution and position linearity, are in agreement with those collected using the total number of anodes",2005,0, 2917,Simulation tools for damping in high frequency resonators,"This paper presents the development of HiQLab, a simulation tool to compute the effect of damping in high frequency resonators. Existing simulation tools allow designers to compute resonant frequencies but few tools provide estimates of damping, which is crucial in evaluating the performance of such devices. In the current code, two damping mechanisms, thermoelastic damping and anchor loss, have been implemented. Thermoelastic damping results from irreversible heat flow due to mechanically-driven temperature gradients, while anchor loss occurs when high-frequency mechanical waves radiate away from the resonator and into the substrate. Our finite-element simulation tool discretizes PDE models of both phenomena, and evaluates the quality factor (Q), a measure of damping in the system, with specialized eigencomputations and model reduction techniques. The core functions of the tool are written in C++ for performance. Interfaces are in Lua and MATLAB, which give users access to powerful visualization and pre- and postprocessing capabilities",2005,0, 2918,Dynamic characterization study of flip chip ball grid array (FCBGA) on peripheral component interconnect (PCI) board application,"This paper outlines and discusses the new mechanical characterization metrologies applied to the PCI board envelope. The dynamic responses of the PCI board were monitored and characterized using accelerometers and strain gauges. PCI board performance was analyzed to differentiate its high-risk areas through analysis of board strain responses to solder joint cracks.
Board ""strain states"" analysis methodology was introduced to provide immediate accurate board bending modes and deflection associated with experimental results. Using this methodology, it eases the board bend mode analysis which can capture the board strain performance limit at the same time. In addition, high speed camera (HSC) tool was incorporated into the evaluation to understand the boards bend history under shock test. This allows better view of the bending moment and matching to defect locations for corrective action implementation. Detailed failure analysis mapping of solder joint crack percentages was successfully gathered to support those findings. Key influences, such as thermal/mechanical enabling preload masses and shock input profiles on solder joint crack severity were conducted as well to understand the potential risk modulators for SJR performance. Furthermore, commercial simulation software analysis tool was applied to correlate the board's bend modes and predict the high risk solder joint location; which is important for product enabling solutions design. As a result, a system level stiffener solution was designed. Hence, with this characterization and validation concept, a practical stiffener solution for PCI application was validated through a special case study to improve the board SJR performance in its use condition.",2005,0, 2919,Design Phase Analysis of Software Reliability Using Aspect-Oriented Programming,"Software system may have various nonfunctional requirements such as reliability, security, performance and schedulability. If we can predict how well the system will meet such requirements at an early phase of software development, we can significantly save the total development cost and time. Among non-functional requirements, reliability is commonly required as the essential property of the system being developed. Therefore, many analysis methods have been proposed but methods that can be practically performed in the design phase are rare. In this paper we show how design-level aspects can be used to separate reliability concerns from essential functional concerns during software design. The aspect-oriented design technique described in this paper allows one to independently specify fault tolerance and essential functional concerns, and then weave the specifications to produce a design model that reflects both concerns. We illustrate our approach using an example.",2005,0, 2920,A new wavelet-based method for detection of high impedance faults,"Detecting high impedance faults is one of the challenging issues for electrical engineers. Over-current relays can only detect some of the high impedance faults. Distance relays are unable to detect faults with impedance over 100 Omega. In this paper, by using an accurate model for high impedance faults, a new wavelet-based method is presented. The proposed method, which employs a 3 level neural network system, can successfully differentiate high impedance faults from other transients. The paper also thoroughly analyzes the effect of choice of mother wavelet on the detection performance. Simulation results which are carried out using PSCAD/EMTDC software are summarized",2005,0, 2921,Software tools to facilitate advanced network operation,"Economical trends force electricity companies to maximize the utilization of their networks. As a result margins decrease. Within these decreased margins the quality of supply must improve. Disturbances have to be avoided and, if they occur, solved as quickly as possible. 
This increases the working stress for the network operators. This paper describes the development and application of four software tools that facilitate and simplify the work of network operators. They concern fault location, safe coupling of MV networks, optimal transmission network operation and pseudo state estimation. These tools are based on standard load flow and short circuit analysis methods, where new additions have been made. All applications communicate with various existing applications and devices. They combine on-line measured values with power system database information. The software is equipped with a user friendly interface for grid operators. The applications were developed in cooperation between power utility Nuon and software developer phase to phase and are already in use or close to implementation",2005,0, 2922,Spare Line Borrowing Technique for Distributed Memory Cores in SoC,"In this paper, a new architecture of distributed embedded memory cores for SoC is proposed and an effective memory repair method by using the proposed spare line borrowing (software-driven reconfiguration) technique is investigated. It is known that faulty cells in memory core show spatial locality, also known as fault clustering. This physical phenomenon tends to occur more often as deep submicron technology advances due to defects that span multiple circuit elements and sophisticated circuit design. The combination of new architecture & repair method proposed in this paper ensures fault tolerance enhancement in SoC, especially in case of fault clustering. This fault tolerance enhancement is obtained through optimal redundancy utilization: spare redundancy in a fault-resistant memory core is used to fix the fault in a fault-prone memory core. The effect of spare line borrowing technique on the reliability of distributed memory cores is analyzed through modeling and extensive parametric simulation",2005,0, 2923,A new Phase Locked Loop Strategy for Power Quality Instruments Synchronisation,"Power quality instrumentation requires the accurate fundamental frequency estimation and the signal synchronization, even in presence of disturbances. In the paper the authors present an innovative synchronization technique, based on a single phase software PLL. To evaluate how the synchronization technique is adversely affected by the application of stationary and transient disturbing influences, appropriate testing conditions have been developed, taking into account the requirements of the in-force standards. In the paper the proposed technique is described and PLL performances in presence of stationary and transient disturbances are presented",2005,0, 2924,Client profiling for QoS-based Web service recommendation,"Quality of service (QoS) is one of the important factors when choosing a Web service (WS) provided by a particular WS provider. In this paper we focus on performance, which is one of the important QoS attributes. Performance can be evaluated from the client-side as well as the server-side. As clients are connected through heterogeneous network environments, different clients can experience different performance although they are connected to the same WS provider. Through client grouping and profiling, performance experienced by prospective clients can be estimated based on other clients' historical performance. 
In this paper we present the common factors for client grouping and profiling, and present the results of the experiments we carried out to understand the effect of these factors on the performance experienced by WS clients. The experimental results show the importance of client grouping and profiling for WS recommendation and composition.",2005,0, 2925,Increasing the efficiency of fault detection in modified code,"Many software systems are developed in a number of consecutive releases. Each new release does not only add new code but also modifies already existing one. In this study we have shown that the modified code can be an important source of faults. The faults are widely recognized as one of the major cost drivers in software projects. Therefore we look for methods of improving fault detection in the modified code. We suggest and evaluate a number of prediction models for increasing the efficiency of fault detection. We evaluate them against the theoretical best model, a simple model based on size, as well as against analyzing the code in a random order (not using any model). We find that using our models provides a significant improvement both over not using any model at all and using the simple model based on the class size. The gain offered by the models corresponds to 30% to 60% of the theoretical maximum.",2005,0, 2926,Identifying error proneness in path strata with genetic algorithms,"In earlier work we have demonstrated that GA can successfully identify error prone paths that have been weighted according to our weighting scheme. In this paper we investigate whether the depth of strata in the software affects the performance of the GA. Our experiments show that the GA performance changes throughout the paths. It performs better in the upper, less in the middle and best in the lower layer of the paths. Although various methods have been applied for detecting and reducing errors in software, little research has been done into partitioning a system into smaller, error prone domains for software quality assurance. To identify error proneness in software paths is important because by identifying them, they can be given priority in code inspections or testing. Our experiments observe to what extent the GA identifies errors seeded into paths using several error seeding strategies. We have compared our GA performance with random path selection.",2005,0, 2927,An integrated solution for testing and analyzing Java applications in an industrial setting,"Testing a large-scale, real-life commercial software application is a very challenging task due to the constant changes in the software, the involvement of multiple programmers and testers, and a large amount of code. Integrating testing with development can help find program bugs at an earlier stage and hence reduce the overall cost. In this paper, we report our experience on how to apply eXVantage (a tool suite for code coverage testing, debugging, performance profiling, etc.) to a large, complex Java application at the implementation and unit testing phases in Avaya. 
Our results suggest that programmers and testers can benefit from using eXVantage to monitor the testing process, gain confidence in the quality of their software, detect bugs which are otherwise difficult to reveal, and identify performance bottlenecks in terms of which parts of the code are most frequently executed.",2005,0, 2928,Early stage software reliability and design assessment,"In the early developmental stages of software, failure data is not available to determine the reliability of the software, but design assessment is a must at this stage. We propose a model based on reliability block diagrams (RBD) for representing real-world problems and an algorithm for analysis of these models in the early phase of software development. We have named this technique the early reliability analysis technique (ERAT). We have performed several simulations on randomly generated software models to compute reliabilities and coupling parameters. The simulation results show that reliability is a good quality indicator and that coupling can be correlated with system reliability and can be used for system design assessment.",2005,0, 2929,Personal software process (PSP) assistant,"The personal software process (PSP) is a process and performance improvement method aimed at individual software engineers. The use of PSP has been shown to result in benefits such as improved estimation accuracy and reduced defect density of individuals. However, the experience of our institute and of several others is that recording various size and defect data can be onerous, which, in turn, can lead to adoption and data quality problems. This paper describes a system that we have developed that performs automatic size and defect recording, aside from providing facilities for viewing and editing the usual PSP logs and reports. Moreover, the system automatically classifies and ranks defects, and then consolidates schedules and defect lists of individual developers into a schedule and defect library for the developers' team.",2005,0, 2930,Predicting software suitability using a Bayesian belief network,"The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high-risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian belief networks, a machine learning method. This research presents a Bayesian network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.",2005,0, 2931,Bayesian networks modeling for software inspection effectiveness,"Software inspection has been broadly accepted as a cost-effective approach for defect removal during the whole software development lifecycle. To keep inspection under control, it is essential to measure its effectiveness.
As a human-oriented activity, inspection effectiveness depends on many uncertain factors that make such a study a challenging task. Bayesian network modeling is a powerful approach for reasoning under uncertainty and it can describe the inspection procedure well. With this framework, some extensions have been explored in this paper. The number of remaining defects in the software is proposed to be incorporated into the framework, with the expectation of providing more information on the dynamically changing status of the software. In addition, a different approach is adopted to elicit the prior belief of related probability distributions for the network. Sensitivity analysis is developed with the model to locate the factors important to inspection effectiveness.",2005,0, 2932,Application-based metrics for strategic placement of detectors,"The goal of this paper is to provide low-latency detection and prevent error propagation due to value errors. This paper introduces metrics to guide the strategic placement of detectors and evaluates (using fault injection) the coverage provided by ideal detectors embedded at program locations selected using the computed metrics. The computation is represented in the form of a dynamic dependence graph (DDG), a directed-acyclic graph that captures the dynamic dependencies among the values produced during the course of program execution. The DDG is employed to model error propagation in the program and to derive metrics (e.g., value fanout or lifetime) for detector placement. The coverage of the detectors placed is evaluated using fault injections in real programs, including two large SPEC95 integer benchmarks (gcc and perl). Results show that a small number of detectors, strategically placed, can achieve a high degree of detection coverage.",2005,0, 2933,Reliability prediction and assessment of fielded software based on multiple change-point models,"In this paper, we investigate some techniques for reliability prediction and assessment of fielded software. We first review how several existing software reliability growth models based on non-homogeneous Poisson processes (NHPPs) can be readily derived based on a unified theory for NHPP models. Furthermore, based on the unified theory, we can incorporate the concept of multiple change-points into software reliability modeling. Some models are proposed and discussed under both ideal and imperfect debugging conditions. A numerical example using real software failure data is presented in detail and the results show that the proposed models can provide fairly good capability to predict software operational reliability.",2005,0, 2934,Evaluating Web applications testability by combining metrics and analogies,"This paper introduces an approach to describe a Web application through an object-oriented model and to study application testability using a quality model focused on the use of object-oriented metrics and software analogies analysis. The proposed approach uses traditional Web and object-oriented metrics to describe structural properties of Web applications and to analyze them. These metrics are useful to measure some important software attributes, such as complexity, coupling, size, cohesion, reliability, defect density, and so on. Furthermore, the presented quality model uses these object-oriented metrics to describe applications in order to predict some software quality factors (such as test effort, reliability, error proneness, and so on) through an instance-based classification system.
The approach uses a classification system to study software analogies and to define a set of information that is then used as the basis for predicting and evaluating application quality factors. The presented approaches are applied in the WAAT (Web Applications Analysis and Testing) project",2005,0, 2935,An embedded adaptive live video transmission system over GPRS/CDMA network,"The provision of video communication over wireless links is a challenging task because it is difficult to guarantee video quality for video transmission over current wireless networks such as GPRS or CDMA. In this paper, we developed an embedded adaptive live video streaming transmission system for GPRS/CDMA networks based on the MPEG4 video compression standard. This system presents a performance enhancement for real-time video transmission by employing an adaptive sender rate control scheme, which estimates the available channel bandwidth between the sender and receiver, then adjusts the output rate of the encoder side using an R-D framework and a frame-skip control mechanism according to the estimated time-varying channel bandwidth.",2005,0, 2936,Enhancing Internet robustness against malicious flows using active queue management,"Attackers can easily modify the TCP control protocols of host computers to inject malicious flows into the Internet. These malicious flows, including DDoS and worm attack flows, are unresponsive to the congestion control mechanism that is necessary for the equilibrium of the whole Internet. In this paper, a new scheme against large-scale malicious flows is proposed based on the principles of TCP congestion control. The core of the scheme is to implement a new scheduling algorithm named CCU (compare and control unresponsive flows), which is one sort of active queue management (AQM). According to the unresponsive characteristic of malicious flows, the CCU algorithm relies on two processes applied to malicious flows - detection and punishment. The elastic control mechanism for unresponsive flows provides the AQM with high performance and enhances Internet robustness against malicious flows. Network resources can be regulated for the basic quality of service (QoS) demands of legitimate users. The experiments prove that CCU can detect and restrain unresponsive flows more accurately compared to other AQM algorithms.",2005,0, 2937,Managing a project course using Extreme Programming,"Shippensburg University offers an upper division project course in which the students use a variant of Extreme Programming (XP) including: the Planning Game, the Iteration Planning Game, test driven development, stand-up meetings and pair programming. We start the course with two weeks of controlled lab exercises designed to teach the students about test driven development in JUnit/Eclipse and designing for testability (with the humble dialog box design pattern) while practicing pair programming. The rest of our semester is spent in three four-week iterations developing a product for a customer. Our teams are generally large (14-16 students) so that the projects can be large enough to motivate the use of configuration management and defect tracking tools. The requirement of pair programming limits the amount of project work the students can do outside of class, so class time is spent on the projects and teaching is on-demand individual mentoring with lectures/labs inserted as necessary.
One significant challenge in managing this course is tracking individual responsibilities and activities to ensure that all of the students are fully engaged in the project. To accomplish this, we have modified the story and task cards from XP to provide feedback to the students and track individual performance against goals as part of the students' grades. The resulting course has been well received by the students. This paper will describe this course in more detail and assess its effect on students' software engineering background through students' feedback and code metrics",2005,0, 2938,Work in Progress - Measuring the ROI time for Static Analysis,"Static analysis is one method that offers potential for reducing errors in delivered software. Static analysis is currently used to discover buffer overflows and mathematical errors, as well as verifying compliance with documented programming standards. Static analysis is routinely used in safety critical software applications within the avionics and automotive industries. Outside of these applications, static analysis is not a routinely taught method for software development. This paper intends to provide a quantitative measure for evaluating the effectiveness of static analysis as well as presenting results from an academic environment",2005,0, 2939,A Simulation Task to Assess Students' Design Process Skill,"Research has shown that the quality of one's design process is an important ingredient in expertise. Assessing design process skill typically requires a performance assessment in which students are observed (either directly or by videotape) completing a design and assessed using an objective scoring system. This is impractical in course-based assessment. As an alternative, we developed a computer-based simulation task, in which the student respondent ""watches"" a team develop a design (in this instance a software design) and makes recommendations as to how they should proceed. The specific issues assessed by the simulation were drawn from the research literature. For each issue the student is asked to describe, in words, what the team should do next and then asked to choose among alternatives that the ""team"" has generated. Thus, the task can be scored qualitatively and quantitatively. The paper describes the task and its uses in course-based assessment",2005,0, 2940,Work in Progress - Computer Software for Predicting Steadiness of the Students,"The paper presents a study which identifies a series of factors influencing students' steadiness in their option for engineering training and the final aim is to elaborate an IT system for monitoring the quality of educational offer. This aim is reached through a research developed in three stages. Only the first and the second stages were described here. The last one is in work. So, the first stage is materialized in elaborating and validating a questionnaire structured on three dimensions: finding the expectations, diagnosis of initial motivation for initiating students in engineering, specifying identity information and elements of personal history from educational student's experience. The sample is randomly chosen and the students from the research group belong to Technical University ""Gh. Asachi"" Iassy, Romania, attending first, second and third year of study. 
The second stage of the scientific research establishes the relations between the identified expectations, initial motivation of students for engineering training and personal history in educational area on the one hand and students' educational performance on the other hand. Afterwards, the results of the first two stages represents the starting point for planning computer software to predict the steadiness of students in their professional choice",2005,0, 2941,Prostatectomy Evaluation using 3D Visualization and Quantitation,"Prostate cancer is a disease with a long natural history. Differences in survival outcomes as indicators of inappropriate surgery would take decades to appear. Therefore, the evaluation of the excised specimen according to defined parameters provides a more reasonable and timely assessment of surgical quality. There are currently a number of very different surgical approaches. Some uniform guidelines and quality assessment measuring readily available parameters would be desirable to establish a standard for comparison of surgical approaches and for individual surgical performance. In this paper, we present a novel methodology to objectively quantify the assessment process utilizing a 3D reconstructed model for the prostate gland. To this end, we discuss the development of a process employing image reconstruction and analysis techniques to assess the percent of capsule covered by soft tissue. A final goal is to develop software for the purpose of a quality assurance assessment for pathologists and surgeons to evaluate the adequacy/appropriateness of each surgical procedure; laparoscopic versus open perineal or retropubic prostatectomy. Results from applying this technique are presented and discussed",2005,0, 2942,Development of a DSP-Based Power Quality Monitoring Instrumentfor Real-Time Detection of Power Disturbances,"Current trends of power quality monitoring instruments are based on digital signal processors (DSP), which are used to record waveforms and harmonics and comes with software for collecting data and viewing monitoring results. The variations in the DSP based instruments are in the way algorithms that are developed for processing the real-time power quality waveforms. At present, all of the available power quality monitoring instruments are not capable of troubleshooting and diagnosing power quality problems. Therefore, a DSP-based power quality monitoring instrument is proposed for real-time disturbance recognition and source detection. The proposed instrument uses the Texas Instruments TMS320C6711DSP starter kit with a TI ADS8364EVM analog digital converter mounted on the daughter card. The instrument architecture and the software implementation are discussed in the paper. Preliminary experimental results displaying the fast Fourier transform analysis of the real-time voltage signals are included",2005,0, 2943,Voltage Sag Mitigation using NAS Battery-based Standby Power Supply,"Advances in industrial automation and manufacturing systems led to the utilization of advanced systems comprise of computers, adjustable speed drives and various industrial control systems that are sensitive to power quality problem, mainly voltage sags. Voltage sag is the reduction of the voltage at a user position with duration of between one cycle and a few seconds caused by motor starting, short circuits and fast reclosing of circuit breakers that may cause expensive shutdowns to manufacturers. 
This paper proposes a Standby Power Supply (SPS) configuration using Sodium Sulfur (NAS) battery for voltage sag mitigation. An electrical battery model is developed for the NAS battery. Simulation results of the electrical battery model are compared and validated experimentally. In addition, voltage sag detection and power conversion system (PCS) model of the system is also described. Using the proposed NAS-SPS configuration system, power quality mitigation is simulated to observe its performance in protecting an important load. The simulation work described in this paper is carried out using EMTDC/PSCAD software tool. The research work was undertaken by TNB Research and Universiti Tenaga Nasional in collaboration with Tokyo Electric Power Company (TEPCO) in a study on the use of NAS battery-based SPS.",2005,0, 2944,Change Propagation for Assessing Design Quality of Software Architectures,"The study of software architectures is gaining importance due to its role in various aspects of software engineering such as product line engineering, component based software engineering and other emerging paradigms. With the increasing emphasis on design patterns, the traditional practice of ad-hoc software construction is slowly shifting towards pattern-oriented development. Various architectural attributes like error propagation, change propagation, and requirements propagation, provide a wealth of information about software architectures. In this paper, we show that change propagation probability (CP) is helpful and effective in assessing the design quality of software architectures. We study two different architectures (one that employs patterns versus one that does not) for the same application. We also analyze and compare change propagation metric with respect to other coupling-based metrics.",2005,0, 2945,Simulation of partial discharge propagation and location in Abetti winding based on structural data,"Power transformer monitoring as a reliable tool for maintaining purposes of this valuable asset of power systems has always comprised partial discharge offline measurements and online monitoring. The reason lies in non-destructive feature of PD monitoring. Partial discharge monitoring helps to detect incipient insulation faults and prevent insulation failure of power transformers. This paper introduces a software package developed based on structural data of power transformer and discusses the results of the simulation on Abetti winding, which might be considered as a basic layer winding. A hybrid model is used to model the transformer winding, which has been developed by first author. Firstly, winding is modeled by ladder network method to determine model parameters and then multi-conductor transmission line model is utilized to work out voltage and current vectors and study partial discharge propagation as well as its localization. Utilized method of modeling makes it possible to simulate a transformer winding over a frequency range from a few hundred kHz to a few tens of MHz. The results take advantage of accurate modeling method and provide a reasonable interpretation as to PD propagation and location studies",2005,0, 2946,An Improved Algorithm for Deadlock Detection and Resolution in Mobile Agent Systems,"Mobile agent systems have been proved that are the best paradigm for distributed applications. They have potential advantages to provide a convenient, efficient and high performance distributed applications. 
Many solutions for problems in distributed systems such as deadlock detection rely on assumptions such as data location and message passing mechanism and static network topology that could not be applied for mobile agent systems. In this paper an improved distributed deadlock detection and resolution algorithm is proposed. The algorithm is based on Ashfield et al. process. There are some cases in which the original algorithm detects false deadlock or does not detect global deadlocks. The proposed algorithm eliminates the original algorithm deficiencies and improves its performance. It also minimizes the detection agent travels through the communication network. Also it has a major impact on improving performance of the mobile agent systems",2005,0, 2947,UVSD: software for detection of color underwater features,"Underwater Video Spot Detector (UVSD) is a software package designed to analyze underwater video for continuous spatial measurements (path traveled, distance to the bottom, roughness of the surface etc.) Laser beams of known geometry are often used in underwater imagery to estimate the distance to the bottom. This estimation is based on the manual detection of laser spots which is labor intensive and time consuming so usually only a few frames can be processed this way. This allows for spatial measurements on single frames (distance to the bottom, size of objects on the sea-bottom), but not for the whole video transect. We propose algorithms and a software package implementing them for the semi-automatic detection of laser spots throughout a video which can significantly increase the effectiveness of spatial measurements. The algorithm for spot detection is based on the support vector machines approach to artificial intelligence. The user is only required to specify on certain frames the points he or she thinks are laser dots (to train an SVM model), and then this model is used by the program to detect the laser dots on the rest of the video. As a result the precise (precision is only limited by quality of the video) spatial scale is set up for every frame. This can be used to improve video mosaics of the sea-bottom. The temporal correlation between spot movements changes and their shape provides the information about sediment roughness. Simultaneous spot movements indicate changing distance to the bottom; while uncorrelated changes indicate small local bumps. UVSD can be applied to quickly identify and quantify seafloor habitat patches, help visualize habitats and benthic organisms within large-scale landscapes, and estimate transect length and area surveyed along video transects.",2005,0, 2948,Statistical similarity search applied to content-based video copy detection,"Content-based copy detection (CBCD) is one of the emerging multimedia applications for which there is a need of a concerted effort from the database community and the computer vision community. Recent methods based on interest points and local fingerprints have been proposed to perform robust CBCD of images and video. They include two steps: the search of similar fingerprints in the database and a voting strategy that merges all the local results in order to perform a global decision. In most image or video retrieval systems, the search of similar features in the database is performed by a geometrical query in a multidimensional index structure. Recently, the paradigm of approximate k-nearest neighbors query has shown that trading quality for time can be widely profitable in that context. 
In this paper, we introduce a new approximate search paradigm, called Statistical Similarity Search (S3), dedicated to local fingerprints and we describe the original indexing structure we have developed to compute efficiently the corresponding queries. The key-point relates to the distribution of the relevant fingerprints around a query. Since a video query can result from (a combination of) more or less transformations of an original one, we modelize the distribution of the distortion vector between a referenced fingerprint and a candidate one. Experimental results show that these statistical queries allow high performance gains compared to classical e-range queries. By studying the influence of this approximate search on a complete CBCD scheme based on local video fingerprints, we also show that trading quality for time during the search does not degrade seriously the global robustness of the system, even with very large databases including more than 20,000 hours of video.",2005,0, 2949,Collaborative sensing using uncontrolled mobile devices,"This paper considers how uncontrolled mobiles can be used to collaboratively accomplish sensing tasks. Uncontrolled mobiles are mobile devices whose movements cannot be easily controlled for the purpose of achieving a task. Examples include sensors mounted on mobile vehicles of people to monitor air quality and to detect potential airborne nuclear, biological, or chemical agents. We describe an approach for using uncontrolled mobile devices for collaborative sensing. Considering the potentially large number of mobile sensors that may be required to monitor a large geographical area such as a city, a key issue is how to achieve a proper balance between performance and costs. We present analytical results on the rate of information reporting by uncontrolled mobile sensors needed to cover a given geographical area. We also present results from testbed implementations to demonstrate the feasibility of using existing low-cost software technologies and platforms with existing standard protocols for information reporting and retrieval to support a large system of uncontrolled mobile sensors",2005,0, 2950,On-demand overlay networking of collaborative applications,"We propose a new overlay network, called Generic Identifier Network (GIN), for collaborative nodes to share objects with transactions across affiliated organizations by merging the organizational local namespaces upon mutual agreement. Using local namespaces instead of a global namespace can avoid excessive dissemination of organizational information, reduce maintenance costs, and improve robustness against external security attacks. GIN can forward a query with an O(1) latency stretch with high probability and achieve high performance. In the absence of a complete distance map, its heuristic algorithms for self configuration are scalable and efficient. Routing tables are maintained using soft-state mechanisms for fault tolerance and adapting to performance updates of network distances. Thus, GIN has significant new advantages for building an efficient and scalable distributed hash table for modern collaborative applications across organizations",2005,0, 2951,Harmonic calculation software for industrial applications with adjustable speed drives,This paper describes the evaluation of a new harmonic calculation software. By using a combination of a pre-stored database and new interpolation techniques the software can very fast provide the harmonic data on real applications. 
The harmonic results obtained with this software have acceptable precision even with limited input data. The evaluation concludes here that this approach is very practical compared to other advanced harmonic analysis methods. The results are supported by comparisons of calculations and measurements given in an industrial application,2005,0, 2952,Mobility Prediction with Direction Tracking on Dynamic Source Routing,"Mobility prediction is one the most efficient techniques used to calculate lifetime estimation of wireless links (link lifetime-LLT) for a more reliable path selection. To incorporate a mobility prediction model into ad hoc routing protocols, an updating scheme should be used to maintain prediction accuracy especially for those source initiated on-demand routing protocols where LLT values are continuously changing after the route discovery phase. In this paper, a light-weight direction tracking scheme is introduced to enhance performance of the mobility prediction based routing by reassuring the accuracy of LLT used for path selection process. The simulation results show an improvement in terms of average packet latency with minimum impact on number of control messages utilized.",2005,0, 2953,On the Effect of Ontologies on Quality of Web Applications,"The semantic Web can be seen as means to improve qualitative characteristics of Web applications. Ontologies play a key role in the semantic Web and, therefore, are expected to have a profound effect on quality of a Web application. We apply the Quint2 model to predict an impact of an ontology on a number of quality dimensions of a Web application. We estimate that an ontology is likely to significantly improve the functionality and maintainability dimensions. The usability dimension is affected to a lesser extent. We explain the expected increase of quality with improved effectiveness of development process caused primarily by application domain ontologies",2005,0, 2954,A New Allocation Scheme for Parallel Applications with Deadline and Security Constraints on Clusters,"Parallel applications with deadline and security constraints are emerging in various areas like education, information technology, and business. However, conventional job schedulers for clusters generally do not take security requirements of realtime parallel applications into account when making allocation decisions. In this paper, we address the issue of allocating tasks of parallel applications on clusters subject to timing and security constraints in addition to precedence relationships. A task allocation scheme, or TAPADS (task allocation for parallel applications with deadline and security constraints), is developed to find an optimal allocation that maximizes quality of security and the probability of meeting deadlines for parallel applications. In addition, we proposed mathematical models to describe a system framework, parallel applications with deadline and security constraints, and security overheads. Experimental results show that TAPADS significantly improves the performance of clusters in terms of quality of security and schedulability over three existing allocation schemes",2005,0, 2955,A Constraint-Based Solution for On-Line Testing of Processors Embedded in Real-Time Applications,"Software-based self-test has been proposed as a low-cost strategy for on-line periodic testing of embedded processors. 
In this paper, we show that structural test programs composed only by regular deterministic self-test routines may be unfeasible in a real-time embedded platform. Hence, we propose a method to consciously select a set of test routines from different test approaches to compose a test program for an embedded processor. The proposed method not only ensures the periodical execution of the test, but also considers the optimization of memory and real-time requirements of the application, which are important constraints in embedded systems. Experimental results for a Java processor running real-time tasks demonstrate the effectiveness of the proposed solution",2005,0, 2956,Automatic Generation of Test Sets for SBST of Microprocessor IP Cores,"Higher integration densities, smaller feature lengths, and other technology advances, as well as architectural evolution, have made microprocessor cores exceptionally complex. Currently, Software-Based Self-Test (SBST) is becoming an attractive test solution since it guarantees high fault coverage figures, runs at-speed, and matches core test requirements while exploiting low-cost ATEs. However, automatically generating test programs is still an open problem. This paper presents a novel approach for test program generation, that couples evolutionary techniques with hardware acceleration. The methodology was evaluated targeting a 5-stage pipelined processor implementing a SPARCv8 micro-processor core.",2005,0, 2957,Failure's Identification for Electromechanical Systems with Induction Motor,"In this paper the new approach for guaranteed state estimation for electromechanical systems with induction motor is implemented for failure's identification in such systems. Approach is based on matrix comparison systems method, and a comparison with some other estimation methods was realized in package MATLAB. The experimental unit made for practical approbation of identification algorithms is presented with received experimental data. The new approach (matrix comparison systems method) for guaranteed state estimation of an induction motor (IM) and detect faults was presented in [1, 2]. State estimation is based on a discrete-time varying linear model of the IM [3, 4] with uncertainties and perturbations, which belong to known bounded sets with no hypothesis on their distribution inside these sets. The approach uses the measurement results. Recursive and explicit algorithm is presented and illustrated by example with real IM parameters, realized in program package MATLAB.",2005,0, 2958,Analysis of voltage dips in power system - case of study,"The knowledge about the occurrence of voltage dips in a network can be obtained through the use of power quality monitors or by means of prediction methods. For assessment of the number of voltage dips, the method of fault positions could be used. The method was applied to a real 110 kV grid. The common software for short-circuit analysis was used for calculation of remaining voltage in the network during the faults. All fault types were taken into account. The expected number of dips and distribution of remaining voltage in the chosen substation, and the exposed areas for this substation are calculated. The results were compared with the outcome of the voltage dip monitoring. 
Although the comparison shows a good correspondence, the accuracy of assessment could be very influenced by accuracy of the failure statistics.",2005,0, 2959,PD diagnosis on medium voltage cables with Oscillating Voltage (OWTS),"Detecting, locating and evaluating of partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for a quality control after installation and preventive detection of arising service interruption. A sophisticated evaluation is necessary between PD in several insulating materials and also in different types of terminations and joints. For a most precise evaluation of the degree and risk caused by PD it is suggested to use a test voltage shape that is preferably like the same under service conditions. Only under these requirements the typical PD parameters like inception and extinction voltage, PD level and PD pattern correspond to significant operational values. On the other hand the stress on the insulation should be limited during the diagnosis to not create irreversible damages and thereby worsening the condition of the test object. The paper introduces an Oscillating Wave Test System (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application. Field data and experience reports will be presented and discussed. This field data serve also as good guide for the level of danger to the different insulating systems due to partial discharges.",2005,0,2629 2960,On-line detection of stator winding faults in controlled induction machine drives,"The operation of induction machines with fast switching power electric devices puts additional stress on the stator windings what leads to an increased probability of machine faults. These faults can cause considerable damage and repair costs and - if not detected in an early stage - may end up in a total destruction of the machine. To reduce maintenance and repair costs many methods have been developed and presented in literature for an early detection of machine faults. This paper gives an overview of today's detection techniques and divides them into three major groups according to their underlying methodology. The focus will be on methods which are applicable to today's inverter-fed machines. In that case and especially if operated under controlled mode, the behavior of the machine with respect to the fault is different than for grid supply. This behavior is discussed and suitable approaches for fault detection are presented. Which method is eventually to choose, will depend on the application and the available sensors as well as hard- and software resources, always considering that the additional effort for the fault detection algorithm has to be kept as low as possible. The applicability of the presented fault detection techniques are also confirmed with practical measurements.",2005,0, 2961,Resource mapping and scheduling for heterogeneous network processor systems,"Task to resource mapping problems are encountered during (i) hardware-software co-design and (ii) performance optimization of Network Processor systems. The goal of the first problem is to find the task to resource mapping that minimizes the design cost subject to all design constraints. The goal of the second problem is to find the mapping that maximizes the performance, subject to all architectural constraints. 
To meet the design goals in performance, it may be necessary to allow multiple packets to be inside the system at any given instance of time and this may give rise to the resource contention between packets. In this paper, a Randomized Rounding (RR) based solution is presented for the task to resource mapping and scheduling problem. We also proposed two techniques to detect and eliminate the resource contention. We evaluate the efficacy of our RR approach through extensive simulation. The simulation results demonstrate that this approach produces near optimal solutions in almost all instances of the problem in a fraction of time needed to find the optimal solution. The quality of the solution produced by this approach is also better than often used list scheduling algorithm for task to resource mapping problem. Finally, we demonstrate with a case study, the results of a Network Processor design and scheduling problem using our techniques.",2005,0, 2962,A Case Study: GQM and TSP in a Software Engineering Capstone Project,"This paper presents a case study, describing the use of a hybrid version of the team software process (TSP) in a capstone software engineering project. A mandatory subset of TSP scripts and reporting mechanisms were required, primarily for estimating the size and duration of tasks and for tracking project status against the project plan. These were supplemented by metrics and additional processes developed by students. Metrics were identified using the goal-question-metric (GQM) process and used to evaluate the effectiveness of project management roles assigned to each member of the project team. TSP processes and specific TSP forms are identified as evidence of learning outcome attainment. The approach allowed for student creativity and flexibility and limited the perceived overhead associated with use of the complete TSP. Students felt that the experience enabled them to further develop and demonstrate teamwork and leadership skills. However, limited success was seen with respect to defect tracking, risk management, and process improvement. The case study demonstrates that the approach can be used to assess learning outcome attainment and highlights for students the significance of software engineering project management",2005,0, 2963,Medical device software standards,"Much medical device software is safety-related, and therefore needs to have high integrity (in other words its probability of failure has to be low.) There is a consensus that if you want to develop high-integrity software, you need a quality system. This is because software is a complex product that is easy to change and difficult to test, and the management system that handles these issues must include such quality system elements as: detailed traceable specifications, disciplined processes, planned verification and validation and a comprehensive configuration management and change control system. It is also agreed that software quality management systems need specific processes which are different from and additional to more general quality management systems such as that required by EN 13485. Historically, ISO 9000-3 Part 3: Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software states these additional processes very clearly, but is not mandatory. (This is now ISO 90003.) In both Europe and the USA there is therefore a gap in both regulations and standards for Medical Devices. 
There is no comprehensive requirement specifically for software development methods. In Europe, IEC 60601-1 Medical electrical equipment Part 1: General requirements for safety and essential performance, has specific requirements for software in section 14 Programmable Electrical Medical Systems (PEMS). This requires (at a fairly abstract level) some basic processes and documents, and includes an invocation of the risk management process of ISO 14971 Medical devices Application of risk management to medical devices. In the US, there is an FDA regulation requiring Good Manufacturing Practice, with guidance on software development methods (strangely entitled Software Validat",2005,0, 2964,Finds in testing experiments for model evaluation,"To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.",2005,0, 2965,Architectural optimizations for software-based MPEG4 video encoder,"This paper presents a set of architectural optimizations for improving the performance of an MPEG4 video encoder. The techniques presented here focus on optimizing the encoder architecture rather than module level algorithmic modifications. The optimizations contribute to the development of a fast and memory efficient encoder without affecting video quality. An interface driven methodology has been developed to identify and solve performance bottlenecks for the encoder. Appropriate data flow between components has been developed so that memory intensive operations, such as memory access and copying, are minimized. These optimizations have been applied on MPEG4 simple profile encoder. Results demonstrate orders of magnitude computational improvements without any algorithmic modifications.",2005,0, 2966,Can Cohesion Predict Fault Density?,"
",2006,1, 2967,Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults,"In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically, a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics are able to predict low severity faults in fault-prone classes better than high severity faults in fault-prone classes",2006,1, 2968,Using Historical In-Process and Product Metrics for Early Estimation of Software Failures,"The benefits that a software organization obtains from estimates of product quality are dependent upon how early in the product cycle that these estimates are available. Early estimation of software quality can help organizations make informed decisions about corrective actions. To provide such early estimates we present an empirical case study of two large scale commercial operating systems, Windows XP and Windows Server 2003. In particular, we leverage various historical in-process and product metrics from Windows XP binaries to create statistical predictors to estimate the post-release failures/failure-proneness of Windows Server 2003 binaries. These models estimate the failures and failure-proneness of Windows Server 2003 binaries at statistically significant levels. Our study is unique in showing that historical predictors for a software product line can be useful, even at the very large scale of the Windows operating system",2006,1, 2969,Rate-distortion performance of H.264/AVC compared to state-of-the-art video codecs,"In the domain of digital video coding, new technologies and solutions are emerging in a fast pace, targeting the needs of the evolving multimedia landscape. One of the questions that arises is how to assess these different video coding technologies in terms of compression efficiency. In this paper, several compression schemes are compared by means of peak signal-to-noise ratio (PSNR) and just noticeable difference (JND). The codecs examined are XviD 0.9.1 (conform to the MPEG-4 Visual Simple Profile), DivX 5.1 (implementing the MPEG-4 Visual Advanced Simple Profile), Windows Media Video 9, MC-EZBC and H.264/AVC AHM 2.0 (version JM 6.1 of the reference software, extended with rate control). The latter plays a key role in this comparison because the H.264/AVC standard can be considered as the de facto benchmark in the field of digital video coding. The obtained results show that H.264/AVC AHM 2.0 outperforms current proprietary and standards-based implementations in almost all cases. 
Another observation is that the choice of a particular quality metric can influence general statements about the relation between the different codecs.",2006,0, 2970,A new hybrid fault detection technique for systems-on-a-chip,"Hardening SoCs against transient faults requires new techniques able to combine high fault detection capabilities with the usual requirements of SoC design flow, e.g., reduced design-time, low area overhead, and reduced (or ) accessibility to source core descriptions. This paper proposes a new hybrid approach which combines hardening software transformations with the introduction of an Infrastructure IP with reduced memory and performance overheads. The proposed approach targets faults affecting the memory elements storing both the code and the data, independently of their location (inside or outside the processor). Extensive experimental results, including comparisons with previous approaches, are reported, which allow practically evaluating the characteristics of the method in terms of fault detection capabilities and area, memory, and performance overheads.",2006,0, 2971,Measurement Framework for Assessing Risks in Component-Based Software Development,"As Component-based software development (CBSD) is getting popular and is being considered as both an efficient and effective approach to build large software applications and systems, potential risks within component-based practicing areas such as quality assessment, complexity estimation, performance prediction, configuration, and application management should not be taken lightly. In the existing literature there is a lack of systematic work in identifying and assessing these risks. In particular, there is a lack of a structuring framework that could be helpful for related CBSD stakeholders to measure these risks. In this research we examine prior related research work in software measurement and aim to develop a practical risk measurement framework that classifies potential CBSD risks and related metrics and provides a practical guidance for CBSD stakeholders.",2006,0, 2972,CrossTalk: cross-layer decision support based on global knowledge,"The dynamic nature of ad hoc networks makes system design a challenging task. Mobile ad hoc networks suffer from severe performance problems due to the shared, interference-prone, and unreliable medium. Routes can be unstable due to mobility and energy can be a limiting factor for typical devices such as PDAs, mobile phones, and sensor nodes. In such environments cross-layer architectures are a promising new approach, as they can adapt protocol behavior to changing networking conditions. This article introduces CrossTalk, a cross-layer architecture that aims at achieving global objectives with local behavior. It further compares CrossTalk with other cross-layer architectures proposed. Finally, it analyzes the quality of the information provided by the architecture and presents a reference application to demonstrate the effectiveness of the general approach.",2006,0, 2973,Static code analysis of functional descriptions in SystemC,"The co-design of hardware and software systems with object oriented design languages like SystemC has become very popular. Static analysis of those descriptions allows to conduct the design process with metrics regarding quality of the code as well as with estimations of the properties of the final design. 
This paper shows the utilization of software metrics and the computation of high level metrics for SystemC, whose generation is embedded into a complete design methodology. The performance of this analysis process is demonstrated on a UMTS cell searching unit",2006,0, 2974,A new insight into postsurgical objective voice quality evaluation: application to thyroplastic medialization,"This paper aims at providing new objective parameters and plots, easily understandable and usable by clinicians and logopaedicians, in order to assess voice quality recovering after vocal fold surgery. The proposed software tool performs presurgical and postsurgical comparison of main voice characteristics (fundamental frequency, noise, formants) by means of robust analysis tools, specifically devoted to deal with highly degraded speech signals as those under study. Specifically, we address the problem of quantifying voice quality, before and after medialization thyroplasty, for patients affected by glottis incompetence. Functional evaluation after thyroplastic medialization is commonly based on several approaches: videolaryngostroboscopy (VLS), for morphological aspects evaluation, GRBAS scale and Voice Handicap Index (VHI), relative to perceptive and subjective voice analysis respectively, and Multi-Dimensional Voice Program (MDVP), that provides objective acoustic parameters. While GRBAS has the drawback to entirely rely on perceptive evaluation of trained professionals, MDVP often fails in performing analysis of highly degraded signals, thus preventing from presurgical/postsurgical comparison in such cases. On the contrary, the new tool, being capable to deal with severely corrupted signals, always allows a complete objective analysis. The new parameters are compared to scores obtained with the GRBAS scale and to some MDVP parameters, suitably modified, showing good correlation with them. Hence, the new tool could successfully replace or integrate existing ones. With the proposed approach, deeper insight into voice recovering and its possible changes after surgery can thus be obtained and easily evaluated by the clinician.",2006,0, 2975,Telephony-based voice pathology assessment using automated speech analysis,"A system for remotely detecting vocal fold pathologies using telephone-quality speech is presented. The system uses a linear classifier, processing measurements of pitch perturbation, amplitude perturbation and harmonic-to-noise ratio derived from digitized speech recordings. Voice recordings from the Disordered Voice Database Model 4337 system were used to develop and validate the system. Results show that while a sustained phonation, recorded in a controlled environment, can be classified as normal or pathologic with accuracy of 89.1%, telephone-quality speech can be classified as normal or pathologic with an accuracy of 74.2%, using the same scheme. Amplitude perturbation features prove most robust for telephone-quality speech. The pathologic recordings were then subcategorized into four groups, comprising normal, neuromuscular pathologic, physical pathologic and mixed (neuromuscular with physical) pathologic. A separate classifier was developed for classifying the normal group from each pathologic subcategory. Results show that neuromuscular disorders could be detected remotely with an accuracy of 87%, physical abnormalities with an accuracy of 78% and mixed pathology voice with an accuracy of 61%. 
This study highlights the real possibility for remote detection and diagnosis of voice pathology.",2006,0, 2976,Motion estimation and temporal up-conversion on the TM3270 media-processor,"We present a qualitative performance evaluation of several components of a video format conversion algorithm (referred to as natural motion (NM)). The implementation platform is a new programmable media-processor, the TM3270, combined with dedicated hardware support. The performance of two compute-intense NM components, motion estimation (ME) and temporal up-conversion (TU), is evaluated. The impact of new TM3270 features, such as new video-processing operations and data prefetching, is quantified. We show that a real-time implementation of the ME and TU algorithms is achievable in a fraction of the available compute performance, when operating on standard definition video",2006,0, 2977,Optimal project feature weights in analogy-based cost estimation: improvement and limitations,"Cost estimation is a vital task in most important software project decisions such as resource allocation and bidding. Analogy-based cost estimation is particularly transparent, as it relies on historical information from similar past projects, whereby similarities are determined by comparing the projects' key attributes and features. However, one crucial aspect of the analogy-based method is not yet fully accounted for: the different impact or weighting of a project's various features. Current approaches either try to find the dominant features or require experts to weight the features. Neither of these yields optimal estimation performance. Therefore, we propose to allocate separate weights to each project feature and to find the optimal weights by extensive search. We test this approach on several real-world data sets and measure the improvements with commonly used quality metrics. We find that this method 1) increases estimation accuracy and reliability, 2) reduces the model's volatility and, thus, is likely to increase its acceptance in practice, and 3) indicates upper limits for analogy-based estimation quality as measured by standard metrics.",2006,0, 2978,Semi-Automatic Business Process Performance Optimization Based On Redundant Control Flow Detection,"Composite web services are often defined in terms of business processes. Therefore, their performance depends on the effectiveness of the said processes. This paper describes an approach allowing for business process performance optimization that eventually leads to improved composite web service performance. The approach is based on the discovery of the redundant control flow. It is argued that such is the flow which is not backed up by data flow. After detection, redundant control flow is replaced with nonredundant.",2006,0, 2979,Evaluating architectural stability using a metric-based approach,Architectural stability refers to the extent software architecture is flexible to endure evolutionary changes while leaving the architecture intact. Approaches to evaluate software architectures for stability can be retrospective or predictive. Retrospective evaluation looks at successive releases of a software system to analyze how smoothly the evolution has taken place. Predictive evaluation examines a set of likely changes and shows the architecture can endure these changes. This paper proposes a metric-based approach to evaluate architectural stability of a software system by combining these two traditional analysis techniques. 
Such an approach performs on the fact bases extracted from the source code by reverse engineering techniques. We also present experimental results by applying the proposed approach to analyze the architectural stability across different versions of two spreadsheet systems,2006,0, 2980,Non-intrusive monitoring of software quality,"Measurement based software process improvement needs a non-intrusive approach to determine what and where improvement is needed without knowing anything about the methods and techniques used during project execution. Beside, it is necessary for obtaining successful business management, an accurate process behavior prediction. In order to obtain these results we proposed to use statistical process control (SPC) tailored to the software process point of view. The paper proposes an appropriate SPC-Framework and presents two industrial experiences in order to validate the framework in two different software contexts: recalibration of effort estimation models; monitoring of the primary processes through the supporting ones. These experiences validate the framework and show how it can be successfully used as a decision support tool in software process improvement",2006,0, 2981,Software test cases: is one ever enough?,"In this paper, software testing theory was examined as it pertains to one test at a time. In doing so, the author hopes to highlight some useful facts about testing theory that are somewhat obvious but often overlooked. Some precise statements about how bad the one-test policy can be were also made",2006,0, 2982,Study for Classification of Quality Attributes in Argentinean ECommerce Sites,"This work starts with results obtained in [4] where quality attributes to be considered in operative web projects were analyzed and evaluated, mainly for the ""Functionality and Content"" feature of sites and applications of electronic commerce (e-commerce). The evaluation process applied to sites from the studies of 2000 and 2004 [2,4] produced a concentrated binary matrix, which has been processed in previous works [3] with multivariate approaches that in turn have provided useful information that allowed the identification of a precise relationship between attributes. Aiming to obtain a group of quality attributes where it is possible to identify inter-relations or common features between the variables that compose each class, the binary table from the previous study has been revisited, and has been processed with statistical cluster analysis. Ultimately, hypothesis tests are proposed to infer the behavior of the population of Argentinean e-commerce sites.",2006,0, 2983,Mission dependability modeling and evaluation of repairable systems considering maintenance capacity,"The mission dependability of repairable systems not only depends on mission reliability and capacity of the system, but also the maintenance capacity during the whole mission. The probability of mission successful completion is one of the important performance measures. For the complex mission that has many sub-missions of kinds, its success probability is associated with the ready and execution duration, maintenance conditions in the working field and success requirements of each sub-mission. Maintenance conditions in the sub-mission working field mainly include replacement and repair of the failed components. According to these different maintenance conditions, we classify all sub-mission into three classes. 
By analyzing the state transition during the ready period and execution period of each sub-mission, this paper presents a dependability model to evaluate the probability of mixed multi-mission success of repairable systems considering maintenance capacity. A simple example is provided to show the application of the model",2006,0, 2984,A routing methodology for achieving fault tolerance in direct networks,"Massively parallel computing systems are being built with thousands of nodes. The interconnection network plays a key role for the performance of such systems. However, the high number of components significantly increases the probability of failure. Additionally, failures in the interconnection network may isolate a large fraction of the machine. It is therefore critical to provide an efficient fault-tolerant mechanism to keep the system running, even in the presence of faults. This paper presents a new fault-tolerant routing methodology that does not degrade performance in the absence of faults and tolerates a reasonably large number of faults without disabling any healthy node. In order to avoid faults, for some source-destination pairs, packets are first sent to an intermediate node and then from this node to the destination node. Fully adaptive routing is used along both subpaths. The methodology assumes a static fault model and the use of a checkpoint/restart mechanism. However, there are scenarios where the faults cannot be avoided solely by using an intermediate node. Thus, we also provide some extensions to the methodology. Specifically, we propose disabling adaptive routing and/or using misrouting on a per-packet basis. We also propose the use of more than one intermediate node for some paths. The proposed fault-tolerant routing methodology is extensively evaluated in terms of fault tolerance, complexity, and performance.",2006,0, 2985,Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis,"The FastICA or fixed-point algorithm is one of the most successful algorithms for linear independent component analysis (ICA) in terms of accuracy and computational complexity. Two versions of the algorithm are available in literature and software: a one-unit (deflation) algorithm and a symmetric algorithm. The main result of this paper are analytic closed-form expressions that characterize the separating ability of both versions of the algorithm in a local sense, assuming a ""good"" initialization of the algorithms and long data records. Based on the analysis, it is possible to combine the advantages of the symmetric and one-unit version algorithms and predict their performance. To validate the analysis, a simple check of saddle points of the cost function is proposed that allows to find a global minimum of the cost function in almost 100% simulation runs. Second, the Cramér-Rao lower bound for linear ICA is derived as an algorithm independent limit of the achievable separation quality. The FastICA algorithm is shown to approach this limit in certain scenarios. Extensive computer simulations supporting the theoretical findings are included.",2006,0, 2986,"Electronics at Nanoscale: Fundamental and Practical Challenges, and Emerging Directions","In electronics, i.e. when using charge transport and change of electromagnetic fields in devices and systems, non-linearity, collective effects, and a hierarchy of design across length and time scales is central to efficient information processing through manipulation and transmission of bits. 
Silicon-based electronics brings together a systematic interdependent framework that connects software and hardware to reproducibility, speed, power, noise margin, reliability, signal restoration and communication, low defect count, and an ability to do predictive design across the scales. In the limits of nanometer scale, the dominant practical constraints arise from power dissipation in ever smaller volumes and of efficient signal interconnectivity commensurate with the large density of devices. These limitations are tied to the physical basis in charge transport and changes of fields, and equally apply to other materials - hard, soft or molecular. At the largest scale, the limitations arise from partitioning and hierarchical apportionment for system performance, ease of design and manufacturing. Power management, behavioral encapsulation, fault tolerance, congestion avoidance, timing, placement, routing, electromagnetic cross-talk, etc. all need to be addressed from the perspective of centimeter scale. We take a hierarchical view of the underlying fundamental and practical challenges of the conventional and unconventional approaches using the analytic framework appropriate to the length scale to distinguish between fact and fantasy, and to point to practical emerging directions with a system-scale perspective.",2006,0, 2987,Agent-based self-healing protection system,"This paper proposes an agent-based paradigm for self-healing protection systems. Numerical relays implemented with intelligent electronic devices are designed as a relay agent to perform a protective relaying function in cooperation with other relay agents. A graph-theory-based expert system, which can be integrated with supervisory control and a data acquisition system, has been developed to divide the power grid into primary and backup protection zones online and all relay agents are assigned to specific zones according to system topological configuration. In order to facilitate a more robust, less vulnerable protection system, predictive and corrective self-healing strategies are implemented as guideline regulations of the relay agent, and the relay agents within the same protection zone communicate and cooperate to detect, locate, and trip fault precisely with primary and backup protection. Performance of the proposed protection system has been simulated with cascading fault, failures in communication and protection units, and compared with a coordinated directional overcurrent protection system.",2006,0, 2988,PET reconstruction with system matrix derived from point source measurements,"The quality of images reconstructed by statistical iterative methods depends on an accurate model of the relationship between image space and projection space through the system matrix. The elements of a system matrix on the HiRez scanner from CPS Innovations were acquired by positioning a point source at different positions in the scanner field of view. Then, a whole system matrix was derived by processing the responses in projection space. Such responses included both geometrical and detection physics components of the system matrix. The response was parameterized to correct for point source location, smooth projection noise and the whole system matrix was derived. The model accounts for axial compression (span) used on the scanner. The projection operator for iterative reconstruction was constructed using the estimated response parameters. 
Computer-generated and acquired data were used to compare reconstructions obtained by the HiRez standard software and those produced by better modeling. Results showed that better resolution and noise properties can be achieved with the modeled system matrix.",2006,0,2635 2989,Software-based transparent and comprehensive control-flow error detection,"Shrinking microprocessor feature size and growing transistor density may increase the soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft-errors, software-based techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes two new software-based comprehensive control-flow error detection techniques. The new techniques are better than the previous ones in the sense that they detect errors in all the branch-error categories. We implemented the techniques in our dynamic binary translator so that the techniques can be applied to existing x86 binaries transparently. We compared our new techniques with the previous ones and we show that our methods cover more errors while having similar performance overhead.",2006,0, 2990,Algorithmic implementation and efficiency maintenance of real-time environment using low-bitrate wireless communication,"This paper presents different issues of real-time compression algorithms without compromising video quality in a distributed environment. The theme of this research is to manage the critical processing stages (speed, information loss, redundancy, distortion) while achieving a better encoding ratio, without fluctuation of the quantization scale, by using IP configuration. In this paper, different techniques such as a distortion measure with a searching method cover the block phenomenon in the motion estimation process, while the passing technique and floating measurement are configured by the discrete cosine transform (DCT) to reduce computational complexity; these techniques are implemented in this video codec. Meanwhile, the delay of bits on the encoder buffer side, especially in the real-time state, is controlled to produce video with high quality while maintaining a low buffering delay. Our results show encouraging performance and accuracy gains in all of the above processes",2006,0, 2991,Influence of adaptive data layouts on performance in dynamically changing storage environments,"For most of today's IT environments, the tremendous need for storage capacity in combination with a required minimum I/O performance has become highly critical. In dynamically growing environments, a storage management solution's underlying data distribution scheme has a great impact on the overall system I/O performance. The evaluation of a number of open system storage virtualization solutions and volume managers has shown that all of them lack the ability to automatically adapt to changing access patterns and storage infrastructures; many of them require an error-prone manual re-layout of the data blocks, or rely on a very time-consuming re-striping of all available data. This paper evaluates the performance of conventional data distribution approaches compared to the adaptive virtualization solution V:DRIVE in dynamically changing storage environments. Changes of the storage infrastructure are normally not considered in benchmark results, but can have a significant impact on storage performance.
Using synthetic benchmarks, V:DRIVE is compared in such changing environments with the non-adaptive Linux logical volume manager (LVM). The performance results of our tests clearly outline the necessity of adaptive data distribution schemes.",2006,0, 2992,A framework to support run-time assured dynamic reconfiguration for pervasive computing environments,"With the increasing use of pervasive computing environments (PCEs), developing dynamic reconfigurable software in such environments becomes an important issue. The ability to change software components in running systems has advantages such as building adaptive, long-life, and self-reconfigurable software as well as increasing invisibility in PCEs. As dynamic reconfiguration is performed in error-prone wireless mobile systems frequently, it can threaten system safety. Therefore, a mechanism to support assured dynamic reconfiguration at run-time for PCEs is required. In this paper, we propose an assured dynamic reconfiguration framework (ADRF) with emphasis on assurance analysis. The framework is implemented and is available for further research. To evaluate the framework, an abstract case study including reconfigurations has been applied using our own developed simulator for PCEs. Our experience shows that ADRF properly preserves reconfiguration assurance.",2006,0, 2993,Will Johnny/Joanie Make a Good Software Engineer? Are Course Grades Showing the Whole Picture?,"Predicting future success of students as software engineers is an open research area. We posit that current grading means do not capture all the information that may predict whether students will become good software engineers. We use one such piece of information, traceability of project artifacts, to illustrate our argument. Traceability has been shown to be an indicator of software project quality in industry. We present the results of a case study of a University of Waterloo graduate-level software engineering course where traceability was examined as well as course grades (such as mid-term, project grade, etc.). We found no correlation between the presence of good traceability and any of the course grades, lending support to our argument",2006,0, 2994,Software Defect Prediction Using Regression via Classification,"
First Page of the Article
",2006,0, 2995,Toward trustworthy software systems,"Organizations such as Microsoft's Trusted Computing Group and Sun Microsystems' Liberty Alliance are currently leading the debate on ""trustworthy computing."" However, these and other initiatives primarily focus on security, and trustworthiness depends on many other attributes. To address this problem, the University of Oldenburg's TrustSoft Graduate School aims to provide a holistic view of trustworthiness in software - one that considers system construction, evaluation/analysis, and certification - in an interdisciplinary setting. Component technology is the foundation of our research program. The choice of a component architecture greatly influences the resulting software systems' nonfunctional properties. We are developing new methods for the rigorous design of trustworthy software systems with predictable, provable, and ultimately legally certifiable system properties. We are well aware that it is impossible to build completely error-free complex software systems. We therefore complement fault-prevention and fault-removal techniques with fault-tolerance methods that introduce redundancy and diversity into software systems. Quantifiable attributes such as availability, reliability, and performance call for analytical prediction models, which require empirical studies for calibration and validation. To consider the legal aspects of software certification and liability, TrustSoft integrates the disciplines of computer science and computer law.",2006,0, 2996,Detecting anomaly and failure in Web applications,"Improving Web application quality will require automated evaluation tools. Many such tools are already available either as commercial products or research prototypes. The authors use their automated evaluation tools, ReWeb and TestWeb, for Web analysis and testing that improves Web pages and applications and to find some anomalies and failures in four case studies.",2006,0, 2997,Effects of hardware imperfection on six-port direct digital receivers calibrated with three and four signal standards,"Online calibration is essential for the proper operation of six-port digital receivers in communication systems, as such calibration cancels out receiver ageing and manufacturing defects. Simple calibration methods, using only three or four signal standards (SS), were reported in a previous paper, where these methods were assessed using an ADS software simulation for an ideal six-port circuit. A unified and general theory is presented for examining the effects of hardware imperfection on the performance of a six-port receiver (SPR) calibrated using these simplified techniques. This can be used to establish permissible hardware tolerances for proper operation of SPRs in different digital modulations.",2006,0, 2998,Monitoring Computer Interactions to Detect Early Cognitive Impairment in Elders,Maintaining cognitive performance is a key factor influencing elders' ability to live independently with a high quality of life. We have been developing unobtrusive measures to monitor cognitive performance and potentially predict decline using information from routine computer interactions in the home. Early detection of cognitive decline offers the potential for intervention at a point when it is likely to be more successful. 
This paper describes recommendations for the conduct of studies monitoring cognitive function based on routine computer interactions in elders' home environments,2006,0, 2999,Reliability forecasting in complex hardware/software systems,"We describe a class of models used to evaluate and forecast the reliability of complex hardware/software systems. We assume the system may be described through a reliability block diagram. Blocks referring to hardware components are dealt with 'pending' continuous time Markov chain models, whereas blocks referring to software components are dealt with 'pending' software reliability growth models. Inference and forecasting tasks with such models are described.",2006,0, 3000,A methodological framework for conceptual modeling of optical networks,"We present a methodological framework for conceptual modeling of optical networks. The framework is targeted towards a larger scale initiative by us for designing a unified modeling language (UML) profile that can help with conceptual modeling of typical optical network design problems. While there are pieces of literature available that describe profiles for other application areas, there are no profiles for use in networking problems, in general. Our work is an important step forward to address this deficiency. Our proposed profile takes the existing UML notations and extends them to support the modeling of optical networks",2006,0, 3001,An efficient low complexity encoder for MPEG advanced audio coding,"MPEG advanced audio coding (AAC) is the current state of the art in perceptual audio coding technology standardized by MPEG. Nevertheless the real-time constraint of reference software implementation for the AAC encoder provided by MPEG leads to a heavy computational bottleneck on today's portable devices. In this paper we review briefly the algorithms specified in the MPEG AAC standard and analyze the algorithmic performance. Then we carry out algorithm level optimizations in order to get a low complexity as well as high quality audio encoder that can be easily ported onto multiple platforms. In psychoacoustic model, a MDCT-based psychoacoustic analysis and block switching algorithm in time domain and a tonality detection measure are presented. Moreover, we employ improved pre-processing and adjustment of scale factor in quantization module. Experimental results show the optimized encoder can save about half of overall computational complexity. According to subject evaluation result, the perceptual audio quality of improved encoder is better than the AAC LC reference encoder",2006,0, 3002,Design of a smart sensor for load insulation failure detection in DC supply system of UAV,"This paper introduces an embedded smart sensor in the DC supply system of an unmanned aerial vehicle (UAV) based on AT89C2051, which can measure load insulation failure in direct current supply system. The measuring principles are analyzed first in this paper and the system design schemes of software and hardware are given out as well. Finally, the measure error has been analyzed and counted. This sensor possesses characteristics of low-cost, low-power consumption, high measurement precision, and high reliability. 
Practical experiments show that the sensor system, whose measurement precision reaches 0.01 mA, meets the demands of detection accuracy, achieves the expected design criteria, and could be widely applied",2006,0, 3003,SHARP: a new real-time scheduling algorithm to improve security of parallel applications on heterogeneous clusters,"This paper addresses the problem of improving quality of security for real-time parallel applications on heterogeneous clusters. We propose a new security- and heterogeneity-driven scheduling algorithm (SHARP for short), which strives to maximize the probability that parallel applications are executed in time without any risk of being attacked. Because of high security overhead in existing clusters, an important step in scheduling is to guarantee jobs' security requirements while minimizing overall execution times. The SHARP algorithm accounts for security constraints in addition to different processing capabilities of each node in a cluster. We introduce two novel performance metrics, degree of security deficiency and risk-free probability, to quantitatively measure quality of security provided by a heterogeneous cluster. Both security and performance of SHARP are compared with two well-known scheduling algorithms. Extensive experimental studies using real-world traces confirm that the proposed SHARP algorithm significantly improves security and performance of parallel applications on heterogeneous clusters",2006,0, 3004,A QoS-negotiable middleware system for reliably multicasting messages of arbitrary size,"E-business organizations commonly trade services together with quality of service (QoS) guarantees that are often dynamically agreed upon prior to service provisioning. Violating agreed QoS levels incurs penalties and hence service providers agree to QoS requests only after assessing the resource availability. Thus the system should, in addition to providing the services: (i) monitor resource availability, (ii) assess the affordability of a requested QoS level, and (iii) adapt autonomically to QoS perturbations which might undermine any assumptions made during assessment. This paper will focus on building such a system for reliably multicasting messages of arbitrary size over a loss-prone network of arbitrary topology such as the Internet. The QoS metrics of interest will be reliability, latency and relative latency. We meet the objectives (i)-(iii) by describing a network monitoring scheme, developing two multicast protocols, and by analytically estimating the achievable latencies and reliability in terms of controllable protocol parameters. Protocol development involves extending in two distinct ways an existing QoS-adaptive protocol designed for a single packet. Analytical estimation makes use of experimentally justified approximations and their impact is evaluated through simulations. As the protocol extension approaches are complementary in nature, so are the application contexts they are found best suited to; e.g., one is suited to small messages while the other to large messages",2006,0, 3005,Exploit failure prediction for adaptive fault-tolerance in cluster computing,"As the scale of cluster computing grows, it is becoming hard for long-running applications to complete without facing failures on large-scale clusters. To address this issue, checkpointing/restart is widely used to provide the basic fault-tolerant functionality, yet it suffers from high overhead and its reactive characteristic.
In this work, we propose FT-Pro, an adaptive fault management mechanism that optimally chooses migration, checkpointing or no action to reduce the application execution time in the presence of failures, based on failure prediction. A cost-based evaluation model is presented for dynamic decisions at run-time. Using the actual failure log from a production cluster at NCSA, we demonstrate that even with modest failure prediction accuracy, FT-Pro outperforms the traditional checkpointing/restart strategy by 13%-30% in terms of reducing the application execution time despite failures, which is a significant performance improvement for long-running applications.",2006,0, 3006,Byzantine Anomaly Testing for Charm++: Providing Fault Tolerance and Survivability for Charm++ Empowered Clusters,"Recent shifts in high-performance computing have increased the use of clusters built around cheap commodity processors. A typical cluster consists of individual nodes, containing one or several processors, connected together with a high-bandwidth, low-latency interconnect. There are many benefits to using clusters for computation, but also some drawbacks, including a tendency to exhibit low Mean Time To Failure (MTTF) due to the sheer number of components involved. Recently, a number of fault-tolerance techniques have been proposed and developed to mitigate the inherent unreliability of clusters. These techniques, however, fail to address the issue of detecting non-obvious faults, particularly Byzantine faults. At present, effectively detecting Byzantine faults is an open problem. We describe the operation of ByzwATCh, a module for run-time detection of Byzantine hardware errors as part of the Charm++ parallel programming framework",2006,0, 3007,Survival of the Internet applications: a cluster recovery model,"Internet applications are becoming widely used by millions of people in the world, while on the other hand accidents or disruptions of service are also dramatically increasing. Accidents or disruptions occur either because of disasters or because of malicious attacks. Disasters cannot be completely prevented. Prevention is a necessary but not a sufficient component of disaster. In this case, we have to prepare thoroughly to reduce the recovery time and get the users back to work faster. In this paper, we present a cluster recovery model to increase the survivability level of Internet applications. We construct a state transition model to describe the behaviors of cluster systems. By mapping recovery actions onto this transition model as a stochastic process, we capture system behaviors and obtain mathematical steady-state solutions of that chain. We first analyze steady-state behaviors, leading to measures like steady-state availability. By transforming this model with the system states we compute a system measure, the mean time to repair (MTTR), and also compute probabilities of cluster system failures in the face of disruptions. Our model with the recovery actions has several benefits, which include reducing the time to get the users back to work and making recovery performance insensitive to the selection of a failure treatment parameter",2006,0, 3008,Leveraged Quality Assessment using Information Retrieval Techniques,"The goal of this research is to apply language processing techniques to extend human judgment into situations where obtaining direct human judgment is impractical due to the volume of information that must be considered.
One aspect of this is leveraged quality assessments, which can be used to evaluate third-party coded subsystems, to track quality across the versions of a program, to assess the comprehension effort (and subsequent cost) required to make a change, and to identify parts of a program in need of preventative maintenance. A description of the QALP tool, its output from just under two million lines of code, and an experiment aimed at evaluating the tool's use in leveraged quality assessment are presented. Statistically significant results from this experiment validate the use of the QALP tool in human-leveraged quality assessment",2006,0, 3009,A Metric-Based Heuristic Framework to Detect Object-Oriented Design Flaws,"One of the important activities in the re-engineering process is detecting design flaws. Such design flaws prevent efficient maintenance and further development of a system. This research proposes a novel metric-based heuristic framework to detect and locate object-oriented design flaws from the source code. It is accomplished by evaluating the design quality of an object-oriented system through quantifying deviations from good design heuristics and principles. While design flaws can occur at any level, the proposed approach assesses the design quality of the internal and external structure of a system at the class level, which is the most fundamental level of a system. In a nutshell, design flaws are detected and located systematically in two phases using a generic OO design knowledge-base. In the first phase, hotspots are detected by primitive classifiers via measuring metrics indicating a design feature (e.g. complexity). In the second phase, individual design flaws are detected by composite classifiers using a proper set of metrics. We have chosen the JBoss application server as the case study, due to its pure OO large-size structure, and its success as an open source J2EE platform among developers",2006,0, 3010,Towards a Client Driven Characterization of Class Hierarchies,"Object-oriented legacy systems are hard to maintain because they are hard to understand. One of the main understanding problems is revealed by the so-called ""yo-yo effect"" that appears when a developer or maintainer wants to track a polymorphic method call. At least part of this understanding problem is due to the dual nature of the inheritance relation, i.e., the fact that it can be used both as a code and/or as an interface reuse mechanism. Unfortunately, in order to find out the original intention for a particular hierarchy it is not enough to look at the hierarchy itself; rather than that, an in-depth analysis of the hierarchy's clients is required. In this paper we introduce a new metrics-based approach that helps us characterize the extent to which a base class was intended for interface reuse, by analyzing how clients use the interface of that base class. The idea of the approach is to quantify the extent to which clients treat uniformly the instances of the descendants of the base class, when invoking methods belonging to this common interface. We have evaluated our approach on two medium-sized case studies and we have found that the approach does indeed help to characterize the nature of a base class with respect to interface reuse.
Additionally, the approach can be used to detect some interesting patterns in the way clients actually use the descendants through the interface of the base class",2006,0, 3011,QoS assessment via stochastic analysis,"Using a stochastic modeling approach based on the Unified Modeling Language and enriched with annotations that conform to the UML profile for schedulability, performance, and time, the authors propose a method for assessing quality of service (QoS) in fault-tolerant (FT) distributed systems. From the UML system specification, they produce a generalized stochastic Petri net (GSPN) performance model for assessing an FT application's QoS via stochastic analysis. The ArgoSPE tool provides support for the proposed technique, helping to automatically produce the GSPN model",2006,0, 3012,Automatic Instruction-Level Software-Only Recovery,"As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that can affect program correctness. Computer architects have typically addressed reliability issues by adding redundant hardware, but these techniques are often too expensive to be used widely. Software-only reliability techniques have shown promise in their ability to protect against soft-errors without any hardware overhead. However, existing low-level software-only fault tolerance techniques have only addressed the problem of detecting faults, leaving recovery largely unaddressed. In this paper, we present the concept, implementation, and evaluation of automatic, instruction-level, software-only recovery techniques, as well as various specific techniques representing different trade-offs between reliability and performance. Our evaluation shows that these techniques fulfill the promises of instruction-level, software-only fault tolerance by offering a wide range of flexible recovery options",2006,0,3352 3013,Assessment of the Effect of Memory Page Retirement on System RAS Against Hardware Faults,"The Solaris 10 operating system includes a number of new features for predictive self-healing. One such feature is the ability of the fault management software to diagnose memory errors and drive automatic memory page retirement (MPR), intended to reduce the negative impact of permanent memory faults that generate either correctable or uncorrectable errors on system reliability, availability, and serviceability (RAS). The MPR technique allows memory pages suffering from correctable errors and relocatable clean pages suffering from uncorrectable errors to be removed from use in the virtual memory system without interrupting user applications. It also allows relocatable dirty pages associated with uncorrectable errors to be isolated with limited impact on affected user processes, avoiding an outage for the entire system. This study applies analytical models, with parameters calibrated by field experience, to quantify the reduction that can be made by this operating system self-healing technique on the system interruptions, yearly downtime, and number of services introduced by hardware permanent faults, for typical low-end and mid-range server systems. The results show that significant improvements can be made on these three system RAS metrics by deploying the MPR capability",2006,0, 3014,"Performance Assurance via Software Rejuvenation: Monitoring, Statistics and Algorithms","We present three algorithms for detecting the need for software rejuvenation by monitoring the changing values of a customer-affecting performance metric, such as response time.
Applying these algorithms can improve the values of this customer-affecting metric by triggering rejuvenation before performance degradation becomes severe. The algorithms differ in the way they gather and use sample values to arrive at a rejuvenation decision. Their effectiveness is evaluated for different sets of control parameters, including sample size, using simulation. The results show that applying the algorithms with suitable choices of control parameters can significantly improve system performance as measured by the response time",2006,0, 3015,Climbing the commercialization hill the four phase of product development,"The article tried to give some sense of the activities and the amount of work still remaining after the creation of the first working product breadboard. It is estimated that typically only 10-50% of the work has been done by the time this first breadboard is done. The rest of the effort will typically require significant cooperation between employees in each functional area. All of these activities are designed to deliver a product that will allow the organization to manufacture the product repeatedly, with predictable delivery, quality, cost, and performance.",2006,0, 3016,Smart laser vision sensors simplify inspection,"For all in-process and finished product applications, laser sensors are used in the rubber and tire industry to enhance competitiveness by improving productivity. The basic benefits of using laser sensors for quality control include increasing yield and productivity, increasing quality by providing 100% product inspection, reducing scrap production and rejects, and in-process inspection to detect and correct trends quickly before production of scrap. New developments in laser-based measuring systems can now provide high-speed digital data communications, eliminating the effects of errors from electrical noise and eliminating the need for A/D converters. New smart sensor developments allow application specific analysis software to run inside the sensor, simplifying operation, improving reliability, and reducing cost by eliminating the need for external signal processing hardware.",2006,0, 3017,Industry-oriented software-based system for quality evaluation of vehicle audio environments,"A new set of integrated software tools are proposed for the evaluation of vehicle audio quality for industrial purposes, taking advantage of the auralization approach that allows to simulate the binaural listening experience outside the cockpit. Two main cooperating tools are implemented. The first fulfills the function of acquiring relevant data for system modeling and for canceling the undesired effects of the acquisition chain. The second offers a user-friendly interface for real-time simulation of different car audio systems and the consequent evaluation of both objective and subjective performances. In the latter case, the listening procedure is directly experienced at the PC workplace, leading to a significant simplification of the audio-quality assessing task for comparing the selected systems. Moreover, such kind of subjective evaluation allowed to validate the proposed approach through a complete set of experiments (developed by means of a dedicated software environment) based on appropriate ITU recommendations.",2006,0, 3018,SIP intrusion detection and prevention: recommendations and prototype implementation,"As VoIP deployment are expected to grow, intrusion problems similar to those of which data networks experience will become very critical. 
In the early stages of deployment, the intrusion and security problems have not been seriously considered, although they could have a negative impact on VoIP deployment. In the paper, SIP intrusion detection and prevention requirements are analyzed and an IDS/IPS architecture is proposed. A prototype of the proposed architecture was implemented using as a basis the very popular open-source software Snort, a network-based intrusion detection and prevention system. The prototype of the proposed architecture extends the basic functionality of Snort, making use of the preprocessing feature that permits analyzing protocols of layers above the TCP/UDP one. The preprocessors block is a very powerful one since it permits to implement both knowledge and behavior based intrusion detection and prevention techniques in Snort that basically adopts a network based technique. An important requirement of an IPS is that legitimate traffic should be forwarded to the recipient with no apparent disruption or delay of service. Hence, the performance of the proposed architecture has been evaluated in terms of impact that its operation has on the QoS experienced by the VoIP users.",2006,0, 3019,A configurable framework for stream programming exploration in baseband applications,"This paper presents a configurable framework to be used for rapid prototyping of stream based languages. The framework is based on a set of design patterns defining the elementary structure of a domain specific language for high-performance signal processing. A stream language prototype for baseband processing has been implemented using the framework. We introduce language constructs to efficiently handle dynamic reconfiguration of distributed processing parameters. It is also demonstrated how new language specific primitive data types and operators can be used to efficiently and machine independently express computations on bitfields and data-parallel vectors. These types and operators yield code that is readable, compact and amenable to a stricter type checking than is common practice. They make it possible for a programmer to explicitly express parallelism to be exploited by a compiler. In short, they provide a programming style that is less error prone and has the potential to lead to more efficient implementations",2006,0, 3020,Facing the challenges of multicore processor technologies using autonomic system software,"Summary form only given. Multicore processor technologies, which appear to dominate the processor design landscape, require a shift of paradigm in the development of programming models and supporting environments for scientific and engineering applications. System software for multicore processors needs to exploit fine-grain concurrent execution capabilities and cope with deep, non- uniform memory hierarchies. Software adaptation to multicore technologies needs to happen even as hardware platforms change underneath the software. Last but not least, due to the extremely high compute density of chip multiprocessing components, system software needs to increase its energy-awareness and treat energy and temperature distribution as first-class optimization targets. Unfortunately, energy awareness is most often at odds with high performance. In the first part of this talk, the author discusses some of the major challenges of software adaptation to multicore technologies and motivate the use of autonomic, self-optimizing system software, as a vehicle for both high performance portability and energy-efficient program execution. 
In the second part of the talk, the author presents ongoing research in runtime environments for dense parallel systems built from multicore and SMT components, and focus on two topics, polymorphic multithreading, and power-aware concurrency control with quality-of-service guarantees. In the same context, the author discusses enabling technologies for improved software autonomy via dynamic runtime optimization, including continuous hardware profilers, and online power-efficiency predictors",2006,0, 3021,Analysis of checksum-based execution schemes for pipelined processors,"The performance requirements for contemporary microprocessors are increasing as rapidly as their number of applications grows. By accelerating the clock, performance can be gained easily but only with high additional power consumption. The electrical potential between logic `0' and `1' is decreased as integration and clock rates grow, leading to a higher susceptibility for transient faults, caused e.g. by power fluctuations or single event upsets (SEUs). We introduce a technique which is based on the well-known cyclic redundancy check codes (CRCs) to secure the pipelined execution of common microprocessors against transient faults. This is done by computing signatures over the control signals of each pipeline stage including dynamic out-of-order scheduling. To correctly compute the checksums, we resolve the time-dependency of instructions in the pipeline. We first discuss important physical properties of single event upsets (SEUs). Then we present a model of a simple processor with the applied scheme as an example. The scheme is extended to support n-way simultaneous multithreaded systems, resulting in two basic schemes. A cost analysis of the proposed SEU-detection schemes leads to the conclusion that both schemes are applicable at reasonable costs for pipelines with 5 to 10 stages and maximal 4 hardware threads. A worst-case simulation using software fault-injection of transient faults in the processor model showed that errors can be detected with an average of 83% even at a fault rate of 10-2. Furthermore, the scheme is able to detect an error within an average of only 5.05 cycles",2006,0, 3022,Predicting failures of computer systems: a case study for a telecommunication system,"The goal of online failure prediction is to forecast imminent failures while the system is running. This paper compares similar events prediction (SEP) with two other well-known techniques for online failure prediction: a straightforward method that is based on a reliability model and dispersion frame technique (DFT). SEP is based on recognition of failure-prone patterns utilizing a semi-Markov chain in combination with clustering. We applied the approaches to real data of a commercial telecommunication system. Results are presented in terms of precision, recall, F-measure and accumulated runtime-cost. The results suggest a significantly improved forecasting performance.",2006,0, 3023,Evaluating cooperative checkpointing for supercomputing systems,"Cooperative checkpointing, in which the system dynamically skips checkpoints requested by applications at runtime, can exploit system-level information to improve performance and reliability in the face of failures. We evaluate the applicability of cooperative checkpointing to large-scale systems through simulation studies considering real workloads, failure logs, and different network topologies. 
We consider two cooperative checkpointing algorithms: work-based cooperative checkpointing uses a heuristic based on the amount of unsaved work and risk-based cooperative checkpointing leverages failure event prediction. Our results demonstrate that, compared to periodic checkpointing, risk-based checkpointing with event prediction accuracy as low as 10% is able to significantly improve system utilization and reduce average bounded slowdown by a factor of 9, without losing any additional work to failures. Similarly, work-based checkpointing conferred tremendous performance benefits in the face of large checkpoint overheads",2006,0, 3024,Soft real-time aspects for service-oriented architectures,"In today's businesses we can see the trend that service-oriented architectures (SOA) represent the main paradigm for IT infrastructures. In this setting, a software offers its functionality as an electronic service to other software in a network. In order to realise more complex tasks or business processes that are comprised of individual services, compositions of these are formed. For specific application cases such as the time-dependent trading of goods or the IT-based management of natural disasters, service compositions are anticipated to provide soft real-time characteristics. Soft real-time characteristics for service compositions involve the QoS-based trading of services as well as a real-time conforming trading process that is capable of optimising the QoS in predictable and feasible amount of time. This paper discusses the two aspects and introduce two heuristics for the QoS-aware trading that show feasible computational efforts. Based on a software simulation, the performance of these algorithms is discussed",2006,0, 3025,Comparison of Super-Resolution Algorithms Using Image Quality Measures,"This paper presents comparisons of two learning-based super-resolution algorithms as well as standard interpolation methods. To allow quality assessment of results, a comparison of a variety of image quality measures is also performed. Results show that a MRF-based super-resolution algorithm improves a previously interpolated image. The estimated degree of improvement varies both according to the quality measure chosen for the comparison as well as the image class.",2006,0, 3026,Design Phase Analysis of Software Qualities Using Aspect-Oriented Programming,"If we can analyze software qualities during the design phase of development without waiting until the implementation is completed and tested, the total development cost and time will be significantly saved. Therefore in the past many design analysis methods have been proposed but either they are hard-to-learn and use or, in the case of simulation-based analysis, functionality concerns and quality concerns were intermingled in the design as well as in the implementation thereby making development and maintenance more complicated. In this paper, we propose a simulation-based design phase analysis method based on aspect-oriented programming. In our method, quality aspects remain separate from functionality aspect in the design model and the implementation code for simulation is automatically obtained by injecting quality requirements into the skeleton code generated from the design level functionality model. 
Our method has advantages over the conventional approach in reducing both the development cost and the maintenance cost",2006,0, 3027,Application of set membership identification for fault detection of MEMS,"In this article, a set membership (SM) identification technique is tailored to detect faults in microelectromechanical systems. The SM-identifier estimates an orthotope which contains the system's parameter vector. Based on this orthotope, the system's output interval is predicted. If the actual output is outside of this interval, then a fault is detected. Utilization of this scheme can discriminate mechanical-component faults from electronic component variations frequently encountered in MEMS. For testing the suggested algorithm's performance in simulation studies, an interface between classical control-software (MATLAB) and circuit emulation (HSPICE) is developed",2006,0, 3028,Establishing software product quality requirements according to international standards,"Software product quality is an important concern in the computer environment, whose immediate results are appreciated in all the activities where computers are used. The ISO/IEC 9126 standard series establishes a software product quality model and, for example, in the annex, identifies quality requirements as a necessary step for product quality. However, the standard does not include the way to obtain quality requirements, nor how to establish metric levels. Establishing quality requirements and metric levels seem to be simple activities, but they could be annoying and prone to errors if there is not a systematic approach for the process. This article presents a proposal for establishing product quality requirements according to the ISO/IEC 9126 standard.",2006,0, 3029,Usability measures for software components,"The last decade marked the first real attempt to turn software development into engineering through the concepts of Component-Based Software Development (CBSD) and Commercial Off-The-Shelf (COTS) components, with the goal of creating high-quality parts that could be joined together to form a functioning system. One of the most critical processes in CBSD is the selection of the software components (from either in-house or external repositories) that fulfill some architectural and user-defined requirements. However, there is currently a lack of quality models and metrics that can help evaluate the quality characteristics of software components during this selection process. This paper presents a set of measures to assess the Usability of software components, and describes the method followed to obtain and validate them.",2006,0, 3030,Detecting computer-induced errors in remote-sensing JPEG compression algorithms,"The JPEG image compression standard is very sensitive to errors. Even though it contains error resilience features, it cannot easily cope with induced errors from computer soft faults prevalent in remote-sensing applications. Hence, new fault tolerance detection methods are developed to sense the soft errors in major parts of the system while also protecting data across the boundaries where data flow from one subsystem to the other. The design goal is to guarantee no compressed or decompressed data contain computer-induced errors without detection. Detection methods are expressed at the algorithm level so that a wide range of hardware and software implementation techniques can be covered by the fault tolerance procedures while still maintaining the JPEG output format.
The major subsystems to be addressed are the discrete cosine transform, quantizer, entropy coding, and packet assembly. Each error detection method is determined by the data representations within the subsystem or across the boundaries. They vary from real number parities in the DCT to bit-level residue codes in the quantizer, cyclic redundancy check parities for entropy coding, and packet assembly. The simulation results verify detection performances even across boundaries while also examining roundoff noise effects in detecting computer-induced errors in processing steps.",2006,0, 3031,Fully 3-D PET reconstruction with system matrix derived from point source measurements,"The quality of images reconstructed by statistical iterative methods depends on an accurate model of the relationship between image space and projection space through the system matrix. The elements of the system matrix for the clinical Hi-Rez scanner were derived by processing the data measured for a point source at different positions in a portion of the field of view. These measured data included axial compression and azimuthal interleaving of adjacent projections. Measured data were corrected for crystal and geometrical efficiency. Then, a whole system matrix was derived by processing the responses in projection space. Such responses included both geometrical and detection physics components of the system matrix. The response was parameterized to correct for point source location and to smooth for projection noise. The model also accounts for axial compression (span) used on the scanner. The forward projector for iterative reconstruction was constructed using the estimated response parameters. This paper extends our previous work to fully three-dimensional. Experimental data were used to compare images reconstructed by the standard iterative reconstruction software and the one modeling the response function. The results showed that the modeling of the response function improves both spatial resolution and noise properties",2006,0, 3032,Production performance of the ATLAS semiconductor tracker readout system,"The ATLAS Semiconductor Tracker (SCT) together with the pixel and the transition radiation detectors will form the tracking system of the ATLAS experiment at LHC. It will consist of 20000 single-sided silicon microstrip sensors assembled back-to-back into modules mounted on four concentric barrels and two end-cap detectors formed by nine disks each. The SCT module production and testing has finished while the macro-assembly is well under way. After an overview of the layout and the operating environment of the SCT, a description of the readout electronics design and operation requirements will be given. The quality control procedure and the DAQ software for assuring the electrical functionality of hybrids and modules will be discussed. The focus will be on the electrical performance results obtained during the assembly and testing of the end-cap SCT modules.",2006,0, 3033,Benchmarks and implementation of the ALICE high level trigger,"The ALICE High Level Trigger combines and processes the full information from all major detectors in a large computer cluster. Data rate reduction is achieved by reducing the event rate by selecting interesting events (software trigger) and by reducing the event size by selecting sub-events and by advanced data compression. Reconstruction chains for the barrel detectors and the forward muon spectrometer have been benchmarked. 
The HLT receives a replica of the raw data via the standard ALICE DDL link into a custom PCI receiver card (HLT-RORC). These boards also provide a FPGA co-processor for data-intensive tasks of pattern recognition. Some of the pattern recognition algorithms (cluster finder, Hough transformation) have been re-designed in VHDL to be executed in the Virtex-4 FPGA on the HLT-RORC. HLT prototypes were operated during the beam tests of the TPC and TRD detectors. The input and output interfaces to DAQ and the data flow inside of HLT were successfully tested. A full-scale prototype of the dimuon-HLT achieved the expected data flow performance. This system was finally embedded in a GRID-like system of several distributed clusters demonstrating the scalability and fault-tolerance of the HLT.",2006,0, 3034,Design and development of a graphical setup software for the CMS global trigger,"The CMS experiment at CERN's Large Hadron Collider will search for new physics at the TeV energy scale. Its trigger system is an essential component in the selection process of potentially interesting events. The Global Trigger is the final stage of the first-level selection process. It is implemented as a complex electronic system containing logic devices, which need to be programmed according to physics requirements. It has to reject or accept events for further processing based on coarse measurements of particle properties such as energies, momenta, and location. Algorithms similar to the ones used in the physics analysis are executed in parallel during the event selection process. A graphical setup program to define these algorithms and to subsequently configure the hardware has been developed. The design and implementation of the program, guided by the principal requirements of flexibility, quality assurance, platform-independence and extensibility, are described.",2006,0, 3035,GNAM: a low-level monitoring program for the ATLAS experiment,"During the last years many test-beam sessions were carried out on each ATLAS subdetector in order to assess the performances in standalone mode. During these tests, different monitoring programs were developed to ease the setup of correct running conditions and the assessment of data quality. The experience has converged into a common effort to develop a monitoring program, which aims to be exploitable by various subdetector groups. The requirements which drove the design of the program as well as its architecture are discussed in this paper. Characteristic features of the application are a modular software based on a Finite State Machine core to implement the synchronization with the data acquisition system and exploiting the ROOT Tree as transient data store. The first version of this monitoring program was used for the 2004 ATLAS Combined Test Beam.",2006,0, 3036,A Hierarchical Approach to Internet Distance Prediction,"Internet distance prediction gives pair-wise latency information with limited measurements. Recent studies have revealed that the quality of existing prediction mechanisms from the application perspective is short of satisfactory. In this paper, we explore the root causes and remedies for this problem. Our experience with different landmark selection schemes shows that although selecting nearby landmarks can increase the prediction accuracy for short distances, it can cause the prediction accuracy for longer distances to degrade. Such uneven prediction quality significantly impacts application performance. 
Instead of trying to select the landmark nodes in some ""intelligent"" fashion, we propose a hierarchical prediction approach with straightforward landmark selection. Hierarchical prediction utilizes multiple coordinate sets at multiple distance scales, with the ""right"" scale being chosen for prediction each time. Experiments with Internet measurement datasets show that this hierarchical approach is extremely promising for increasing the accuracy of network distance prediction.",2006,0, 3037,The Power of the Defender,"We consider a security problem on a distributed network. We assume a network whose nodes are vulnerable to infection by threats (e.g. viruses), the attackers. A system security software, the defender, is available in the system. However, due to the network’s size, economic and performance reasons, it is capable to provide safety, i.e. clean nodes from the possible presence of attackers, only to a limited part of it. The objective of the defender is to place itself in such a way as to maximize the number of attackers caught, while each attacker aims not to be caught. In [7], a basic case of this problem was modeled as a non-cooperative game, called the Edge model. There, the defender could protect a single link of the network. Here, we consider a more general case of the problem where the defender is able to scan and protect a set of k links of the network, which we call the Tuple model. It is natural to expect that this increased power of the defender should result in a better quality of protection for the network. Ideally, this would be achieved at little expense on the existence and complexity of Nash equilibria (profiles where no entity can improve its local objective unilaterally by switching placements on the network). In this paper we study pure and mixed Nash equilibria in the model. In particular, we propose algorithms for computing such equilibria in polynomial time and we provide a polynomial-time transformation of a special class of Nash equilibria, called matching equilibria, between the Edge model and the Tuple model, and vice versa. Finally, we establish that the increased power of the defender results in higher-quality protection of the network.",2006,0, 3038,Productivity and code quality improvement of mixed-signal test software by applying software engineering methods,Typical nowadays mixed-signal ICs are approaching 1000 or even more parametric tests. These tests are usually coded in a procedural or a semi-object oriented language. The huge code base of the programs is a significant challenge for maintaining code quality which inherently translates into outgoing quality. The paper presents software metrics of typical mixed-signal power management and audio devices with regard to the number of tests conducted. It is shown that classical ways to handle test programs are error prone and tend to systematically repeat known mistakes. The adoption of selected software engineering methods can avoid such mistakes and improves the productivity of the mixed-signal test generation. Results of a pilot project show significant productivity improvement. Open-source based software is employed to provide the necessary tool support. 
They establish a potential roadmap to get away from proprietary tester specific tool sets,2006,0, 3039,A Classification Scheme for Evaluating Management Instrumentation in Distributed Middleware Infrastructure,"Management instrumentation is an integrated capability of a software system that enables an external observer to monitor the system's availability, performance, and reliability during operation. It is highly useful for taking both proactive and reactive actions to keep a software system operational in mission-critical environments where tolerance for an unavailable or poor-performing system is very low. Middleware infrastructure components have taken important positions in distributed software systems due to various benefits related to the development, deployment, and runtime operations. Keeping these components highly available and up to the expected performance requires integrated capabilities that allow regular monitoring of critical functionality, measurement of Quality of Service (QoS), debugging and troubleshooting, and health-checks in the context of actual business processes.. Yet, currently there is no approach that enables systematic evaluation of the relative strengths and weaknesses of a middleware component's management instrumentation. In this paper, we will present an approach to evaluating management instrumentation of middleware infrastructure components. We use a classification-based scheme that has a functional dimension called Capability and two main quality dimensions called Usability and Precision. We further categorize each dimension into smaller, more precise instrumentation features, such as Tracing, Distributed Correlation and Granularity. In presenting our approach, we hope to achieve the following: i) educate middleware users on how to systematically assess or compare the overall manageability of a MidIn component using the classification scheme, and ii) share with middleware researchers on the importance of good integrated manageability in middleware infrastructure.",2006,0, 3040,Improving Accuracy of Multiple Regression Analysis for Effort Prediction Model,"In this paper, we outline the effort prediction model and the evaluation experiment. In addition we explore the parameters in the model. The model predicts effort of embedded software developments via multiple regression analysis using the collaborative filtering. Because companies, recently, focus on methods to predict effort of projects, which prevent project failures such as exceeding deadline and cost, due to more complex embedded software, which brings the evolution of the performance and function enhancement. In the model, we have fixed two parameters named k and ampmax, which would influence the accuracy of predicting effort. Hence, we investigate a tendency of them in the model and find the optimum value",2006,0, 3041,Topology Discovery for Coexisting IPv6 and IPv4 Networks,"Having an accurate network topology is vital for network performance optimization, configuration control, and fault monitoring...etc. In this paper, two network layer topology discovery algorithms for IPv6-only and IPv4-only networks, a data link layer topology discovery algorithm, and a transition detection algorithm of coexisting networks are presented. The network layer topology is constructed using the routing information from ip6RouteTable and ipRouteTable, while the data-link layer map is built using data from atTable and dotIdTpFdbTable. 
The transition detection is carried out by collecting and analyzing all IP addresses of both IP versions, by making use of multicasting inverse IPv6 neighbor discovery messages and broadcasting IPv4 RARP messages. It is useful for finding out which transition mechanism is deployed in a coexisting network. Consequently, overlapped routers and hosts can be determined to complete the IP network topology discovery",2006,0, 3042,PalProtect: A Collaborative Security Approach to Comment Spam,"Collaborative security is a promising solution to many types of security problems. Organizations and individuals often have a limited amount of resources to detect and respond to the threat of automated attacks. Enabling them to take advantage of the resources of their peers by sharing information related to such threats is a major step towards automating defense systems. In particular, comment spam posted on blogs as a way for attackers to do search engine optimization (SEO) is a major annoyance. Many measures have been proposed to thwart such spam, but all such measures are currently enacted and operate within one administrative domain. We propose and implement a system for cross-domain information sharing to improve the quality and speed of defense against such spam",2006,0, 3043,Resource Availability Prediction in Fine-Grained Cycle Sharing Systems,"Fine-grained cycle sharing (FGCS) systems aim at utilizing the large amount of computational resources available on the Internet. In FGCS, host computers allow guest jobs to utilize the CPU cycles if the jobs do not significantly impact the local users of a host. A characteristic of such resources is that they are generally provided voluntarily and their availability fluctuates highly. Guest jobs may fail because of unexpected resource unavailability. Providing fault tolerance to guest jobs without adding significant computational overhead requires predicting future resource availability. This paper presents a method for resource availability prediction in FGCS systems. It applies a semi-Markov Process and is based on a novel resource availability model, combining generic hardware-software failures with domain-specific resource behavior in FGCS. We describe the prediction framework and its implementation in a production FGCS system named iShare. Through the experiments on an iShare testbed, we demonstrate that the prediction achieves accuracy above 86% on average and outperforms linear time series models, while the computational cost is negligible. Our experimental results also show that the prediction is robust in the presence of irregular resource unavailability",2006,0, 3044,Market-Based Resource Allocation using Price Prediction in a High Performance Computing Grid for Scientific Applications,"We present the implementation and analysis of a market-based resource allocation system for computational grids. Although grids provide a way to share resources and take advantage of statistical multiplexing, a variety of challenges remain. One is the economically efficient allocation of resources to users from disparate organizations who have their own and sometimes conflicting requirements for both the quantity and quality of services. Another is secure and scalable authorization despite rapidly changing allocations. Our solution to both of these challenges is to use a market-based resource allocation system. 
This system allows users to express diverse quantity- and quality-of-service requirements, yet prevents them from denying service to other users. It does this by providing tools to the user to predict and trade off risk and expected return in the computational market. In addition, the system enables secure and scalable authorization by using signed money-transfer tokens instead of identity-based authorization. This removes the overhead of maintaining and updating access control lists, while restricting usage based on the amount of money transferred. We examine the performance of the system by running a bioinformatics application on a fully operational implementation of an integrated grid market",2006,0, 3045,Ensuring numerical quality in grid computing,"We propose an approach which gives the user valuable information on the various platforms available in a grid in order to assess the numerical quality of an algorithm run on each of these platforms. In this manner, the user is provided with at least very strong hints as to whether a program performs reliably in a grid before actually executing it. Our approach extends IeeeCC754 by two ""grid-enabled"" modes: The first mode calculates a ""numerical checksum"" on a specific grid host and executes the job only if the checksum is identical to a locally generated one. The second mode provides the user with information on the reliability and IEEE 754-conformity of the underlying floating-point implementation of various platforms. In addition, it can help to find a set of compiler options to optimize the application's performance while retaining numerical stability",2006,0,3293 3046,CEDA: control-flow error detection through assertions,"This paper presents an efficient software technique, control flow error detection through assertions (CEDA), for online detection of control flow errors. Extra instructions are automatically embedded into the program at compile time to continuously update run-time signatures and to compare them against pre-assigned values. The novel method of computing run-time signatures results in a huge reduction in the performance overhead, as well as the ability to deal with complex programs and the capability to detect subtle control flow errors. The widely used C compiler, GCC, has been modified to implement CEDA, and the SPEC benchmark programs were used as the target to compare with earlier techniques. Fault injection experiments were used to evaluate the fault detection capabilities. Based on a new comparison metric, method efficiency, which takes into account both error coverage and performance overhead, CEDA is found to be much better than previously proposed methods",2006,0, 3047,A low-cost SEU fault emulation platform for SRAM-based FPGAs,"In this paper, we introduce a fully automated low cost hardware/software platform for efficiently performing fault emulation experiments targeting SEUs in the configuration bits of FPGA devices, without the need for expensive radiation experiments. We propose a method for significantly reducing the fault list by removing the faults on unused LUT bit positions. We also target the design flip-flops found in the configurable logic blocks (CLBs) inside the FPGA. Run-time reconfigurability of Virtex devices using JBits is exploited to provide the means not only for fault injection but fault detection as well. First, we consider five possible application scenarios for evaluating different self-test schemes. 
Then, we apply the least favorable and most time-consuming of these scenarios to two 32×32 multiplier designs, demonstrating that transferring the simulation processing workload to FPGA hardware can allow for acceleration of simulation time by more than two orders of magnitude",2006,0, 3048,Software-based adaptive and concurrent self-testing in programmable network interfaces,"Emerging network technologies have complex network interfaces that have renewed concerns about network reliability. In this paper, we present an effective low-overhead failure detection technique, which is based on a software watchdog timer that detects network processor hangs and a self-testing scheme that detects interface failures other than processor hangs. The proposed adaptive and concurrent self-testing scheme achieves failure detection by periodically directing the control flow to go through only active software modules in order to detect errors that affect instructions in the local memory of the network interface. The paper shows how this technique can be made to minimize the performance impact on the host system and be completely transparent to the user",2006,0, 3049,The Mars Exploration Rover surface mobility flight software driving ambition,"NASA's Mars exploration rovers' (MER) onboard mobility flight software was designed to provide robust and flexible operation. The MER vehicles can be commanded directly, or given autonomous control over multiple aspects of mobility: which motions to drive, measurement of actual motion, terrain interpretation, even the selection of targets of interest (although this mode remains largely underused). Vehicle motion can be commanded using multiple layers of control: motor control, direct drive operations (arc, turn in place), and goal-based driving (goto waypoint). Multiple layers of safety checks ensure vehicle performance: command limits (command timeout, time of day limit, software enable, activity constraints), reactive checks (e.g., motor current limit, vehicle tilt limit), and predictive checks (e.g., step, tilt, roughness hazards). From January 2004 through October 2005, Spirit accumulated over 5000 meters and Opportunity 6000 meters of odometry, often covering more than 100 meters in a single day. In this paper we describe the software that has driven these rovers more than a combined 11,000 meters over the Martian surface, including its design and implementation, and summarize current mobility performance results from Mars",2006,0, 3050,Coverall algorithm for test case reduction,"This paper proposes a technique called the Coverall algorithm, which is based on a conventional attempt to reduce cases that have to be tested for any given software. The approach utilizes the advantage of regression testing where fewer test cases would lessen time consumption of the testing as a whole. The technique also offers a means to perform test case generation automatically. Compared to most of the techniques in the literature where the tester has no option but to perform the test case generation manually, the proposed technique provides a better option. As for test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum and constant variables). By doing this, the variables' values would be limited within a definite range, resulting in a smaller number of possible test cases to process. The technique can also be used in program loops and arrays. 
After a comparative assessment of the technique, it has been confirmed that the technique could reduce the number of test cases by more than 99%. As for the other feature of the technique, automatic test case generation, all four steps of test case generation in the proposed technique have been converted into an operational program. The success of the program in performing these steps is indeed significant since it represents a practical means for performing test case generation automatically by a computer algorithm",2006,0, 3051,Fixing BIT on the V-22 Osprey,"The V-22 Osprey measured an unsatisfactorily high built-in-test (BIT) false alarm rate of 92% (threshold ≤ 25%) during its first Operational Test And Evaluation Phase (OPEVAL) in 2000. On a good note, the V-22 did exceed its operational objectives for BIT fault detection and fault isolation rates. Afterwards, the Blue Ribbon Panel report identified the need to fix false alarms: ""Expedite the plan to reduce the V-22 false-alarm rate in both the aircraft and ground systems, with priority on aircraft software"". Correction of false alarms then became a high-priority issue on the program. Therefore, a success-oriented engineering approach was developed and implemented to mature the diagnostics system in order to meet the operational requirements",2006,0, 3052,OpenMP extension to SMP clusters,"This article discusses the approaches to apply the OpenMP programming model to symmetric multiprocessor (SMP) clusters using software distributed shared memory (SDSM). The major focus of this article is on the challenges that the prior studies faced and on their solution techniques. Exploiting message-passing primitives explicitly for the OpenMP synchronization and work-sharing directives enables lightweight interprocess synchronization. The studies on loop scheduling for SMP clusters promise significant improvement in system performance. Finally, OpenMP is considered a promising programming model",2006,0, 3053,Decision support system for software project management,"Decision support systems combine individuals' and computers' capabilities to improve the quality of decisions. Usually adopted in manufacturing to design floor plans and optimize resource allocation and performance, DSSs are penetrating increasingly complex application areas, from insurance fraud detection to military system procurement to emergency planning. Although researchers have suggested many approaches, DSSs haven't yet entered the mainstream of software engineering tools. The complexity of the software process and its sociotechnical nature are often mentioned as the main obstacles to their adoption. As DSSs are developed for other equally complex application areas, we need to identify approaches that can overcome these difficulties and enable project managers to exploit DSSs' capabilities in their daily activities. A hybrid two-level modeling approach is a step in this direction",2006,0, 3054,Partition-based vector filtering technique for suppression of noise in digital color images,"A partition-based adaptive vector filter is proposed for the restoration of corrupted digital color images. The novelty of the filter lies in its unique three-stage adaptive estimation. The local image structure is first estimated by a series of center-weighted reference filters. Then the distances between the observed central pixel and estimated references are utilized to classify the local inputs into one of the preset structure partition cells. 
Finally, a weighted filtering operation, indexed by the partition cell, is applied to the estimated references in order to restore the central pixel value. The weighted filtering operation is optimized off-line for each partition cell to achieve the best tradeoff between noise suppression and structure preservation. Recursive filtering operation and recursive weight training are also investigated to further boost the restoration performance. The proposed filter has demonstrated satisfactory results in suppressing many distinct types of noise in natural color images. Noticeable performance gains are demonstrated over other prior-art methods in terms of standard objective measurements, the visual image quality and the computational complexity.",2006,0, 3055,Near-Optimal Low-Cost Distortion Estimation Technique for JPEG2000 Encoder,"Optimal rate-control is a very important feature of JPEG2000 which allows simple truncation of compressed bit-stream to achieve best image quality at a given target bit-rate. Accurate distortion estimation with respect to allowed bit-stream truncation points, is essential for rate-control performance. In this paper, we address the issues involved in accurate distortion estimation for hardware oriented implementation of JPEG2000 encoding systems with generic block coding capabilities. We propose a novel hardware friendly distortion estimation technique. Rate control based on the proposed technique results in only an average 0.02 dB PSNR degradation with respect to the optimal distortion estimation approach used in the software implementations of JPEG2000. This is the best performance reported in comparison to existing techniques. The proposed technique requires only an additional 4096 bits per block coder which is 80% less than the memory requirements of optimal approach",2006,0, 3056,Data Warehousing Process Maturity: An Exploratory Study of Factors Influencing User Perceptions,"This paper explores the factors influencing perceptions of data warehousing process maturity. Data warehousing, like software development, is a process, which can be expressed in terms of components such as artifacts and workflows. In software engineering, the Capability Maturity Model (CMM) was developed to define different levels of software process maturity. We draw upon the concepts underlying CMM to define different maturity levels for a data warehousing process (DWP). Based on the literature in software development and maturity, we identify a set of features for characterizing the levels of data warehousing process maturity and conduct an exploratory field study to empirically examine if those indeed are factors influencing perceptions of maturity. Our focus in this paper is on managerial perceptions of DWP. The results of this exploratory study indicate that several factors-data quality, alignment of architecture, change management, organizational readiness, and data warehouse size-have an impact on DWP maturity, as perceived by IT professionals. From a practical standpoint, the results provide useful pointers, both managerial and technological, to organizations aspiring to elevate their data warehousing processes to more mature levels. 
This paper also opens up several areas for future research, including instrument development for assessing DWP maturity",2006,0, 3057,Building statistical test-cases for smart device software - an example,"Statistical testing (ST) of software or logic-based components can produce dependability information on such components by yielding an estimate for their probability of failure on demand. An example of software-based components that are increasingly used within safety-related systems e.g. in the nuclear industry, are smart devices. Smart devices are devices with intelligence, capable of more than merely representing correctly a sensed quantity but of functionality such as processing data, self-diagnosis and possibly exchange of data with other devices. Examples are smart transmitters or smart sensors. If such devices are used in a safety-related context, it is crucial to assess whether they fulfil the dependability requirements posed on them to ensure they are dependable enough to be used within the specific safety-related context. This involves making a case for the probability of systematic failure of the smart device. This failure probability is related to faults present in the logic or software-based part of the device. In this paper we look at a technique that can be used to establish a probability of failure for the software part of a smart monitoring unit. This technique is ""statistical testing"" (ST). Our aim is to share our own experience with ST and to describe some of the issues we have encountered so far on the way to perform ST on this device software.",2006,0, 3058,QMON: QoS- and Utility-Aware Monitoring in Enterprise Systems,"The scale, reliability and cost requirements of enterprise data centers require automation of center management. Examples include provisioning, scheduling, capacity planning, logging and auditing. A key component of such automation functions is online monitoring. In contrast to monitoring systems designed for human users, a particular concern for online enterprise monitoring is Quality of Service (QoS). Since breaking service level agreements (SLAs) has direct financial and legal implications, enterprise monitoring must be conducted so as to maintain SLAs. This includes the ability to differentiate the QoS of monitoring itself for different classes of users or more generally, for software components subject to different SLAs. Thus, without embedding notions of QoS into the monitoring systems used in next generation data centers, it will not be possible to accomplish the desired automation of their operation. This paper both demonstrates the importance of QoS in monitoring and it presents a QoS-capable monitoring system, termed QMON. QMON supports utility-aware monitoring while also able to differentiate between different classes of monitoring, corresponding to classes of SLAs. The implementation of QMON offers high levels of predictability for service delivery (i.e., predictable performance) and it is dynamically configurable to deal with changes in enterprise needs or variations in services and applications. We demonstrate the importance of QoS in monitoring and the QoS capabilities of QMON in a series of case studies and experiments, using a multi-tier web service benchmark.",2006,0, 3059,Service System Resource Management Based on a Tracked Layered Performance Model,"Autonomic computer systems adapt themselves to cope with changes in the operating conditions and to meet the service-level agreements with a minimum of resources. 
Changes in operating conditions include hardware and software failures, load variation and variations in user interaction with the system. The self adaptation can be achieved by tuning the software, balancing the load or through hardware provisioning. This paper investigates a feed-forward adaptation scheme in which tuning and provisioning decisions are based on a dynamic predictive performance model of the system and the software. The model consists of a layered queuing network whose parameters are tuned by tracking the system with an Extended Kalman Filter. An optimization algorithm searches the system configuration space by using the predictive performance model to evaluate every configuration.",2006,0, 3060,Continuous geodetic time-transfer analysis methods,"We address two issues that limit the quality of time and frequency transfer by carrier phase measurements from the Global Positioning System (GPS). The first issue is related to inconsistencies between code and phase observations. We describe and classify several types of events that can cause inconsistencies and observe that some of them are related to the internal clock of the GPS receiver. Strategies to detect and overcome time-code inconsistencies have been developed and implemented into the Bernese GPS software package. For the moment, only inconsistencies larger than the 20 ns code measurement noise level can be detected automatically. The second issue is related to discontinuities at the day boundaries that stem from the processing of the data in daily batches. Two new methods are discussed: clock handover and ambiguity stacking. The two approaches are tested on data obtained from a network of stations, and the results are compared with an independent time-transfer method. Both methods improve the stability of the transfer for short averaging times, but there is no benefit for averaging times longer than 8 days. We show that continuous solutions are sufficiently robust against modeling and preprocessing errors to prevent the solution from accumulating a permanent bias.",2006,0, 3061,An Efficient Radio Admission Control Algorithm for 2.5G/3G Cellular Networks,"We design an efficient radio admission control algorithm that minimizes blocking probability subject to the condition that the overload probability is smaller than a pre-specified threshold. Our algorithm is quite general and can be applied to both TDMA-based cellular technologies, such as GPRS and EDGE, and CDMA-based technologies, such as UMTS and CDMA2000. We extend prior work in measurement-based admission control in wireline networks to wireless cellular networks and to heterogeneous users. We take the variance of the resource requirement into account while making the admission decision. Using simulation results, we show that our admission control algorithm is able to meet the target overload probability over a range of call arrival rates and radio conditions. We also compare our scheme with a simple admission control algorithm and also show how to use our approach for the carrier selection problem",2006,0, 3062,VoIP service quality monitoring using active and passive probes,"Service providers and enterprises all over the world are rapidly deploying Voice over IP (VoIP) networks because of reduced capital and operational expenditure, and easy creation of new services. Voice traffic has stringement requirements on the quality of service, like strict delay and loss requirements, and 99.999% network availability. 
However, IP networks have not been designed to easily meet the above requirements. Thus, service providers need service quality management tools that can proactively detect and mitigate service quality degradation of VoIP traffic. In this paper, we present active and passive probes that enable service providers to detect service impairments. We use the probes to compute the network parameters (delay, loss and jitter) that can be used to compute the call quality as a Mean Opinion Score using a voice quality metric, E-model. These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time so that the impact on the degradation perceived by end-users is minimal",2006,0, 3063,Performance of Real-Time Traffic in EDCA-BASED IEEE 802.11 b/g WLANS,"The IEEE 802.11e draft specification is intended to solve the performance problems of WLANs for real time traffic by extending the original 802.11 medium access control (MAC) protocol and introducing priority access mechanisms in the form of the enhanced distributed channel access mechanism (EDCA) and hybrid coordination channel access (HCCA). The draft standard comes with a lot of configurable parameters for channel access, admission control, etc. but it is not very clear how real time traffic actually performs with respect to capacity and throughput in such WLANs that deploy this upcoming standard. In this report we have provided detailed simulation results on the performance of enterprise-anticipated real time VoIP application and collaborative video conferencing in presence of background traffic in EDCA-based IEEE 802.11 WLANs. We estimate the channel capacity and acceptable load conditions for some important enterprise usage scenarios. Subsequently, admission control limits are experimentally derived for these usage scenarios for voice and video traffic. Our simulations show that admission control greatly helps in maintaining the quality of admitted voice calls and video conferencing sessions that are prioritized as per EDCA mechanisms within acceptable channel load conditions. The use of admission control allows admitted voice calls and video sessions to retain their throughput and delay characteristics while unadmitted traffic (voice/video streams) greatly suffer from poor quality (delays, packet drops, etc.) as the channel load increases",2006,0, 3064,Application Aware Overlay One-to-Many Data Dissemination Protocol for High-Bandwidth Sensor Actuator Networks,"An application-aware deterministic overlay one-to-many (DOOM) protocol is proposed for meeting heterogeneous QoS requirements of multiple end users of high-bandwidth sensor actuator network (HB-SAN) applications. Although DOOM is initially targeted for use in collaborative adaptive systems of weather radars, it has been designed for use in wider class of sensing systems. DOOM protocol performs rate-based application aware congestion control by selecting end user specific subset of the sensor data for transmission thus adapting to available network infrastructure under dynamic network conditions. Performance of DOOM is evaluated for radar networking using a combination of Planetlab as well as an emulation based test-bed. It is shown that DOOM protocol is able to meet individual end user QoS requirements as well as aggregate QoS requirements of different end users. 
Moreover, multiple DOOM streams are friendly to each other as well as to TCP cross-traffic sharing the bottleneck link",2006,0, 3065,A Low-level Simulation Study of Prioritization in IEEE 802.11e Contention-based Networks,"This work deals with the performance evaluation of the IEEE 802.11e EDCA proposal for service prioritization in wireless LANs. A large amount of work has been carried out in the scientific community to evaluate the performance of the EDCA proposal, mainly in terms of throughput and access delay differentiation. However, we argue that further performance insights are needed in order to fully understand the principles behind the EDCA prioritization mechanisms. To this purpose, rather than limit our investigation to throughput and delay performance figures, we take a closer look at their operation also in terms of low-level performance metrics (such as probability of accessing specific channel slots). The paper's contribution is threefold: first, we specify a detailed NS2 simulation model by highlighting the typical misconfigurations and errors that may occur when NS2 is used as a simulation platform for WLANs and we cross-validate the simulation results with our custom-made C++ simulation tool; second, we describe some performance figures related to the different forms of prioritization provided by the EDCA mechanisms; finally, we verify some assumptions commonly used in the EDCA analytical models",2006,0, 3066,Photovoltaic Power Conditioning System With Line Connection,"A photovoltaic (PV) power conditioning system (PCS) with line connection is proposed. Using the power slope versus voltage of the PV array, the maximum power point tracking (MPPT) controller that produces a smooth transition to the maximum power point is proposed. The dc current of the PV array is estimated without using a dc current sensor. A current controller is suggested to provide power to the line with an almost-unity power factor that is derived using the feedback linearization concept. The disturbance of the line voltage is detected using a fast sensing technique. All control functions are implemented in software with a single-chip microcontroller. Experimental results obtained on a 2-kW prototype show high performance such as an almost-unity power factor, a power efficiency of 94%, and a total harmonic distortion (THD) of 3.6%",2006,0, 3067,ReStore: Symptom-Based Soft Error Detection in Microprocessors,"Device scaling and large-scale integration have led to growing concerns about soft errors in microprocessors. To date, in all but the most demanding applications, implementing parity and ECC for caches and other large, regular SRAM structures has been sufficient to stem the growing soft error tide. This will not be the case for long and questions remain as to the best way to detect and recover from soft errors in the remainder of the processor - in particular, the less structured execution core. In this work, we propose the ReStore architecture, which leverages existing performance enhancing checkpointing hardware to recover from soft error events in a low cost fashion. Error detection in the ReStore architecture is novel: symptoms that hint at the presence of soft errors trigger restoration of a previous checkpoint. Example symptoms include exceptions, control flow misspeculations, and cache or translation look-aside buffer misses. Compared to conventional soft error detection via full replication, the ReStore framework incurs little overhead, but sacrifices some amount of error coverage. 
These attributes make it an ideal means to provide very cost-effective error coverage for processor applications that can tolerate a nonzero, but small, soft error failure rate. Our evaluation of an example ReStore implementation exhibits a 2× increase in MTBF (mean time between failures) over a standard pipeline with minimal hardware and performance overheads. The MTBF increases by 20× if ReStore is coupled with protection for certain particularly vulnerable pipeline structures",2006,0, 3068,A behavior-based process for evaluating availability achievement risk using stochastic activity networks,"With the increased focus on the availability of complex, multifunction systems, modeling processes and analysis tools are needed that help the availability systems engineer understand the impact of architectural and logistics design choices concerning system availability. Because many fielded systems are required to achieve a specified minimal availability over a short measurement period, a modeling methodology must also support computation of the distribution of operational availability for the specified measurement period. This paper describes a two-part behavior-based availability achievement risk methodology that starts with a description of the system's availability-related behavior followed by a stochastic activity network-based simulation to obtain a numeric estimate of expected availability and the distribution of availability over a selected time frame. The process shows how the system engineer is freed to explore complex behavior not possible with combinatorial estimation methods in wide use today",2006,0, 3069,Better software reliability by getting the requirements right,"For too long software has been produced using processes that result in a product that is completed late, over-budget, and below the quality expectations of end users. Over the last 25 years a great many incremental, evolutionary, and iterative software development process models have been introduced with the intention of improving software development; yet a recent study by the Standish Group reported that fewer than 28% of software projects are completed on schedule, within budget, and with most of the originally required features. We believe that getting the requirements right is the key to building successful and reliable software products. In the requirements process the product features are defined, the users are identified, and the user interactions with the system are designed. In addition, usage profiles are developed and estimates of the anticipated software reliability based on those profiles and on formal models of the system are obtained. Thus, reliability becomes an integral part of the requirement development process rather than an afterthought of testing at the end of the development process. Although defining the requirements may be an iterative process, a complete and correct set of requirements must be completed before starting to build the product",2006,0, 3070,Risk assessment of real time digital control systems,"This paper describes stochastic methods for assessing risk in integrated hardware and software systems. The methods evaluate availability, outage probabilities, and effectiveness-weighted degraded states based on data from measurements with a specified confidence level. System-level reliability/availability models can also identify the elements where failure rate, recovery probability, or recovery time improvement will provide the greatest benefit. 
The validity of this approach is determined by the extent to which the system failure behavior conforms to a stochastic process (i.e., random, non-deterministic failures). Data from large studies of other high-availability computer systems provide substantial evidence of such behavior in mature systems. The approach is limited to systems with failure rates higher than 10^-6 per hour and availability below 0.999999, i.e., below safety grade. To assess safety-critical systems, the risk assessment method described here can be used as an adjunct for other approaches described in various industry standards that are intended to minimize the likelihood that deterministic defects are introduced into the system design",2006,0, 3071,Safety assessment for safety-critical systems including physical faults and design faults,"Two types of faults, design faults and physical faults, are discussed in this paper. Since they are two mutually exclusive and complete fault types on the fault space, the safety assessment of safety-critical computer systems in this paper considers the hazard contribution from both types. A three-state Markov model is introduced to model safety-critical systems. Steady state safety and mean time to unsafe failure (MTTUF) are the two most important metrics for safety assessment. Two homogeneous Markov models are derived from the three-state Markov model to estimate the steady state safety and the MTTUF. The estimation results are generalized given that the fault space is divided into M mutually exclusive and complete types of faults",2006,0, 3072,Estimation of Nonlinear Errors-in-Variables Models for Computer Vision Applications,"In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same, or superior, performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach toward estimating nonlinear models",2006,0, 3073,Performance Modeling and Evaluation of Distributed Component-Based Systems Using Queueing Petri Nets,"Performance models are used increasingly throughout the phases of the software engineering lifecycle of distributed component-based systems. However, as systems grow in size and complexity, building models that accurately capture the different aspects of their behavior becomes a more and more challenging task. In this paper, we present a novel case study of a realistic distributed component-based system, showing how queueing Petri net models can be exploited as a powerful performance prediction tool in the software engineering process. A detailed system model is built in a step-by-step fashion, validated, and then used to evaluate the system performance and scalability. 
Along with the case study, a practical performance modeling methodology is presented which helps to construct models that accurately reflect the system performance and scalability characteristics. Taking advantage of the modeling power and expressiveness of queueing Petri nets, our approach makes it possible to model the system at a higher degree of accuracy, providing a number of important benefits",2006,0, 3074,Data Mining Based Fuzzy Classification Algorithm for Imbalanced Data,"The elegant fuzzy classification algorithm proposed by Ishibuchi et al. (I-algorithm) has achieved satisfactory performance on many well-known test data sets that have usually been carefully preprocessed. However, the algorithm does not provide satisfactory performance for the problems with imbalanced data that are often encountered in real-world applications. This paper presents an extension of the I-algorithm to the E-algorithm to alleviate the effect of data imbalance. Both the I-algorithm and the E-algorithm are applied to Duke Energy outage data for power distribution systems fault cause identification. Their performance on this real-world imbalanced data set is presented, compared, and analyzed to demonstrate the improvement of the extended algorithm.",2006,0, 3075,"""Word-of-Mouth"" in Radio Access Markets","Opening up interoperability between wide and local area networks seems to be a very promising solution for delivering improved user experience while reducing overall service costs. In a scenario where a single operator owns different types of networks, QoS provision can be achieved by introducing complex multi-system resource management. On the contrary, if local area networks are deployed by different entities, due to the lack of both coordination and centralized RRM management, the experienced QoS may drastically fluctuate, ranging from SLAs in wide area networks to only ""best effort"" expectations in local area networks. In order for ""nomadic"" terminal agents to perform ""informed"" access selection decisions, we propose the adoption of ""word-of-mouth"" (WoM), a novel scheme for sharing, in a peer-to-peer fashion, information about the service quality experienced with different networks. The performance of our proposed WoM scheme has been evaluated for a file pre-fetching service, considering information characterized by various degrees of time criticality, when different RRM strategies are implemented in the local area networks. The results show that if a critical mass of terminal agents exchange experienced QoS information, the overall network selection decision is improved: terminal agents can estimate, beforehand, which type of performance to expect with different candidate networks, and avoid selecting those not satisfying service requirements. This, in turn, brings two main positive effects: on one hand, user-perceived performance is improved, and, on the other hand, the adoption of RRM strategies providing some degree of QoS is incentivized",2006,0, 3076,Toward Formal Verification of 802.11 MAC Protocols: a Case Study of Applying Petri-nets to Modeling the 802.11 PCF,"Centralized control functions for the IEEE 802.11 family of WLAN standards are vital for the distribution of traffic with stringent quality of service (QoS) requirements. These centralized control functions overlay a time-based organizational ""super-frame"" structure on the medium, allocating part of the super-frame to polling traffic and part to contending traffic. 
This allocation directly determines how well the two forms of traffic are supported. Given the vital role of this allocation in the success of a system, we must have confidence in the configuration used, beyond that provided by empirical simulation results. Formal mathematical methods are a means to conduct rigorous analysis that will permit us such confidence, and the Petri-net formalism offers an intuitive representation with formal semantics. We present an extended Petri-net model of the super-frame, and use this model to assess the performance of different super-frame configurations and the effects of different traffic patterns. We believe that using such a model to analyze performance in this manner is new in itself",2006,0, 3077,Fast Variable Block Size Motion Estimation by Adaptive Early Termination,"This paper presents a fast motion estimation algorithm by adaptively changing the early termination threshold for the current accumulated partial sum of absolute difference (SAD) value. The simulation results show that the proposed algorithm can provide similar quality while saving 77.9% and 50.6% of SAD computation when compared with early termination methods in MPEG-4 VM18.0 and H.264 JM9.0, respectively",2006,0, 3078,Component Reusability and Cohesion Measures in Object-Oriented Systems,"In software component reuse processing, the success of software systems is decided by the quality of components. One important characteristic to measure the quality of components is component reusability. Component reusability measures how easily the component can be reused in a new environment. This paper provides a new measure of cohesion developed to assess the reusability of Java components retrieved from the Internet by a search engine. This measure differs from the majority of established metrics in two respects: it reflects the degree of similarity between classes quantitatively, and it also takes account of indirect similarities. An empirical comparison of the new measure with the established metrics is described. The new measure is shown to be consistently superior at ranking components according to their reusability",2006,0, 3079,Fault Tolerance in Mobile Agent Systems by Cooperating the Witness Agents,"Mobile agents travel through servers to perform their programs, and fault tolerance is fundamental and important along their itinerary. In this paper, existing methods of fault tolerance in mobile agents are considered and described. Then a method is considered which uses cooperating agents for fault tolerance and for detecting server and agent failures, meaning three types of agents are involved: the actual agent, which performs programs for its owner; the witness agent, which monitors the actual agent and the witness agent after itself; and the probe, which is sent by the witness agent to recover the actual agent or the witness agent. As the actual agent travels through servers, it creates the witness agents. Scenarios of failure and recovery of servers and agents are discussed in the method. While the actual agent executes, the witness agents are increased with the addition of servers. The proposed scheme minimizes the number of witness agents as far as possible, because consideration and comparison show that keeping all witness agents on the initial servers is not necessary. Simulation of this method is done with C-Sim",2006,0, 3080,Toward Exception-Handling Best Practices and Patterns,"An exception condition occurs when an object or component, for some reason, can't fulfil a responsibility. 
Poor exception-handling implementations can thwart even the best design. It's high time we recognize exception handling's importance to an implementation's overall quality. Agreeing on a reasonable exception-handling style for your application and following a consistent set of exception-handling practices is crucial to implementing software that's easy to comprehend, evolve, and refactor. The longer you avoid exceptions, the harder it is to wedge cleanly designed exception-handling code into working software. To demystify exception-handling design, we must write about - and more widely disseminate - proven techniques, guidelines, and patterns",2006,0, 3081,moPGA: Towards a New Generation of Multi-objective Genetic Algorithms,"This paper describes a multi-objective parameter-less genetic algorithm (moPGA), which combines several recent developments including efficient non-dominated sorting, linkage learning, ε-Dominance, building-block mutation and convergence detection. Additionally, a novel method of clustering in the objective space using an ε-Pareto Set is introduced. Comparisons with well-known multi-objective GAs on scalable benchmark problems indicate that the algorithm scales well with problem size in terms of number of function evaluations and quality of solutions found. moPGA was built for easy usage and hence, in addition to the problem function and encoding, there are only two required user defined parameters: (1) the maximum running time or generations and (2) the precision of the desired solutions (ε).",2006,0, 3082,An Anomaly Detection-Based Classification System,"In this paper, we describe the construction of a classification system based on an anomaly detection system that employs constraint-based detectors, which are generated using a genetic algorithm. The performance of the classification system was evaluated using two benchmark datasets including the Wisconsin breast cancer dataset and the Fisher's iris dataset.",2006,0, 3083,IMPRES: integrated monitoring for processor reliability and security,"Security and reliability in processor based systems are concerns requiring adroit solutions. Security is often compromised by code injection attacks, jeopardizing even 'trusted software'. Reliability is of concern where unintended code is executed in modern processors with ever smaller feature sizes and low voltage swings causing bit flips. Countermeasures by software-only approaches increase code size by large amounts and therefore significantly reduce performance. Hardware assisted approaches add extensive amounts of hardware monitors and thus incur unacceptably high hardware cost. This paper presents a novel hardware/software technique at the granularity of micro-instructions to reduce overheads considerably. Experiments show that our technique incurs an additional hardware overhead of 0.91% and clock period increase of 0.06%. Average clock cycle and code size overheads are just 11.9% and 10.6% for five industry standard application benchmarks. These overheads are far smaller than have been previously encountered",2006,0, 3084,Signature-based workload estimation for mobile 3D graphics,"Until recently, most 3D graphics applications had been regarded as too computationally intensive for devices other than desktop computers and gaming consoles. This notion is rapidly changing due to improving screen resolutions and computing capabilities of mass-market handheld devices such as cellular phones and PDAs. 
As the mobile 3D gaming industry is poised to expand, significant innovations are required to provide users with a high-quality 3D experience under the limited processing, memory and energy budgets that are characteristic of the mobile domain. Energy-saving schemes such as dynamic voltage and frequency scaling (DVFS), as well as system-level power and performance optimization methods for mobile devices require accurate and fast workload prediction. In this paper, we address the problem of workload prediction for mobile 3D graphics. We propose and describe a signature-based estimation technique for predicting 3D graphics workloads. By analyzing a gaming benchmark, we show that monitoring specific parameters of the 3D pipeline provides better prediction accuracy over conventional approaches. We describe how signatures capture such parameters concisely to make accurate workload predictions. Signature-based prediction is computationally efficient because first, signatures are compact, and second, they do not require elaborate model evaluations. Thus, they are amenable to efficient, real-time prediction. A fundamental difference between signatures and standard history-based predictors is that signatures capture previous outcomes as well as the cause that led to the outcome, and use both to predict future outcomes. We illustrate the utility of the signature-based workload estimation technique by using it as a basis for DVFS in 3D graphics pipelines",2006,0, 3085,A Software Component Quality Model: A Preliminary Evaluation,"Component-based software development is becoming more generalized, representing a considerable market for the software industry. The perspective of reduced development costs and shorter life cycles acts as a motivation for this expansion. However, several technical issues remain unsolved before the software components industry reaches the maturity exhibited by other component industries. Problems such as the component selection by their integrators and the uncertain quality of third-party developed components bring new challenges to the software engineering community. On the other hand, the software components certification area is still immature and further research is needed in order to obtain well-defined standards for certification. In this way, we aim to propose a component quality model, describing consistent and well-defined characteristics, quality attributes and related metrics for component evaluation. A preliminary evaluation to analyze the results of using the component quality model proposed is also presented",2006,0, 3086,State of the Art and Practice of OpenSource Component Integration,"The open source software (OSS) development approach has become a remarkable option to consider for cost-efficient, high-quality software development. Utilizing OSS as part of an in-house software application requires the software company to take the role of a component integrator. In addition, integrating OSS as part of in-house software has a few differences compared to integrating closed source software and in-house software, such as access to source code and the fact that OSS evolves differently than closed source software. This paper describes the current state of the art and practice of open source integration techniques. The main observations are that the lack of documentation and heterogeneity of platforms are problems that neither the state of the art nor practice could solve. 
In addition, although the literature provides techniques and methods for predicting and solving both architecture- and component-level integration problems, these were not used in practice. Instead, companies relied on experience and rules of thumb",2006,0, 3087,Software Defect Identification Using Machine Learning Techniques,"Software engineering is a tedious job that includes people, tight deadlines and limited budgets. Delivering what the customer wants involves minimizing the defects in the programs. Hence, it is important to establish quality measures early on in the project life cycle. The main objective of this research is to analyze problems in software code and propose a model that will help catch those problems earlier in the project life cycle. Our proposed model uses machine learning methods. Principal component analysis is used for dimensionality reduction, and decision tree, multilayer perceptron and radial basis functions are used for defect prediction. The experiments in this research are carried out with different software metric datasets that are obtained from real-life projects of three big software companies in Turkey. We can say that the improved method we propose brings satisfactory results in terms of defect prediction",2006,0, 3088,Managing Risk of Inaccurate Runtime Estimates for Deadline Constrained Job Admission Control in Clusters,"The advent of service-oriented grid computing has resulted in the need for grid resources such as clusters to enforce user-specific service needs and expectations. Service level agreements (SLAs) define conditions which a cluster needs to fulfill for various jobs. An example of an SLA requirement is the deadline by which a job has to be completed. In addition, these clusters implement job admission control so that overall service performance does not deteriorate due to accepting an excessive number of jobs. However, their effectiveness is highly dependent on accurate runtime estimates of jobs. This paper thus examines the adverse impact of inaccurate runtime estimates for deadline-constrained job admission control in clusters using the earliest deadline first (EDF) strategy and a deadline-based proportional processor share strategy called Libra. Simulation results show that an enhancement called LibraRisk can manage the risk of inaccurate runtime estimates better than EDF and Libra by considering the risk of deadline delay",2006,0, 3089,Finite Horizon QoS Prediction of Reconfigurable Firm Real-Time Systems,"Updating real-time system software is often needed in response to errors and added requirements to the software. Stopping a running application, updating the software, and then restarting the application is not suitable for systems with high availability requirements. On the other hand, dynamically updating a system may increase the execution time of the tasks, thus degrading the performance of the system. Degradation is not acceptable for performance-critical real-time systems as there are strict requirements on the performance. In this paper we present an approach that enables dynamic reconfiguration of a real-time system, where the performance of the system during a reconfiguration satisfies a given worst-case performance specification. 
Evaluation shows that the presented method is efficient in guaranteeing the worst-case performance of dynamically reconfigurable firm real-time systems",2006,0, 3090,Applying System Execution Modeling Tools to Evaluate Enterprise Distributed Real-time and Embedded System QoS,"Component middleware is popular for enterprise distributed systems because it provides effective reuse of the core intellectual property (i.e., the ""business logic""). Component-based enterprise distributed real-time and embedded (DRE) systems, however, incur new integration problems associated with component configuration and deployment. New research is therefore needed to minimize the gap between the development and deployment/configuration of components, so that deployment and configuration strategies can be evaluated well before system integration. This paper uses an industrial case study from the domain of shipboard computing to show how system execution modeling tools can provide software and system engineers with quantitative estimates of system bottlenecks and performance characteristics to help evaluate the performance of component-based enterprise DRE systems and reduce time/effort in the integration phase. The results from our case study show the benefits of system execution modeling tools and pinpoint where more work is needed",2006,0, 3091,"Monitoring and Improving the Quality of ODC Data using the ""ODC Harmony Matrices"": A Case Study","Orthogonal defect classification (ODC) is an advanced software engineering technique to provide in-process feedback to developers and testers using defect data. ODC institutionalization in a large organization involves some challenging roadblocks such as the poor quality of the collected data leading to wrong analysis. In this paper, we have proposed a technique ('Harmony Matrix') to improve the data collection process. The ODC Harmony Matrix has useful applications. At the individual defect level, results can be used to raise alerts to practitioners at the point of data collection if a low probability combination is chosen. At the higher level, the ODC Harmony Matrix helps in monitoring the quality of the collected ODC data. The ODC Harmony Matrix complements other approaches to monitor and enhances the ODC data collection process and helps in successful ODC institutionalization, ultimately improving both the product and the process. The paper also describes precautions to take while using this approach",2006,0, 3092,Experiences with product line development of embedded systems at Testo AG,"Product line practices are increasingly becoming popular in the domain of embedded software systems. This paper presents results of assessing success, consistency, and quality of Testo's product line of climate and flue gas measurement devices after its construction and the delivery of three commercial products. The results of the assessment showed that the incremental introduction of architecture-centric product line development can be considered successful even though there is no quantifiable reduction of time-to-market as well as development and maintenance costs so far. The success is mainly shown by the ability of Testo to develop more complex products and the satisfaction of the involved developers. 
A major issue encountered is ensuring the quality of reusable components and the conformance of the products to the architecture during development and maintenance",2006,0, 3093,Transient fault-tolerance through algorithms,"This article describes that single-version enhanced processing logic or algorithms can be very effective in gaining dependable computing through hardware transient fault tolerance (FT) in an application system. Transients often cause soft errors in a processing system resulting in mission failure. Errors in program flow, instruction codes, and application data are often caused by electrical fast transients. However, firmware and software fixes can have an important role in designing an ESD, or EMP-resistant system and are more cost effective than hardware. This technique is useful for detecting and recovering transient hardware faults or random bit errors in memory while an application is in execution. The proposed single-version software fix is a practical, useful, and economic tool for both offline and online memory scrubbing of an application system without using conventional N versions of software (NVS) and hardware redundancy in an application like a frequency measurement system",2006,0, 3094,An efficient SNR scalability coding framework hybrid open-close loop FGS coding,"This paper presents a novel high-efficient hybrid open-close loop based fine granularity scalable (HOCFGS) coding framework supporting different decoding complexity applications. The open-loop motion compensation for inter-pictures is introduced to efficiently exploit the temporal correlation among adjacent pictures for both base layer and FGS enhancement layer within a wide bit-rate range. An efficient rate-distortion optimized macro-block mode decision rule is used to reduce drifting error and achieve comparable coding performance at the lowest bit-rate point (base layer) with non-scalable coding. An approach like MPEG-4 FGS coding with close-loop motion compensation only at base layer is used for some inter-pictures to stop the drifting error propagation. Furthermore, to achieve better coding performance for these inter-pictures, block based progressive fine granularity scalable (BLPFGS) is introduced, in which leaky prediction is used to generate high-quality reference for FGS enhancement layer. In BLPFGS coding, efficient bit-plane coding and de-blocking techniques are investigated to improve the coding performance for FGS enhancement layer, especially at low bit-rate points. The coding performance for the proposed method is verified by integrating it into MPEG SVC reference software",2006,0, 3095,A case-study on multimedia applications for the XiRisc reconfigurable processor,"Embedded real-time multimedia applications pose several challenges in order to satisfy increasing quality of service (QoS) and energy consumption constraints, that are hardly matched by the capabilities of general-purpose standard processors. Reconfigurable processors, coupling the flexibility of software-programmable devices with the computational efficiency of application specific architectures, represent an appealing trade-off for next generation devices in the digital signal processing application domain. In this paper, we present a benchmarking application for the XiRisc reconfigurable processor, based on a public release of the MPEG-2 video encoder. 
The introduction of the reconfigurable logic gives a 5× performance improvement and a 66% energy saving",2006,0, 3096,Improved refinement search for H.263 to H.264/AVC transcoding based on the minimum cost tendency search,"An improved refinement search method for transcoding from H.263 to H.264/AVC is proposed in this paper. Many existing motion re-estimation methods refine the input motion vector (MV) with a small search range, which is usually input MV biased. Motion estimation (ME) in H.263 usually does not consider the rate required for coding the MV, and hence, the input MV may incur a large cost in H.264/AVC. To overcome this problem, we introduce a refinement search method, called minimum cost tendency search (MCTS), which takes the difference between the cost functions for ME in H.263 and H.264/AVC into consideration. The input MV and the predictor MV are used as two anchor points. The proposed MCTS starts searching from the anchor point with the higher cost towards the other. Finally, the best point is chosen as the center for further refinement. The performance of MCTS is evaluated by comparing it with full search, FME in the JM software and a refinement scheme using a small diamond pattern around the input MV (RSD). Experimental results show the proposed MCTS performs more stably than FME and RSD over a wide range of output video quality",2006,0, 3097,Measuring Instrument of Parameters of Quality of Electric Energy,"In the electric power industry, the quality of electric energy plays an important role, as the suitability of electric energy for use depends on its quality. Within this work, an electrotechnical complex for estimating the quality of electric energy is developed and investigated",2006,0, 3098,Performance evaluation of maximal ratio combining diversity over the Weibull fading channel in presence of co-channel interference,"The Weibull distribution has recently attracted much attention among the radio community as a statistical model that better describes the fading phenomenon on wireless channels. In this paper, we consider a multiple access system in which the desired signal as well as the co-channel interferers are each subject to Weibull fading. We analyze the performance of the L-branch maximal ratio combining (MRC) receiver in terms of the outage probability in such a scenario in the two cases where the diversity branches are assumed to be independent or correlated. The analysis is also applicable to the cases where the diversity branches and/or the interferers' fading amplitudes are non-identically distributed. Due to the difficulty of handling the resulting outage probability expressions numerically using the currently available mathematical software packages, we alternatively propose using Padé approximation (PA) to make the results numerically tractable. We provide numerical results for different numbers of interferers, different numbers of diversity branches as well as different degrees of correlation and power unbalancing between diversity branches.
All our numerical results are verified by means of Monte-Carlo simulations and excellent agreement between the two sets is observed",2006,0, 3099,A Software Simulation Study of a MD DS/SSMA Communication System with Adaptive Channel Coding,"Studies have shown that adaptive forward error correction (FEC) coding schemes enable a communication system to take advantage of varying channel conditions by switching to less powerful and/or less redundant FEC channel codes when conditions are good, thus enabling an increase in system throughput. The focus of this study is the simulation performance of a complete simulated multi-dimensional (MD) direct-sequence spread spectrum multiple access (DS/SSMA) communication system that employs an advanced adaptive channel coding scheme. The system is simulated and evaluated over a fully user-definable software-based multi-user (MU) multipath fading channel simulator (MFCS). Channel conditions are varied and the switching and adaptation performance of the system is monitored and evaluated. Sensing for adaptation is made possible by a sophisticated quality-of-service monitoring unit (QoSMU) that uses a pseudo-error-rate (PER) extrapolation technique to estimate the system's true probability-of-error in real-time, without the need for known transmitted data. The system attempts to keep the estimated bit-error-rate (BER) performance within a predetermined range by switching between different FEC codes as conditions change. This paper commences with a short overview of each of the functional units of the system. Lastly, the simulation results for the coded and uncoded BER performances, as well as the real-time adaptation performance of the system, are presented and discussed. This paper conclusively shows that adaptive coded systems have large throughput utilization advantages over fixed coded systems",2006,0, 3100,Prediction Strategies for Proactive Management in Dynamic Distributed Systems,"In real-life distributed systems, dynamic changes are unavoidable because of degeneration or improvement in system performance. Hence, understanding the dynamic changes and ""catching the trend"" of the changes in distributed systems is very important for distributed systems management. In order to model the dynamic changes in distributed systems, temporal extensions of Bayesian networks are employed to address the temporal factors and to model the dynamic changes of managed entities and the dependencies between them. Furthermore, the prediction capabilities are investigated by means of the relevant inference techniques when imprecise and dynamic management information occurs in the distributed system",2006,0, 3101,Simple Shadow Removal,"Given the location of shadows, how can we obtain high-quality shadow-free images? Several methods have been proposed so far, but they either introduce artifacts or can be difficult to implement. We propose here a simple method that results in virtually error- and shadow-free images in a very short time. Our approach is based on the insight that shadow regions differ from their shadow-free counterparts by a single scaling factor. We derive a robust method to obtain that factor. We show that for complex scenes - containing many disjointed shadow regions - our new method is faster and more robust than others previously published.
The method delivers good performance on a variety of outdoor images",2006,0, 3102,Analysis of Restart Mechanisms in Software Systems,"Restarts or retries are a common phenomenon in computing systems, for instance, in preventive maintenance, software rejuvenation, or when a failure is suspected. Typically, one sets a time-out to trigger the restart. We analyze and optimize time-out strategies for scenarios in which the expected required remaining time of a task is not always decreasing with the time invested in it. Examples of such tasks include the download of Web pages, randomized algorithms, distributed queries, and jobs subject to network or other failures. Assuming the independence of the completion time of successive tries, we derive computationally attractive expressions for the moments of the completion time, as well as for the probability that a task is able to meet a deadline. These expressions facilitate efficient algorithms to compute optimal restart strategies and are promising candidates for pragmatic online optimization of restart timers",2006,0, 3103,Design by Contract to Improve Software Vigilance,"Design by contract is a lightweight technique for embedding elements of formal specification (such as invariants, pre and postconditions) into an object-oriented design. When contracts are made executable, they can play the role of embedded, online oracles. Executable contracts allow components to be responsive to erroneous states and, thus, may help in detecting and locating faults. In this paper, we define vigilance as the degree to which a program is able to detect an erroneous state at runtime. Diagnosability represents the effort needed to locate a fault once it has been detected. In order to estimate the benefit of using design by contract, we formalize both notions of vigilance and diagnosability as software quality measures. The main steps of measure elaboration are given, from informal definitions of the factors to be measured to the mathematical model of the measures. As is the standard in this domain, the parameters are then fixed through actual measures, based on a mutation analysis in our case. Several measures are presented that reveal and estimate the contribution of contracts to the overall quality of a system in terms of vigilance and diagnosability",2006,0, 3104,Development of circuit models for extractor components in high power microwave sources,"Summary form only given. The state-of-the-art in high power microwave (HPM) sources has greatly improved in recent years, in part due to advances in the computational tools available to analyze such devices. Chief among these advances is the widespread use of parallel particle-in-cell (PIC) techniques. Parallel PIC software allows high fidelity, three-dimensional, electromagnetic simulations of these complex devices to be performed. Despite these advances, however, parallel PIC software could be greatly supplemented by fast-running parametric codes specifically designed to mimic the behavior of the source in question. These tools can then be used to develop zero-order point designs for eventual assessment via full PIC simulation. One promising technique for these parametric formulations is circuit models, where in the full field description is reduced to capacitances, inductances, and resistances that can be quickly solved to yield small signal growth rates, resonant frequencies, quality factors, and potentially efficiencies. 
Building on the extensive literature from the vacuum electronics community, this poster will investigate the circuit models associated with the purely electromagnetic components of the extractor in the absence of space charge. Specifically, three-dimensional time-domain computational electromagnetics (AFRL's ICEPIC software) will be used to investigate the modification of the resonant frequencies and mode quality factors as a function of slot and load geometry. These field calculations will be reduced to circuit parameters for potential inclusion in parametric models, and the fidelity of the resulting description will be assessed",2006,0, 3105,On the Use of Mutation Faults in Empirical Assessments of Test Case Prioritization Techniques,"Regression testing is an important activity in the software life cycle, but it can also be very expensive. To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of test case prioritization techniques is to increase a test suite's rate of fault detection (how quickly, in a run of its test cases, that test suite can detect faults). Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited primarily to hand-seeded faults, largely due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults and that the use of hand-seeded faults can be problematic for the validity of empirical results focusing on fault detection. We have therefore designed and performed two controlled experiments assessing the ability of prioritization techniques to improve the rate of fault detection of test case prioritization techniques, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. More importantly, a comparison of our results with those collected using hand-seeded faults reveals several implications for researchers performing empirical studies of test case prioritization techniques in particular and testing techniques in general",2006,0, 3106,A case study of supply chain management in oil industry,"The paper deals with business renovation, the effective utilisation of information technology and the role of business process modelling in supply chain integration project in oil industry. The main idea is to show how the performance of the supply chain can be improved with the integration of various tiers in the chain. Integration is a prerequisite for the effective sharing and utilisation of information between different companies in the chain. Simulation-based methodology for measuring the benefits combines the simulation of business processes with the simulation of supply and demand. 
A case study of the procurement process in a petrol company is shown with old and renewed business process models and changes in lead-times, process execution costs, quality of the process and inventory costs are estimated",2006,0, 3107,Dependability analysis: performance evaluation of environment configurations,Prototyping-based fault injection environments are employed to perform dependability analysis and thus predict the behavior of circuits in presence of faults. A novel environment has been recently proposed to perform several types of dependability analyses in a common optimized framework. The approach takes advantage of hardware speed and of software flexibility to achieve optimized trade-offs between experiment duration and processing complexity. This paper discusses the possible repartition of tasks between hardware and embedded software with respect to the type of circuit to analyze and to the instrumentation achieved. The performances of the approach are evaluated for each configuration of the environment,2006,0, 3108,Practical application of probabilistic reliability analyses,"Liberalization of the energy markets has increased the cost pressure on network operators significantly. Corresponding cost saving measures in general will have negative effects on quality of supply, especially on supply reliability. On the other hand, a decrease of supply reliability is not acceptable for customers and politicians - and the public awareness was focused by several blackouts in the European and American transmission systems. In order to handle this delicate question of balancing network costs and supply reliability, detailed and above all quantitative information is required in the network planning process. A suitable tool for this task is probabilistic reliability analysis, which has already been in use for several years successfully. The method and possible applications are briefly described here and application is demonstrated with various examples from practical studies focusing on distribution systems. The results prove that reliability analyses are becoming an indispensable component of customer-oriented network planning",2006,0, 3109,Investigation of radiometric partial discharge detection for use in switched HVDC testing,"This paper reports on initial trials of a non-contact radio frequency partial discharge detection technique that has potential for use within fast switching HVDC test systems. Electromagnetic environments of this type can arise within important electrical transmission nodes such converter stations, so the methods described could in future be useful for condition monitoring purposes. The radiometric technique is outlined and the measurement system and its components are described. Preliminary field trials are reported and results presented for a discharge detected in part of the HV test system during set-up for long-term testing of a reactor. The calculated and observed locations of the discharge were in agreement to within 60 cm inside a test housing of diameter 5 m and height 8 m. Techniques for improving the location accuracy are discussed. The issue of data volume presents a considerable challenge for the RF measurement techniques. 
On the basis of observations, strategies for moving towards automated interpretation of the partial discharge signals are proposed, which will make use of intelligent software techniques",2006,0, 3110,Fault Tolerant Control Research for High-Speed Maglev System with Sensor Failure,"Sensor faults occurring in the Maglev train's suspension system reduce the running performance and can even disable the suspension system. Especially in the high-speed Maglev train, the resulting loss is unpredictable. This paper focuses on the fault tolerant control problem of the suspension system. Methods of control strategy reconstruction and state estimation were adopted. Based on the model of the simplified suspension system, the reconstructed controller and state estimator were designed. Furthermore, simulation analysis of the static and dynamic levitation process was carried out in Simulink. It is clearly observed that the FTC scheme can maintain static and dynamic performance in conformity with the original system. Finally, a new FTC index used to judge the FTC problem of the unstable system is brought forward",2006,0, 3111,Continuous Measurement of Aggregate Size and Shape by Image Analysis of a Falling Stream,"Quality control of aggregate is very important for mining, mineral processing and aggregate production. In the last decade, several authors presented a system for automatic image analysis of aggregates on a moving conveyor belt. The test results showed that the system was capable of estimating size distributions of aggregates in real time, but could not give any information about shape(s) because of the difficulties of handling overlapping aggregates. As a consequence we initiated a study using an on-line computer system to analyse aggregates in a gravitational flow, i.e. a falling stream of particles. Compared with images from a moving conveyor belt, in gravity flow both aggregates and background appear more or less afflicted by overlaps. This, apart from alleviating the problems of overlapping, implies that the size and shape of aggregates can be measured more accurately. The system hardware includes a CCD camera, mounted at the end of a conveyor belt, an image frame grabber and a PC computer. The system software consists of image acquisition and selection, grey level image binarization, splitting of touching aggregates based on polygon approximation and size and shape analysis based on an oriented Feret box. The system has been tested in a quarry for aggregate production at Vasteras, Central Sweden",2006,0, 3112,Multi-function Intelligent Instrument of Vehicle Detection Based on Embedded System,"By analyzing various application situations of speed and sideslip test-beds, a multi-function intelligent instrument for speed and sideslip detection based on an embedded system was designed. Its operating principle, hardware constitution and system software design method were described. It has such functions as data acquisition, data processing and fault diagnosis. This instrument supports three kinds of sensors. It is suitable for different applications, and the application results indicate that it is not only a low-cost and highly universal speed and sideslip detection instrument, but also the only small-scale multi-function instrument of its kind to date",2006,0, 3113,Sonar Power Amplifier Testing System Based on Virtual Instrument,"This paper presents an intelligent test system for power amplifiers by using virtual instrument technology.
The automatic range switching circuit and Hall current transducers are applied in this system, effectively realizing the measurement of wide-range voltage signals and large currents. LabVIEW, a graphical programming language, and expert system technology are employed to develop the testing software, implementing performance tests of sonar power amplifier parts, measurement of technical indices and fault diagnosis. The proposed system has been used in a certain type of sonar power amplifier system. The test results show that the flexibility and data processing capacity of the test instruments are greatly improved, satisfying more measurement needs, and that the proposed system can detect and locate the fault position quickly",2006,0, 3114,Utilizing Computational Intelligence in Estimating Software Readiness,"Defect tracking using computational intelligence methods is used to predict software readiness in this study. By comparing the predicted number of faults with the number of faults discovered in testing, software managers can decide whether the software is ready to be released or not. Our predictive models can predict: (i) the number of faults (defects), (ii) the amount of code changes required to correct a fault and (iii) the amount of time (in minutes) to make the changes in respective object classes using software metrics as independent variables. The use of a neural network model with a genetic training strategy is introduced to improve prediction results for estimating software readiness in this study. Existing object-oriented metrics and complexity software metrics are used in the Business Tier neural network based prediction model. New sets of metrics have been defined for the Presentation Logic Tier and Data Access Tier.",2006,0, 3115,A Computational Intelligence Strategy for Software Complexity Prediction,"The automated prediction of software module complexity using quantitative measures is a desirable goal in the area of software engineering. A computational intelligence based strategy, stochastic feature selection, is investigated as a classification system to determine the subset of software measures that yields the greatest predictive power for module complexity. This strategy stochastically examines subsets of software measures for predictive power. Its effectiveness is measured against a conventional artificial neural network benchmark.",2006,0, 3116,"A High-Precision NDIR Gas Sensor for Automotive Applications","A new high-precision spectroscopic gas sensor measuring carbon dioxide (CO2) for harsh environmental conditions of automotive applications is presented. The carbon dioxide concentration is the primary parameter for sensing in cabin air quality, as well as an important safety parameter when R744 (carbon dioxide) is used as the refrigerant in the air conditioning system. The automotive environment challenges the potential sensor principles because of the wide temperature range from -40°C to +85°C, the atmospheric pressure from 700 to 1050 mbar, and relative humidity from 0% to 95%. The presented sensor system is based on the nondispersive infrared principle with new features for reaching high precision criteria and for enhancing long-term stability. A second IR source is used for internal recalibration of the primary IR source, redundancy purposes, and software plausibility checks. The CO2 sensor system achieves an accuracy of better than ±5.5% over the whole temperature, pressure, and humidity ranges, with a resolution below 15 ppm and a response time shorter than 5 s.
The operating time of the sensor system is more than 6000 h over a corresponding lifetime of more than 15 years. Experimental results show outstanding results for the intended automotive applications",2006,0, 3117,A Modular Software System to Assist Interpretation of Medical Images—Application to Vascular Ultrasound Images,"Improvements in medical imaging technology have greatly contributed to early disease detection and diagnosis. However, the accuracy of an examination depends on both the quality of the images and the ability of the physician to interpret those images. Use of output from computerized analysis of an image may facilitate the diagnostic tasks and, potentially improve the overall interpretation of images and the subsequent patient care. In this paper, Analysis, a modular software system designed to assist interpretation of medical images, is described in detail. Analysis allows texture and motion estimation of selected regions of interest (ROIs). Texture features can be estimated using first-order statistics, second-order statistics, Laws' texture energy, neighborhood gray-tone difference matrix, gray level difference statistics, and the fractal dimension. Motion can be estimated from temporal image sequences using block matching or optical flow. Image preprocessing, manual and automatic definition of ROIs, and dimensionality reduction and clustering using fuzzy c-means, are also possible within Analysis. An important feature of Analysis is the possibility for online telecollaboration between health care professionals under a secure framework. To demonstrate the applicability and usefulness of the system in clinical practice, Analysis was applied to B-mode ultrasound images of the carotid artery. Diagnostic tasks included automatic segmentation of the arterial wall in transverse sections, selection of wall and plaque ROIs in longitudinal sections, estimation of texture features in different image areas, motion analysis of tissue ROIs, and clustering of the extracted features. It is concluded that Analysis can provide a useful platform for computerized analysis of medical images and support of diagnosis",2006,0, 3118,Emulation of Software Faults: A Field Data Study and a Practical Approach,"The injection of faults has been widely used to evaluate fault tolerance mechanisms and to assess the impact of faults in computer systems. However, the injection of software faults is not as well understood as other classes of faults (e.g., hardware faults). In this paper, we analyze how software faults can be injected (emulated) in a source-code independent manner. We specifically address important emulation requirements such as fault representativeness and emulation accuracy. We start with the analysis of an extensive collection of real software faults. We observed that a large percentage of faults falls into well-defined classes and can be characterized in a very precise way, allowing accurate emulation of software faults through a small set of emulation operators. A new software fault injection technique (G-SWFIT) based on emulation operators derived from the field study is proposed. This technique consists of finding key programming structures at the machine code-level where high-level software faults can be emulated. The fault-emulation accuracy of this technique is shown. This work also includes a study on the key aspects that may impact the technique accuracy. 
The portability of the technique is also discussed and it is shown that a high degree of portability can be achieved",2006,0, 3119,User Interface Design for VCMMs: an Approach to Increase Fidelity and Usability,"In recent years the declaration of uncertainties according to the ISO GUM (1995) has pushed the development of the so-called Virtual Coordinate Measuring Machines - VCMMs as a software tool to generate uncertainty estimates for industrial applications. That is, a tool to calculate measurement uncertainty estimates based on real metrology information to be used in the context of quality systems. Recent tests on commercial VCMMs have shown that the fidelity of the simulation process to the real world is as important as the numerical correctness of uncertainty calculations. In this paper a VCMM interface design is proposed, focusing on techniques to increase the software fidelity and usability in industrial metrology applications. Results show that the quality of the simulation can be improved by proper user guidance as much as the software usability by the application of basic principles of software prototyping using the MVC model",2006,0, 3120,Improving a 3 Data-Source Diagnostic Method,"In this paper, we present improvements to a diagnosis method for bridging faults combining three different data sources. The first data source is a set of IDDQ measurements used to identify the most probable fault type. The second source is a list of parasitic capacitances extracted from layout and used to create a list of realistic potential bridging fault sites. The third source is logical faults detectable at the primary outputs (including scan flip flops), used to limit the number of suspected gates. Combining these data significantly reduces the number of potential fault sites to consider in the diagnosis process. The modifications proposed in this paper allow the technique to be even more suitable for very large devices. Results obtained with different circuits are provided",2006,0, 3121,Weighted Proportional Sampling: A Generalization for Sampling Strategies in Software Testing,"Current activities to measure the quality of software products rely on software testing. The size and complexity of software systems make it almost impossible to perform complete coverage testing. During the past several years, many techniques to improve the test effectiveness (i.e., the ability to find faults) have been proposed to address this issue. Two examples of such strategies are random testing and partition testing. Both strategies follow an input domain sampling to perform the testing process. The procedure and assumptions for selecting these points seem to be different for both strategies: random testing considers only the probability of each sub-domain (i.e., uniform sampling) while partition testing considers only the sampling rate of each sub-domain (i.e., proportional sampling). This paper describes a more general sampling strategy, named the weighted proportional sampling strategy. This strategy unifies both strategies into a general model that encompasses both of them as special cases. This paper also proposes an optimization model to determine the number of sampled points depending on the sampling strategy",2006,0, 3122,Estimating the Heavy-Tail Index for WWW Traces,"Heavy-tailed behavior of WWW traffic has serious implications for the design and performance analysis of computer networks. This behavior gives rise to rare events which could be catastrophic for the QoS of an application.
Thus, an accurate detection and quantification of the degree of thickness of a distribution is required. In this paper we detect and quantify the degree of tail-thickness for the file size and transfer times distributions of several WWW traffic traces. For accomplishing the above, the behavior of four estimators in real WWW traces characteristics is studied. We show that Hill-class estimators present varying degrees of accuracy and should be used as a first step towards the estimation of the tail-index. The QQ estimator, on the other hand, is shown to be more robust and adaptable, thus giving rise to more confident point estimates",2006,0, 3123,Developing an Effective Validation Strategy for Genetic Programming Models Based on Multiple Datasets,"Genetic programming (GP) is a parallel searching technique where many solutions can be obtained simultaneously in the searching process. However, when applied to real-world classification tasks, some of the obtained solutions may have poor predictive performances. One of the reasons is that these solutions only match the shape of the training dataset, failing to learn and generalize the patterns hidden in the dataset. Therefore, unexpected poor results are obtained when the solutions are applied to the test dataset. This paper addresses how to remove the solutions which will have unacceptable performances on the test dataset. The proposed method in this paper applies a multi-dataset validation phase as a filter in GP-based classification tasks. By comparing our proposed method with a standard GP classifier based on the datasets from seven different NASA software projects, we demonstrate that the multi-dataset validation is effective, and can significantly improve the performance of GP-based software quality classification models",2006,0, 3124,An FPGA-Based Application-Specific Processor for Efficient Reduction of Multiple Variable-Length Floating-Point Data Sets,"Reconfigurable computers (RCs) that combine generalpurpose processors with field-programmable gate arrays (FPGAs) are now available. In these exciting systems, the FPGAs become reconfigurable application-specific processors (ASPs). Specialized high-level language (HLL) to hardware description language (HDL) compilers allow these ASPs to be reconfigured using HLLs. In our research we describe a novel toroidal data structure and scheduling algorithm that allows us to use an HLL-to-HDL environment to implement a high-performance ASP that reduces multiple, variable-length sets of 64-bit floating-point data. We demonstrate the effectiveness of our ASP by using it to accelerate a sparse matrix iterative solver. We compare actual wall clock run times of a production-quality software iterative solver with an ASP-augmented version of the same solver on a current generation RC. Our ASP-augmented solver runs up to 2.4 times faster than software. Estimates show that this same design can run over 6.4 times faster on a next-generation RC.",2006,0, 3125,Automated Information Aggregation for Scaling Scale-Resistant Services,"Machine learning provides techniques to monitor system behavior and predict failures from sensor data. However, such algorithms are ""scale resistant"" $high computational complexity and not parallelizable. The problem then becomes identifying and delivering the relevant subset of the vast amount of sensor data to each monitoring node, despite the lack of explicit ""relevance"" labels. The simplest solution is to deliver only the ""closest"" data items under some distance metric. 
We demonstrate a better approach using a more sophisticated architecture: a scalable data aggregation and dissemination overlay network uses an influence metric reflecting the relative influence of one node's data on another, to efficiently deliver a mix of raw and aggregated data to the monitoring components, enabling the application of machine learning tools on real-world problems. We term our architecture level of detail after an analogous computer graphics technique",2006,0, 3126,Multilevel Modelling Software Development,"Different from other engineering areas, the level of reuse in software engineering is very low. Also, developing large-scale applications which involve thousands of software elements such as classes and thousands of interactions among them is a complex and error-prone task. Industry currently lacks modelling practices and modelling tool support to tackle these issues. Model driven development (MDD) has emerged as an approach to diminishing software development complexity. We claim that models alone are not enough to tackle low reuse and complexity. Our contribution is a multilevel modelling development (MMD) framework whereby models are defined at different abstraction levels. A modelling level is constructed out by assembling software elements defined at the adjacent lower-level. MMD effectively diminish development complexity and facilitates large-scale reuse",2006,0, 3127,PAFBV: A Novel Parallel Aggregated and Folded Bit Vector Packet Classification Scheme for IPv6 Routers,"Packet classification is a central function for a number of network applications, such as routing and firewalls. Most existing algorithms for packet classification scale poorly in either time or space when the databases grow in size. The scalable algorithm Aggregated Bit Vector (ABV) is an improvement on the Lucent bit vector scheme (BV), but has some limitations such as large variance in performance, rule mapping back and preprocessing cost. Our paradigm, Parallel Aggregated and Folded bit vector (PAFBV) seeks to reduce false matches while keeping the benefits of bit vector aggregation and avoiding rearrangement. This model also uses multi-ary trie structures to reduce the seek time of bit vectors and thereby increasing the speed of packet classification. The objective of this paper is to propose a scalable packet classification algorithm with increased speed for even large database size for IPv6 addresses",2006,0, 3128,Risk: A Good System Security Measure,"What gets measured gets done. Security engineering as a discipline is still in its infancy. The field is hampered by its lack of adequate measures of goodness. Without such a measure, it is difficult to judge progress and it is particularly difficult to make engineering trade-off decisions when designing systems. The qualities of a good metric include that it: (1) measures the right thing, (2) is quantitatively measurable, (3) can be measured accurately, (4) can be validated against ground truth, and (5) be repeatable. By ""measures the right thing"", the author means that it measures some set of attributes that directly correlates to closeness to meeting some stated goal. For system security, the author sees the right goal as ""freedom from the possibility of suffering damage or loss from malicious attack."" Damage or loss applies to the mission effectiveness of the information infrastructure of a system. 
The mission can be maximizing profits while making quality cars or it could be defending an entire nation against foreign incursion",2006,0, 3129,On the Distribution of Property Violations in Formal Models: An Initial Study,"Model-checking techniques are successfully used in the verification of both hardware and software systems of industrial relevance. Unfortunately, the capability of current techniques is still limited and the effort required for verification can be prohibitive (if verification is possible at all). As a complement, fast, but incomplete, search tools may provide practical benefits not attainable with full verification tools, for example, reduced need for manual abstraction and fast detection of property violations during model development. In this report we investigate the performance of a simple random search technique. We conducted an experiment on a production-sized formal model of the mode-logic of a flight guidance system. Our results indicate that random search quickly finds the vast majority of property violations in our case-example. In addition, the times to detect various property violations follow an acutely right-skewed distribution and are highly biased toward the easy side. We hypothesize that the observations reported here are related to the phase transition phenomenon seen in Boolean satisfiability and other NP-complete problems. If so, these observations could be revealing some of the fundamental aspects of software (model) faults and have implications on how software engineering activities, such as analysis, testing, and reliability modeling, should be performed",2006,0, 3130,Scale Free in Software Metrics,"Software has become a complex piece of work built by the collective efforts of many, and it is often hard to predict what the final outcome will be. This transition poses new challenges to the software engineering (SE) community. By employing methods from the study of complex networks, we investigate object-oriented (OO) software metrics from a different perspective. We incorporate the weighted methods per class (WMC) metric into our definition of the weighted OO software coupling network as the node weight. Empirical results from four open source OO software systems demonstrate a power-law distribution of weight and a clear correlation between the weight and the out-degree. According to its definition, this suggests an uneven distribution of functionality among classes and a close correlation between the functionality of a class and the number of classes it depends on. Further experiments show that a similar distribution also exists between average LCOM and WMC as well as out-degree. These discoveries will help uncover the underlying mechanisms of software evolution and will be useful for SE in coping with the emergent complexity in software as well as in efficient test case design",2006,0, 3131,Proportional Intensity-Based Software Reliability Modeling with Time-Dependent Metrics,"The black-box approach based on stochastic software reliability models is a simple methodology that uses only software fault data to describe the temporal behavior of fault-detection processes, but fails to incorporate some significant development metrics data observed in the development process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics, and propose a statistical framework to assess the software reliability with the time-dependent covariate as well as the software fault data.
The resulting models are similar to the usual proportional hazard model, but possess somewhat different covariate structure from the existing one. We compare these metrics-based software reliability models with some typical non-homogeneous Poisson process models, which are the special cases of our models, and evaluate quantitatively the goodness-of-fit from the viewpoint of information criteria. As an important result, the accuracy on reliability assessment strongly depends on the kind of software metrics used for analysis and can be improved by incorporating the time-dependent metrics data in modeling",2006,0, 3132,Reference Models and Automatic Oracles for the Testing of Mesh Simplification Software for Graphics Rendering,"Software with graphics rendering is an important class of applications. Many of them use polygonal models to represent the graphics. Mesh simplification is a vital technique to vary the levels of object details and, hence, improve the overall performance of the rendering process. It progressively enhances the effectiveness of rendering from initial reference systems. As such, the quality of its implementation affects that of the associated graphics rendering application. Testing of mesh simplification is essential towards assuring the quality of the applications. Is it feasible to use the reference systems to serve as automated test oracles for mesh simplification programs? If so, how well are they useful for this purpose? We present a novel approach in this paper. We propose to use pattern classification techniques to address the above problem. We generate training samples from the reference system to test samples from the implementation. Our experimentation shows that the approach is promising",2006,0, 3133,A Technique to Reduce the Test Case Suites for Regression Testing Based on a Self-Organizing Neural Network Architecture,"This paper presents a technique to select subsets of the test cases, reducing the time consumed during the evaluation of a new software version and maintaining the ability to detect defects introduced. Our technique is based on a model to classify test case suites by using an ART-2A self-organizing neural network architecture. Each test case is summarized in a feature vector, which contains all the relevant information about the software behavior. The neural network classifies feature vectors into clusters, which are labeled according to software behavior. The source code of a new software version is analyzed to determine the most adequate clusters from which the test case subset will be selected. Experiments compared feature vectors obtained from all-uses code coverage information to a random selection approach. Results confirm the new technique has improved the precision and recall metrics adopted",2006,0, 3134,ARIMAmmse: An Improved ARIMA-based,"Productivity is a critical performance index of process resources. As successive history productivity data tends to be auto-correlated, time series prediction method based on auto-regressive integrated moving average (ARIMA) model was introduced into software productivity prediction by Humphrey et al. In this paper, a variant of their prediction method named ARIMAmmse is proposed. This variant formulates the ARIMA parameter estimation issue as a minimum mean square error (MMSE) based constrained optimization problem. The ARIMA model is used to describe constraints of the parameter estimation problem, while MMSE is used as the objective function of the constrained optimization problem. 
According to optimization theory, ARIMAmmse will definitely achieve a higher MMSE prediction precision than Humphrey et al.'s, which is based on the Yule-Walker estimation technique. Two comparative experiments are also presented. The experimental results further confirm the theoretical superiority of ARIMAmmse",2006,0, 3135,Automated Health-Assessment of Software Components using Management Instrumentation,"Software components are regularly reused in many large-scale, mission-critical systems where the tolerance for poor performance is quite low. As new components are integrated within an organization's computing infrastructure, it becomes critical to ensure that these components continue to meet the expected quality of service (QoS) requirements. Management instrumentation is an integrated capability of a software system that enables an external entity to assess that system's internals, such as its operational states, execution traces, and various quality attributes during runtime. In this paper, we present an approach that enables the efficient generation, measurement, and assessment of various QoS attributes of software components during runtime using management instrumentation. Monitoring the quality of a component in this fashion has many benefits, including the ability to proactively detect potential QoS-related issues within a component to avoid potentially expensive downtime of the overall environment. The main contributions of our approach consist of three parts: a lightweight component instrumentation framework that transparently generates a pre-defined set of QoS-related diagnostic data when integrated within a component, a method to formally define the health state of a component in terms of the expected QoS set forth by the target environment, and finally a method for publishing the QoS-related diagnostic data during runtime so that an external entity can measure the current health of a component and take appropriate actions. The main QoS types that we consider are: performance, reliability, availability, throughput, and resource usage. Experimental results show that our approach can be efficiently utilized in large mission-critical systems",2006,0, 3136,Consensus ontology generation in a socially interacting multiagent system,"This paper presents an approach for building consensus ontologies from the individual ontologies of a network of socially interacting agents. Each agent has its own conceptualization of the world. The interactions between agents are modeled by sending queries and receiving responses and later assessing each other's performance based on the results. This model enables us to measure the quality of the societal beliefs in the resources which we represent as the expertise in each domain. The dynamic nature of our system allows us to model the emergence of consensus that mimics the evolution of language. We present an algorithm for generating the consensus ontologies which makes use of the authoritative agent's conceptualization in a given domain. As the expertise of agents changes after a number of interactions, the consensus ontology that we build based on the agents' individual views evolves. We evaluate the consensus ontologies by using different heuristic measures of similarity based on the component ontologies",2006,0, 3137,QoS-Based Service Composition,"QoS has been one of the major challenges in the Web services area. Though negotiated in the contract, service quality usually cannot be guaranteed by providers.
Therefore, the service composer is obligated to detect real quality status of component services. Local monitoring cannot fulfil this task. We propose a Probe-based architecture to address this problem. By running light weighted test cases, the Probe can collect accurate quality data to support runtime service composition and composition re-planning",2006,0, 3138,Measurement-Based Analysis: A Key to Experimental Research in Dependability,"This paper reflects on years of our experience in measurement-based dependability analysis of operational systems. In particular, the discussion brings examples illustrating a key importance of operational data analysis in designing and experimental assessment of dependable computing systems",2006,0, 3139,Dynamic Derivation of Application-Specific Error Detectors and their Implementation in Hardware,"This paper proposes a novel technique for preventing a wide range of data errors from corrupting the execution of applications. The proposed technique enables automated derivation of fine-grained, application-specific error detectors. An algorithm based on dynamic traces of application execution is developed for extracting the set of error detector classes, parameters, and locations in order to maximize the error detection coverage for a target application. The paper also presents an automatic framework for synthesizing the set of detectors in hardware to enable low-overhead runtime checking of the application execution. Coverage (evaluated using fault injection) of the error detectors derived using the proposed methodology, the additional hardware resources needed, and performance overhead for several benchmark programs are also reported",2006,0, 3140,Rephrasing Rules for Off-The-Shelf SQL Database Servers,"We have reported previously (Gashi et al., 2004) results of a study with a sample of bug reports from four off-the-shelf SQL servers. We checked whether these bugs caused failures in more than one server. We found that very few bugs caused failures in two servers and none caused failures in more than two. This would suggest a fault-tolerant server built with diverse off-the-shelf servers would be a prudent choice for improving failure detection. To study other aspects of fault tolerance, namely failure diagnosis and state recovery, we have studied the ""data diversity"" mechanism and we defined a number of SQL rephrasing rules. These rules transform a client sent statement to an additional logically equivalent statement, leading to more results being returned to an adjudicator. These rules therefore help to increase the probability of a correct response being returned to a client and maintain a correct state in the database",2006,0, 3141,Model-Based Testing of Community-Driven Open-Source GUI Applications,"Although the World-Wide-Web (WWW) has significantly enhanced open-source software (OSS) development, it has also created new challenges for quality assurance (QA), especially for OSS with a graphical-user interface (GUI) front-end. Distributed communities of developers, connected by the WWW, work concurrently on loosely-coupled parts of the OSS and the corresponding GUI code. Due to the unprecedented code churn rates enabled by the WWW, developers may not have time to determine whether their recent modifications have caused integration problems with the overall OSS; these problems can often be detected via GUI integration testing. 
However, the resource-intensive nature of GUI testing prevents the application of existing automated QA techniques used during conventional OSS evolution. In this paper we develop new process support for three nested techniques that leverage developer communities interconnected by the WWW to automate model-based testing of evolving GUI-based OSS. The ""innermost"" technique (crash testing) operates on each code check-in of the GUI software and performs a quick and fully automatic integration test. The second technique (smoke testing) operates on each day's GUI build and performs functional ""reference testing"" of the newly integrated version of the GUI. The third (outermost) technique (comprehensive GUI testing) conducts detailed integration testing of a major GUI release. An empirical study involving four popular OSS shows that (1) the overall approach is useful for detecting severe faults in GUI-based OSS and (2) the nesting paradigm helps to target feedback and makes effective use of the WWW by implicitly distributing QA",2006,0, 3142,Regression Testing UML Designs,"As model driven architectures (MDAs) gain in popularity, several techniques that test the UML models have been proposed. These techniques aim at early detection and correction of faults to reduce the overall cost of correcting them later in the software life-cycle. Recently, Pilskalns et al. (2003) proposed an approach to test the UML design models to check for inconsistencies. They create an aggregate model which merges information from class diagrams, sequence diagrams and OCL statements, then generate test cases to identify inconsistencies. Since designs change often in the early stages of the software life-cycle, we need a regression testing approach that can be performed on the UML model. By classifying design changes, and then further classifying the test cases, we provide a set of rules about how to reuse part of the existing test cases, and generate new ones to ensure all affected parts of the system are tested adequately. The approach is a safe and efficient selective retest strategy. A case-study is reported to demonstrate the benefits",2006,0, 3143,The Conceptual Coupling Metrics for Object-Oriented Systems,"Coupling in software has been linked with maintainability and existing metrics are used as predictors of external software quality attributes such as fault-proneness, impact analysis, ripple effects of changes, changeability, etc. Many coupling measures for object-oriented (OO) software have been proposed, each of them capturing specific dimensions of coupling. This paper presents a new set of coupling measures for OO systems - named conceptual coupling, based on the semantic information obtained from the source code, encoded in identifiers and comments. A case study on open source software systems is performed to compare the new measures with existing structural coupling measures. The case study shows that the conceptual coupling captures new dimensions of coupling, which are not captured by existing coupling measures; hence it can be used to complement the existing metrics",2006,0, 3144,Towards Portable Metrics-based Models for Software Maintenance Problems,"The usage of software metrics for various purposes has become a hot research topic in academia and industry (e.g. detecting design patterns and bad smells, studying change-proneness, quality and maintainability, predicting faults). Most of these topics have one thing in common: they all use some kind of metrics-based model to achieve their goal.
Unfortunately, only a few researchers have tested these models on unknown software systems so far. This paper tackles the question of which metrics are suitable for preparing portable models (which can be efficiently applied to unknown software systems). We have assessed several metrics on four large software systems and we found that the well-known RFC and WMC metrics differentiate the analyzed systems fairly well. Consequently, these metrics cannot be used to build portable models, while the CBO, LCOM and LOC metrics behave similarly on all systems, so they seem to be suitable for this purpose",2006,0, 3145,A Method for an Accurate Early Prediction of Faults in Modified Classes,"In this paper we suggest and evaluate a method for predicting fault densities in modified classes early in the development process, i.e., before the modifications are implemented. We start by establishing methods that according to the literature are considered the best for predicting fault densities of modified classes. We find that these methods cannot be used until the system is implemented. We suggest our own methods, which are based on the same concept as the methods suggested in the literature, with the difference that our methods are applicable before the coding has started. We evaluate our methods using three large telecommunication systems produced by Ericsson. We find that our methods provide predictions that are of similar quality to the predictions based on metrics available after the code is implemented. Our predictions are, however, available much earlier in the development process. Therefore, they enable better planning of efficient fault prevention and fault detection activities",2006,0, 3146,Delay Time Identification and Dynamic Characteristics Study on ANN Soft Sensor,"Soft sensor software based on an ANN (artificial neural network) using BP or RBF was developed to estimate unmeasured variables such as product quality online. Some important topics, including how to determine the delay time and how to simulate the dynamic system, are discussed and solved. We applied a 3-layer BP network to identify the delay time of the nonlinear system, fed the output variables back to the input layer, and weighted all the input variables to describe the dynamic characteristics of the system. This makes the ANN soft sensor faithfully reflect both the static and dynamic characteristics of the system and provides more adaptability",2006,0, 3147,Notice of Violation of IEEE Publication Principles
An Artificial Immune Recognition System-based Approach to Software Engineering Management: with Software Metrics Selection,"Notice of Violation of IEEE Publication Principles

""An Artificial Immune Recognition System-based Approach to Software Engineering Management: with Software Metrics Selection""
by Xin Jin, Rongfang Bie, and X.Z. Gao
in the Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications, 2006, pp. 523-528

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper improperly paraphrased portions of original text from the paper cited below. The original text was paraphrased without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

""Artificial Immune Recognition System (AIRS): A Review and Analysis""
by Jason Brownlee,
in Technical Report 1-02, Center for Intelligent Systems and Complex Processes (CISCP), Faculty of Information and Communication Technologies, Swinburne University of Technology, January 2005. Artificial immune systems (AIS) are emerging machine learners, which embody the principles of natural immune systems for tackling complex real-world problems. The artificial immune recognition system (AIRS) is a new kind of supervised learning AIS. Improving the quality of software products is one of the principal objectives of software engineering. It is well known that software metrics are the key tools in software quality management. In this paper, we propose an AIRS-based method for software quality classification. We also compare our scheme with other conventional classification techniques. In addition, the gain ratio is employed to select relevant software metrics for classifiers. Results on the MDP benchmark dataset using the error rate (ER) and average sensitivity (AS) as the performance measures demonstrate that the AIRS is a promising method for software quality classification and the gain ratio-based metrics selection can considerably improve the performance of classifiers",2006,0, 3148,Empirically Validating Software Metrics for Risk Prediction Based on Intelligent Methods,"The software systems which are related to national projects are always very crucial. This kind of system always involves hi-tech factors and has to spend a large amount of money, so the quality and reliability of the software deserve to be further studied. Hence, we propose to apply three classification techniques most used in data mining fields: Bayesian belief networks (BBN), nearest neighbor (NN) and decision tree (DT), to validate the usefulness of software metrics for risk prediction. Results show that compared with metrics such as Lines of code (LOC) and Cyclomatic complexity (V(G)), which are traditionally used for risk prediction, Halstead program difficulty (D), Number of executable statements (EXEC) and Halstead program volume (V) are the more effective metrics as risk predictors. By analyzing we also found that BBN was more effective than the other two methods in risk prediction",2006,0, 3149,Stabilization Time - A Quality Metric for Software Products,"In software products, often the failure rate decreases after installation, eventually reaching a steady state. The time it takes for a product to reach its steady state reliability depends on different product parameters. In this paper we propose a new metric for software products called stabilization time, which is the time taken after installation for the reliability of the product to stabilize. This metric can be used for comparing products, and can be useful for organizations and individuals using the product as well as for the product vendor. We also present an approach for determining the stabilization time of a product from its failure and sales data. We apply the approach to three real life products using their failure and sales data after release",2006,0, 3150,Metrics-Based Software Reliability Models Using Non-homogeneous Poisson Processes,"The traditional software reliability models aim to describe the temporal behavior of software fault-detection processes with only the fault data, but fail to incorporate some significant test-metrics data observed in software testing. In this paper we develop a useful modeling framework to assess the quantitative software reliability with time-dependent covariate as well as software-fault data.
The basic ideas employed here are to introduce the discrete proportional hazard model on a cumulative Bernoulli trial process, and to represent generalized fault-detection processes having a time-dependent covariate structure. The resulting stochastic models are regarded as combinations of the proportional hazard models and the familiar non-homogeneous Poisson processes. We compare these metrics-based software reliability models with some typical non-homogeneous Poisson process models, and evaluate quantitatively both goodness-of-fit and predictive performances from the viewpoint of information criteria. As an important result, the accuracy of reliability assessment strongly depends on the kind of software metrics used for analysis and can be improved by incorporating time-dependent metrics data in modeling",2006,0, 3151,"Studying the Characteristics of a ""Good"" GUI Test Suite","The widespread deployment of graphical-user interfaces (GUIs) has increased the overall complexity of testing. A GUI test designer needs to perform the daunting task of adequately testing the GUI, which typically has very large input interaction spaces, while considering tradeoffs between GUI test suite characteristics such as the number of test cases (each modeled as a sequence of events), their lengths, and the event composition of each test case. There are no published empirical studies on GUI testing that a GUI test designer may reference to make decisions about these characteristics. Consequently, in practice, very few GUI testers know how to design their test suites. This paper takes the first step towards assisting in GUI test design by presenting an empirical study that evaluates the effect of these characteristics on testing cost and fault detection effectiveness. The results show that two factors significantly affect the fault-detection effectiveness of a test suite: (1) the diversity of states in which an event executes and (2) the event coverage of the suite. Test designers need to improve the diversity of states in which each event executes by developing a large number of short test cases to detect the majority of ""shallow"" faults, which are artifacts of modern GUI design. Additional resources should be used to develop a small number of long test cases to detect a small number of ""deep"" faults",2006,0, 3152,Assessing the Relationship between Software Assertions and Faults: An Empirical Investigation,"The use of assertions in software development is thought to help produce quality software. Unfortunately, there is scant empirical evidence in commercial software systems for this argument to date. This paper presents an empirical case study of two commercial software components at Microsoft Corporation. The developers of these components systematically employed assertions, which allowed us to investigate the relationship between software assertions and code quality. We also compare the efficacy of assertions against that of popular bug finding techniques like source code static analysis tools. We observe from our case study that with an increase in the assertion density in a file there is a statistically significant decrease in fault density. Further, the usage of software assertions in these components found a large percentage of the faults in the bug database",2006,0, 3153,Testing During Refactoring: Adding Aspects to Legacy Systems,"Moving program code that implements cross-cutting concerns into aspects can improve the maintainability of legacy systems.
This kind of refactoring, called aspectualization, can also introduce faults into a system. A test driven approach can identify these faults during the refactoring process so that they can be removed. We perform systematic testing as we aspectualize commercial VLSI CAD applications. The process of refactoring these applications revealed the kinds of faults that can arise during aspectualization, and helped us to develop techniques to reduce their occurrences",2006,0, 3154,Model Simulation for Test Execution Capacity Estimation,"It is important for a company to know if it is able to attend to clients' requests on time and according to their expectations, mainly in competitive markets. A usual activity performed to ensure quality is the test activity. Software testing is considered so important that organizations can allocate teams exclusively for test activities. In this context, it is important that test managers know the capacity of their test team. This paper defines a capacity model for test execution teams and presents the use of a simulation technique for estimating teams' capacities for executing tests in mobile applications. We also report our experiences using this model in an organizational setting",2006,0, 3155,On the Effect of Fault Removal in Software Testing - Bayesian Reliability Estimation Approach,"In this paper, we propose some reliability estimation methods in software testing. The proposed methods are based on the familiar Bayesian statistics, and can be characterized by using test outcomes in input domain models. It is shown that the resulting approaches are capable of estimating software reliability in the case where the detected software faults are removed. In numerical examples, we compare the proposed methods with the existing method, and investigate the effect of fault removal on the reliability estimation in software testing. We show that the proposed methods can give more accurate estimates of software reliability",2006,0, 3156,"Reliability and Performance of Component Based Software Systems with Restarts, Retries, Reboots and Repairs","High reliability and performance are vital for software systems handling diverse mission critical applications. Such software systems are usually component based and may possess multiple levels of fault recovery. A number of parameters, including the software architecture, behavior of individual components, underlying hardware, and the fault recovery measures, affect the behavior of such systems, and there is a need for an approach to study them. In this paper we present an integrated approach for modeling and analysis of component based systems with multiple levels of failures and fault recovery both at the software, as well as the hardware level. The approach is useful to analyze attributes such as overall reliability, performance, and machine availabilities for such systems, wherein failures may happen at the software components, the operating system, or at the hardware, and corresponding restarts, retries, reboots or repairs are used for mitigation. Our approach encompasses Markov chain and queueing network modeling for estimating system reliability, machine availabilities and performance.
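As a concrete point of reference for the Markov-chain side of such an availability analysis (a standard textbook relation, not a result taken from the paper above), a single repairable machine with constant failure rate \lambda and repair rate \mu has steady-state availability

    A = \frac{\mu}{\lambda+\mu} = \frac{\mathrm{MTTF}}{\mathrm{MTTF}+\mathrm{MTTR}}, \qquad \mathrm{MTTF} = \frac{1}{\lambda}, \quad \mathrm{MTTR} = \frac{1}{\mu},

and for a series composition of independent machines the system availability is the product A_{\mathrm{series}} = \prod_i A_i.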
The approach is helpful for designing and building better systems and also for improving existing systems",2006,0, 3157,Performance Evaluation of Contour Determination Algorithm Using Conventional and Compounded Ultrasound Medical Images,"Performance evaluation of a contour determination algorithm using both conventional ultrasound images and ultrasound images generated by multi-angle compounding imaging (MACI) is presented. A method for modelling different contour shapes with known true positions and a method for error estimation are used to compare the results in both cases. A probabilistic data association combined with interacting multiple model (IMMPDA) approach is used for the contour determination algorithm. The results are presented in figures and show that the contour determination algorithm works more robustly and more precisely with compound images",2006,0, 3158,Automatic Usability Evaluation Using AOP,Usability is one of the most important qualities of a software system. Designing for usability is a complex task and sometimes even expensive. Automatic usability evaluation can ease the evaluation process. In this paper we try to analyze where and how aspect oriented programming can be used to develop modules to support automatic usability evaluation. A small family budget application is used to test the module we have designed and implemented using aspect oriented programming,2006,0, 3159,Data acquisition equipment for measurement of the parameters for electric energy quality,"The paper presents the equipment used to measure some parameters that characterize the electric energy quality. The proposed equipment performs testing and acquisition of analogue data (U and I) and numerical data. The sampled data are recorded when preset thresholds are exceeded by the analogue inputs or when the digital input states change. The fixed variant is additionally provided with 2 analogue outputs and 8 numerical outputs. The operation of the equipment is simulated and the corresponding software is exemplified for the case of a highly distorting consumer, a set of electric energy quality parameters being determined for this case",2006,0, 3160,An Approach for Evaluating Trust in IT Infrastructure,Trustworthiness of an IT infrastructure can be justified using the concept of a trust case which denotes a complete and explicit structure encompassing all the evidence and argumentation supporting trust within a given context. A trust case is developed by making an explicit set of claims about the system of interest and showing how the claims are interrelated and supported by evidence. The approach uses the Dempster-Shafer belief function framework to quantify the trust case. We demonstrate how recommendations issued by different stakeholders enable stakeholder-specific views of the trust case and reasoning about the level of trust in a given IT infrastructure,2006,0, 3161,Measurement Techniques in On-Demand Overlays for Reliable Enterprise IP Telephony,"Maintaining good quality of service for real-time applications like IP Telephony requires quick detection and reaction to network impairments. In this paper, we propose and study novel measurement techniques in ORBIT, which is a simple, easily deployable architecture that uses single-hop overlays implemented with intelligent endpoints and independent relays. The measurement techniques provide rapid detection and recovery of IP Telephony during periods of network trouble.
We study our techniques via detailed simulations of several multi-site enterprise topologies of varying sizes and three typical fault models. We show that our proposed techniques can detect network impairments rapidly and rescue IP Telephony calls in sub-second intervals. We observed that all impacted calls were rescued with only a few relays in the network and the run-time overhead was low. Furthermore, the relay sites needed to be provisioned with minimal additional bandwidth to support the redirected calls.",2006,0, 3162,Modeling and Performance Analysis of Beyond 3G Integrated Wireless Networks,"Next-generation wireless networking is evolving towards a multi-service heterogeneous paradigm that converges different pervasive access technologies and provides a large set of novel revenue-generating applications. Hence, system complexity increases due to its embedded heterogeneity, which cannot be accounted for by existing modeling and performance evaluation techniques. Consequently, the development of new modeling approaches becomes a crucial requirement for proper system design and performance evaluation. This paper presents a novel mobility model for a two-tier integrated wireless system using a new modeling approach that accommodates the aforementioned complexity. Additionally, a novel session model is developed as an adapted version of the proposed mobility model. These models use phase-type distributions that are known to approximate any generic probability laws. Using the proposed session model, a novel generic analytical framework is developed to obtain several salient performance metrics such as network utilization times and handoff rates. Simulation and analysis results prove the validity of the proposed model and demonstrate the accuracy of the novel modeling approach when compared with traditional modeling techniques.",2006,0, 3163,Performance Analysis and Enhancement for Priority Based IEEE 802.11 Network,"In this paper, a novel non-saturation analytical model for a priority-based IEEE 802.11 network is introduced. Unlike previous work that is focused on MAC backoff for saturated stations, this model uses Markov and M/M/1/K theories to predict MAC and queuing service time and loss. Then a performance-prediction-based enhancement scheme is proposed. By dynamic tuning of protocol options, this proposed scheme limits the end-to-end delay and loss rate of real-time traffic and maximizes throughput. Consequently, call admission control is applied to protect existing traffic when the channel is saturated. Simulations validate this model and the comparison with IEEE802.11e EDCA shows that our mechanism can guarantee quality of service more efficiently.",2006,0, 3164,Field Data-based Study on Electric Arc Furnace Flicker Mitigation,"As an industrial customer of the utility, the electric arc furnace (EAF), acting as a varying and reactive power sink, is the major flicker source influencing grid power quality. In this paper, based on the field data recorded at the electric system surrounding the EAF, the flicker issues and reactive power compensation requirements are quantitatively analyzed. The possible compensation solutions, including passive filters, SVC (static Var compensator) and STATCOM (static synchronous compensator), are discussed. Furthermore, a field data-based EAF model is proposed and implemented in the simulation software. The fast-bandwidth STATCOM solutions, with and without a capacitor bank, are designed and implemented.
With the validated EAF and STATCOM model, extensive simulations are conducted to study compensator performance and cost-effectiveness for EAF flicker mitigation. Finally, a guideline for compensator selection and performance estimation is obtained",2006,0, 3165,Development of a Controller Virtual Maintenance System Through the Internet,"There are great benefits for control systems in adopting the Internet for controller maintenance. This paper aims at developing a controller virtual maintenance system via the Internet to address controller performance monitoring, controller fault detection and remote controller upgrading issues. A basic model of the Internet-based virtual maintenance system has been proposed, which is very suitable for remote monitoring and controller maintenance applications. Several essential implementation issues of the system, including system architecture, network communication architecture and Web-based user interface, have also been discussed. A water tank system with a simple PID controller has been used as a case study to illustrate the solution for remote controller maintenance",2006,0, 3166,Research on Test-platform and Condition Monitoring Method for AUV,"To improve the reliability and intelligence of autonomous underwater vehicles, an AUV test-platform named ""Beaver"" is developed. The hardware and software system structure are introduced in detail. By analyzing the performance and fault mechanisms of the thrusters, a condition monitoring system for thrusters and sensors is established based on the double closed-loop PID controller; it includes a performance model of the thrusters based on an RBF neural network and a forward model of the AUV based on an improved dynamic recursive Elman neural network, and it probes into the method of combined fault detection. The experimental results indicate that the ""Beaver"" can achieve basic motion control and meet the test requirements, and that the combined fault detection method, with the performance model and forward model connected in parallel, can detect typical faults of thrusters and sensors, which certifies the reliability and effectiveness of the condition monitoring system",2006,0, 3167,Validating Requirements Engineering Process Improvements - A Case Study,"The quality of the Requirements Engineering (RE) process plays a critical role in successfully developing software systems. Often, in software organizations, RE processes are assessed and improvements are applied to overcome their deficiency. However, such improvements may not yield desired results for two reasons. First, the assessed deficiency may be inaccurate because of ambiguities in measurement. Second, the improvements are not validated to ascertain their correctness to overcome the process deficiency. Therefore, a Requirements Engineering Process Improvement (REPI) exercise may fail to establish its purpose. A major shortfall in validating RE processes is the difficulty in representing process parameters in some cognitive form. We address this issue with an REPI framework that has both measurement and visual validation properties. The REPI validation method presented is empirically tested based on a case study in a large software organization. The results are promising towards considering this REPI validation method in practice by organizations.",2006,0, 3168,Bug Classification Using Program Slicing Metrics,"In this paper, we introduce 13 program slicing metrics for C language programs.
These metrics use program slice information to measure the size, complexity, coupling, and cohesion properties of programs. Compared with traditional code metrics based on code statements or code structure, program slicing metrics involve measures for program behaviors. To evaluate the program slicing metrics, we compare them with the Understand for C++ suite of metrics, a set of widely-used traditional code metrics, in a series of bug classification experiments. We used the program slicing and the Understand for C++ metrics computed for 887 revisions of the Apache HTTP project and 76 revisions of the Latex2rtf project to classify source code files or functions as either buggy or bug-free. We then compared their classification prediction accuracy. Program slicing metrics have slightly better performance than the Understand for C++ metrics in classifying buggy/bug-free source code. Program slicing metrics have an overall 82.6% (Apache) and 92% (Latex2rtf) accuracy at the file level, better than the Understand for C++ metrics with an overall 80.4% (Apache) and 88% (Latex2rtf) accuracy. The experiments illustrate that the program slicing metrics have at least the same bug classification performance as the Understand for C++ metrics.",2006,0, 3169,Contextual Spectrums in Technology and Information Platform,"Summary form only given. The technology and information platform (TIP) is a three-dimensional architecture framework to effectively cope with the architecture complexity and manage the architectural assets of information system applications in a service-oriented paradigm. This holistic framework comprises the generic architecture stack (GAS), which is made up of a series of architecture layers, and the contextual spectrums, which consist of the process, abstraction, latitude, and maturity (PALM) dimensions. This paper describes the detailed attributes in the four dimensions of the contextual spectrums. The process dimension covers operations, risk, financial, resources, estimation, planning, execution, policies, governance, compliance, organizational politics, etc. The abstraction dimension deals with what, why, who, where, when, which and how (6W+1H). The latitude dimension includes principles, functional, logical, physical, interface, integration & interoperability, access & delivery, security, quality of services, patterns, standards, tools, skills, and so forth. Finally the maturity dimension is about performance, metrics, competitive assessment, scorecards, capacity maturity, benchmarks, service management, productivity, gap analysis, transition, etc",2006,0, 3170,Performance Modeling of WS-BPEL-Based Web Service Compositions,This paper addresses quality of service aspects of Web service orchestrations created using WS-BPEL from the standpoint of a Web service integrator. A mathematical model based on operations research techniques and formal semantics of WS-BPEL is proposed to estimate and forecast the influence of the execution of orchestrated processes on utilization and throughput of individual involved nodes and of the whole system. This model is applied to the optimization of service levels agreement process between the involved parties,2006,0, 3171,Modeling Request Routing in Web Applications,"For Web applications, determining how requests from a Web page are routed through server components can be time-consuming and error-prone due to the complex set of rules and mechanisms used in a platform such as J2EE. 
We define request routing to be the possible sequences of server-side components that handle requests. Many maintenance tasks require the developer to understand the request routing, so this complexity increases maintenance costs. However, viewing this problem at the architecture level provides some insight. The request routing in these Web applications is an example of a pipeline architectural pattern: each request is processed by a sequence of components that form a pipeline. Communication between pipeline stages is event-based, which increases flexibility but obscures the pipeline structure because communication is indirect. Our approach for improving the maintainability of J2EE Web applications is to provide a model that exposes this architectural information. We use Z to formally specify request routing models and analysis operations that can be performed on them, then provide tools to extract request routing information from an application's source code, create the request routing model, and analyze it automatically. We have applied this approach to a number of existing applications up to 34K LOC, showing improvement via typical maintenance scenarios. Since this particular combination of patterns is not unique to Web applications, a model such as our request routing model could provide similar benefits for these systems",2006,0, 3172,An Efficient Test Pattern Selection Method for Improving Defect Coverage with Reduced Test Data Volume and Test Application Time,"Testing using n-detection test sets, in which a fault is detected by n (n > 1) input patterns, is being increasingly advocated to increase defect coverage. However, the data volume for an n-detection test set is often too large, resulting in high testing time and tester memory requirements. Test set selection is necessary to ensure that the most effective patterns are chosen from large test sets in a high-volume production testing environment. Test selection is also useful in a time-constrained wafer-sort environment. The authors use a probabilistic fault model and the theory of output deviations for test set selection - the metric of output deviation is used to rank candidate test patterns without resorting to fault grading. To demonstrate the quality of the selected patterns, experimental results were presented for resistive bridging faults and non-feedback zero-resistance bridging faults in the ISCAS benchmark circuits. Our results show that for the same test length, patterns selected on the basis of output deviations are more effective than patterns selected using several other methods",2006,0, 3173,Design Structural Stability Metrics and Post-Release Defect Density: An Empirical Study,"This paper empirically explores the correlations between a suite of structural stability metrics for object-oriented designs and post-release defect density. The investigated stability metrics measure the extent to which the structure of a design is preserved throughout the evolution of the software from one release to the next. As a case study, thirteen successive releases of Apache Ant were analyzed. The results indicate that some of the stability metrics are significantly correlated with post-release defect density. It was possible to construct statistically significant regression models to estimate post-release defect density from subsets of these metrics. 
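To make the modeling step concrete, the sketch below shows one way such a regression could be fitted in Python; the function and variable names are hypothetical, and the code is an illustrative least-squares fit under those assumptions, not the study's actual model or data.

    # Illustrative sketch: ordinary least-squares fit of post-release defect
    # density on a set of design stability metrics (hypothetical inputs).
    import numpy as np

    def fit_defect_density_model(stability_metrics, defect_density):
        """stability_metrics: (n_releases, n_metrics) array; defect_density: (n_releases,) array."""
        X = np.column_stack([np.ones(len(stability_metrics)), stability_metrics])  # add intercept column
        coef, *_ = np.linalg.lstsq(X, defect_density, rcond=None)
        return coef[0], coef[1:]  # intercept, per-metric coefficients

    def predict_defect_density(intercept, coefficients, stability_metrics):
        return intercept + np.asarray(stability_metrics) @ coefficients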
The results reveal the practical significance and usefulness of some of the investigated stability metrics as early indicators of one of the important software quality outcomes, which is post-release defect density.",2006,0, 3174,Combined software and hardware techniques for the design of reliable IP processors,"In the recent years both software and hardware techniques have been adopted to carry out reliable designs, aimed at autonomously detecting the occurrence of faults, to allow discarding erroneous data and possibly performing the recovery of the system. The aim of this paper is the introduction of a combined use of software and hardware approaches to achieve complete fault coverage in generic IP processors, with respect to SEU faults. Software techniques are preferably adopted to reduce the necessity and costs of modifying the processor architecture; since a complete fault coverage cannot be achieved, partial hardware redundancy techniques are then introduced to deal with the remaining, not covered, faults. The paper presents the methodological approach adopted to achieve the complete fault coverage, the proposed resulting architecture, and the experimental results gathered from the fault injection analysis campaign",2006,0, 3175,A Software-Based Error Detection Technique Using Encoded Signatures,"In this paper, a software-based control flow checking technique called SWTES (software-based error detection technique using encoded signatures) is presented and evaluated. This technique is processor independent and can be applied to any kind of processors and microcontrollers. To implement this technique, the program is partitioned to a set of blocks and the encoded signatures are assigned during the compile time. In the run-time, the signatures are compared with the expected ones by a monitoring routine. The proposed technique is experimentally evaluated on an ATMEL MCS51 microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected errors. The memory overhead is about 135% on average, and the performance overhead varies between 11% and 191% depending on the workload used",2006,0, 3176,Load Board Designs Using Compound Dot Technique and Phase Detector for Hierarchical ATE Calibrations,This paper presents two load board designs for hierarchical calibration of largely populated ATE. Compound dot technique and phase detector are used on both boards to provide automatic and low cost calibration of ATE with or without a single reference clock. Two different relay tree structures are implemented on the two boards with advanced board design techniques for group offset calibration. Various error sources have been identified and analyzed on both boards based on SPICE simulations and real measurements. TDR measurement compares the two approaches and shows that the two load boards give a maximum of 37ps group timing skew and can be calibrated out by the calibration software,2006,0, 3177,The Filter Checker: An Active Verification Management Approach,"Dynamic verification architectures provide fault detection by employing a simple checker processor that dynamically checks the computations of a complex processor. For dynamic verification to be viable, the checker processor must keep up with the retirement throughput of the core processor. However, the overall throughput would be limited if the checker processor is neither fast nor wide enough to keep up with the core processor. 
The authors investigate the impact of checker bandwidth on performance. As a solution for the checker's congestion, the authors propose an active verification management (AVM) approach with a filter checker. The goal of AVM is to reduce overloaded verification in the checker with a congestion avoidance policy and to minimize the performance degradation caused by congestion. Before the verification process starts at the checker processor, a filter checker marks a correctness non-criticality indicator (CNI) bit in advance to indicate how likely these pre-computed results are to be unimportant for reliability. Then AVM decides how to deal with the marked instructions by using a congestion avoidance policy. Both reactive and proactive congestion avoidance policies are proposed to skip the verification process at the checker. Results show that the proposed AVM has the potential to solve the verification congestion problem when perfect fault coverage is not needed. With no AVM, congestion at the checker badly affects performance, to the tune of 57%, when compared to that of a non-fault-tolerant processor. With good marking by AVM, the performance of a reliable processor approaches 95% of that of a non-fault-tolerant processor. Although instructions can be skipped on a random basis, such an approach reduces the fault coverage. A filter checker with a marking policy correlated with the correctness non-criticality metric, on the other hand, significantly reduces the soft error rate. Finally, the authors also present results showing the trade-off between performance and reliability",2006,0, 3178,Extending an SSI Cluster for Resource Discovery in Grid Computing,"Grid technologies enable large-scale sharing of resources within formal or informal consortia of individuals and/or virtual organizations. In these settings, the discovery, characterization, and monitoring of resources, services, and computations can be challenging due to the considerable diversity, large numbers, dynamic behavior, and geographical distribution of the entities in which a user might be interested. Hence, information services are a vital part of any grid software infrastructure, providing fundamental mechanisms for discovery and monitoring, and thus for planning and adapting application behavior. This paper proposes a resource discovery system for grid computing with fault-tolerant capabilities starting from an SSI clustering operating system. The proposed system uses dynamic leader-determination and registration mechanisms to automatically recover from node and network failures. The system is centralized and uses dynamic (or soft-state) registration to detect and recover from failures. Provisional or backup leader determination provides tolerance and recovery in the event of the leader node failing. The system was tested against a control network modeled after existing grid computing resource discovery components, such as the Globus monitoring and discovery system (MDS). In various failure scenarios, the proposed system showed better resilience and performance than the control system",2006,0, 3179,Data Processing Workflow Automation in Grid Architecture,"Because of the poor performance and the expensive license cost, traditional relational database management systems are no longer good choices for processing huge amounts of data. Grid computing is taking the place of RDBMS in data processing. Traditionally a workflow is generated by data experts, which is time consuming, labor intensive and error prone.
Moreover, it becomes a bottleneck for the overall performance of data processing in the grid architecture. This paper proposes a multi-layer workflow automation strategy that can automatically generate a workflow from a business language. A prototype has been implemented and a simulation has been designed",2006,0, 3180,On the Assessment of the Mean Failure Frequency of Software in Late Testing,"We propose an approach for assessing the mean failure frequency of a program, based on the statistical test of hypotheses. The approach can be used to establish stopping rules and evaluate the quality of a program based on its mean failure frequency during the late testing phases. Our proposal shows how to set and satisfy conservative bounds for the minimum number of test executions that are needed to achieve a target mean failure frequency with a specified level of statistical significance, based on the quality goal of testing and the specific test execution profile chosen. We relax a few assumptions of the literature, so our approach can be used in a larger set of real-life cases.",2006,0, 3181,Estimating Software Quality with Advanced Data Mining Techniques,"Current software quality estimation models often involve the use of supervised learning methods for building software fault prediction models. In such models, the dependent variable usually represents a software quality measurement indicating the quality of a module by risk-based class membership, or the number of faults. Independent variables include various software metrics such as McCabe, Error Count, Halstead, Line of Code, etc. In this paper we present the use of an advanced data mining tool called Multimethod for building a software fault prediction model. Multimethod combines different aspects of supervised learning methods in a dynamic environment and can therefore improve the accuracy of the generated prediction model. We demonstrate the use of the Multimethod tool on real data from the Metrics Data Project (MDP) Repository. Our preliminary empirical results show promising potentials of this approach in predicting software quality in a software measurement and quality dataset.",2006,0, 3182,DuoTracker: Tool Support for Software Defect Data Collection and Analysis,"In today's software industry defect tracking tools either help to improve an organization's software development process or an individual's software development process. No defect tracking tool currently exists that helps both processes. In this paper we present DuoTracker, a tool that makes it possible to track and analyze software defects for organizational and individual software process decision making. To accomplish this, DuoTracker has capabilities to classify defects in a manner that makes analysis at both the organizational and individual software process levels meaningful. The benefit of this approach is that software engineers are able to see how their personal software process improvement impacts their organization and vice versa. This paper shows why software engineers need to keep track of their program defects, how this is currently done, and how DuoTracker offers a new way of keeping track of software errors.
Furthermore, DuoTracker is compared to other tracking tools that enable software developers to record program defects that occur during their individual software processes.",2006,0, 3183,Towards an Identification Framework for Software Drifts: A Case Study,"Software drift is a common phenomenon in software development processes, which may lead to process deviation and then affect software quality. Effectively controlling negative software drifts is the key to achieving acceptable, predictable, and dependable software evolution in model-driven development. In this paper we put forward a general taxonomy to identify different drifts in development processes according to their effects on software development. Based on the taxonomy, categories of process drift and quality drift are introduced in detail. Moreover, we propose an integration framework for different drifts to realize the integration between development process and product quality. Eventually, a case study from practical project development is shown to prove the validity of our identification framework.",2006,0, 3184,Application of Computational Redundancy in Dangling Pointers Detection,"Many programmers manipulate dynamic data improperly, which may produce dynamic memory problems, such as dangling pointers. Dangling pointers can occur when a function returns a pointer to an automatic variable, or when trying to access a deleted object. The existence of dangling pointers causes programs to behave incorrectly. Dangling pointers are common defects that are easy to commit, but are difficult to discover. In this paper we propose a dynamic approach that detects dangling pointers in computer programs. Redundant computation is an execution of a program statement(s) that does not contribute to the program output. The notion of redundant computation is introduced as a potential indicator of defects in programs. We investigate the application of redundant computation in dangling pointer detection. The results of the initial experiment show that redundant computation detection can help debuggers to localize the source(s) of dangling pointers. During the experiment, we find that our approach may be capable of detecting other types of dynamic memory problems such as memory leaks and inaccessible objects, which we plan to investigate in our future research.",2006,0, 3185,Performance of MPI Parallel Applications,"The study of the performance of parallel applications may be undertaken for different reasons. One of them is the planning of the execution architecture to guarantee a quality of service. In this case, it is also necessary to study the computer hardware, the operating system, the network and the users' behavior. This paper proposes different simulation-based models for these elements. First results of this work show the impact of the network and the necessary precision of the communication model in fine-grained parallel applications.",2006,0, 3186,A New Intelligent Traffic Shaper for High Speed Networks,"In this paper, a new intelligent traffic shaper is proposed to obtain a reasonable utilization of bandwidth while preventing traffic overload in other parts of the network and, as a result, reducing the total number of packet drops in the whole network. This approach trains an intelligent agent to learn an appropriate value for the token generation rate of a Token Bucket at various states of the network.
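For reference, a minimal token-bucket shaper is sketched below; the token generation rate is exactly the parameter such a learning agent would adapt. This is an illustrative sketch only, with assumed parameter names, not code or settings from the paper.

    # Minimal token-bucket sketch (illustrative; `rate` is the tunable
    # token generation rate, `capacity` bounds the burst size).
    import time

    class TokenBucket:
        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, packet_size):
            """Return True if a packet costing `packet_size` tokens may be sent now."""
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_size:
                self.tokens -= packet_size
                return True
            return False  # otherwise the packet waits or is dropped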
In simulations, this method shows satisfactory results: it keeps the dropping probability low while injecting as many packets as possible into the network, and it minimizes the buffer occupancy at each router so that the delay incurred by packets waiting in long buffers to be sent remains as small as possible",2006,0, 3187,Bootstrapping Performance and Dependability Attributes of Web Services,"Web services are gaining momentum for developing flexible service-oriented architectures. Quality of service (QoS) issues are not part of the Web service standard stack, although non-functional attributes like performance, dependability or cost and payment play an important role in service discovery, selection, and composition. A lot of research is dedicated to different QoS models, while at the same time omitting a way to specify how QoS parameters (esp. the performance-related aspects) are assessed, evaluated and constantly monitored. Our contribution in this paper comprises: a) an evaluation approach for QoS attributes of Web services, which works in a completely service- and provider-independent way, b) a method to analyze Web service interactions by using our evaluation tool and extract important QoS information without any knowledge about the service implementation. Furthermore, our implementation allows assessing performance-specific values (such as latency or service processing time) that usually require access to the server which hosts the service. The result of the evaluation process can be used to enrich existing Web service descriptions with a set of up-to-date QoS attributes, therefore making it a valuable instrument for Web service selection",2006,0, 3188,A Lightweight Software Design Process for Web Services Workflows,"Service-oriented computing (SOC) suggests that many open, network-accessible services will be available over the Internet for organizations to incorporate into their own processes. Developing new software systems by composing an organization's local services and externally-available Web services is conceptually different from system development supported by traditional software engineering lifecycles. Consumer organizations typically have no control over the quality and/or consistency of the external services that they incorporate, thus top-down software development lifecycles are impractical. Software architects and designers will require agile, lightweight processes to evaluate tradeoffs in system design based on the ""estimated"" responsiveness of external services coupled with known performance of local services. We introduce a model-driven software engineering approach for designing systems (i.e. workflows of Web services) under these circumstances and a corresponding simulation-based evaluation tool",2006,0, 3189,Application of a Statistical Methodology to Simplify Software Quality Metric Models Constructed Using Incomplete Data Samples,"During the construction of a software metric model, incomplete data often appear in the data sample used for the construction. Moreover, the decision on whether a particular predictor metric should be included is most likely based on an intuitive or experience-based assumption that the predictor metric has an impact on the target metric with statistical significance. However, this assumption is usually not verifiable ""retrospectively"" after the model is constructed, leading to redundant predictor metric(s) and/or unnecessary predictor metric complexity.
To solve all these problems, the authors have earlier derived a methodology consisting of the k-nearest neighbors (k-NN) imputation method, statistical hypothesis testing, and a ""goodness-of-fit"" criterion. Whilst the methodology has been applied successfully to software effort metric models, it has only recently been applied to software quality metric models, which usually suffer from far more serious incomplete data. This paper documents the latter application based on a successful case study",2006,0, 3190,Object-Relational Database Metrics Formalization,"In this paper the formalization of a set of metrics for assessing the complexity of ORDBs is presented. An ontology for the SQL:2003 standard was produced, as a framework for representing the SQL schema definitions. It was represented using UML. The metrics were defined with OCL, upon the SQL:2003 ontology",2006,0, 3191,An Empirical Study on the Likelihood of Adoption in Practice of a Size Measurement Procedure for Requirements Specification,"Software size is one of the essential parameters of the estimation models used for project management purposes, and therefore being able to measure the size of software at an early stage of the development lifecycle is in theory of crucial importance. However, although some proposals for functional size measurement in industry do currently exist, there is as yet little validating evidence for such proposals. This paper describes an empirical study, based on the method evaluation model (MEM), of users' perceptions resulting from actual experience of use of a measurement procedure called RmFFP. This procedure was designed in accordance with the COSMIC-FFP standard method for estimating the functional size of object-oriented systems from requirements specifications obtained in the context of the OO-Method approach. In addition, an analysis of MEM relationships was also carried out using the regression analysis technique",2006,0, 3192,Quality Assessment of Mutation Operators Dedicated for C# Programs,"The mutation technique inserts faults in a program under test in order to assess or generate test cases, or evaluate the reliability of the program. Faults introduced into the source code are defined using mutation operators. They should be related to different features of a program, including object-oriented ones. Most research on OO mutations has been devoted to Java programs. This paper describes an analytical and empirical study performed to evaluate the quality of advanced mutation operators for C# programs. Experimental results demonstrate the effectiveness of different mutation operators. Unit test suites and functional tests were used in experiments. A detailed analysis was conducted on mutation operators dealing with delegates and exception handling",2006,0, 3193,Generating Optimal Test Set for Neighbor Factors Combinatorial Testing,"Combinatorial testing is a specification-based testing method, which can detect the faults triggered by interactions of factors. For one kind of software in which the interactions only exist between neighbor factors, this paper proposes the concept of neighbor factors combinatorial testing, presents the covering array generation algorithms for neighbor factors pair-wise (N=2) coverage, neighbor factors N-way (N≥2) coverage and variable strength neighbor factors coverage, and proves that the covering arrays generated by these three algorithms are optimal.
Finally we analyze an application scenario, which shows that this approach is very practical",2006,0, 3194,Probabilistic Adaptive Random Testing,"Adaptive random testing (ART) methods are software testing methods which are based on random testing, but which use additional mechanisms to ensure more even and widespread distributions of test cases over an input domain. Restricted random testing (RRT) is a version of ART which uses exclusion regions and restricts test case generation to outside of these regions. RRT has been found to perform very well, but its use of strict exclusion regions (from within which test cases cannot be generated) has prompted an investigation into the possibility of modifying the RRT method such that all portions of the input domain remain available for test case generation throughout the duration of the algorithm. In this paper, we present a probabilistic approach, probabilistic ART (PART), and explain two different implementations. Preliminary empirical data supporting the methods is also examined",2006,0, 3195,Detecting Malicious Manipulation in Grid Environments,"Malicious manipulation of job results endangers the efficiency and performance of grid computing applications. The presence of nodes interested in depreciating job results may be detected and minimized with the usage of fault tolerance techniques. In order to detect this kind of node, this paper presents a distributed and hierarchical diagnosis model based on comparison and reputation, which can be applied to both public and private grids. This strategy defines the status of a node according to its level of confidence, measured through its behavior. The proposed model was submitted to simulations to evaluate its effectiveness under different quotas of malicious nodes. The results reveal that 8 test rounds can detect practically all malicious nodes and even with fewer rounds, the correctness remains high without a significant overhead increase",2006,0, 3196,Hybrid Fault Detection Technique: A Case Study on Virtex-II Pro's PowerPC 405,"Hardening processor-based systems against transient faults requires new techniques able to combine high fault detection capabilities with the usual design requirements, e.g., reduced design-time, low area overhead, reduced (or null) accessibility to processor internal hardware. This paper proposes the adoption of a hybrid approach, which combines ideas from previous techniques based on software transformations with the introduction of an Infrastructure IP with reduced memory and performance overheads, to harden systems based on the PowerPC 405 core available in Virtex-II Pro FPGAs. The proposed approach targets faults affecting the memory elements storing both the code and the data, independently of their location (inside or outside the processor). Extensive experimental results including comparisons with previous approaches are reported, which allow practically evaluating the characteristics of the method in terms of fault detection capabilities and area, memory and performance overheads",2006,0, 3197,Internet Traffic Classification for Scalable QOS Provision,"A new scheme that classifies Internet traffic according to its application type for scalable QoS provision is proposed in this work. The traditional port-based classification method does not yield satisfactory performance, since the same port can be shared by multiple applications.
Furthermore, asymmetric routing and errors of modern measurement tools such as PCF and NetFlow degrade the classification performance. To address these issues, the proposed classification process consists of two steps: feature selection and classification. Candidate features that can be obtained easily by an ISP are examined. Then, we perform feature reduction so as to balance performance and complexity. As for classification, the REPTree and the bagging schemes are adopted and compared. It is demonstrated by simulations with real data that the proposed classification scheme outperforms existing techniques",2006,0, 3198,Comparison and Assessment of Improved Grey Relation Analysis for Software Development Effort Estimation,"The goal of software project planning is to provide a framework that allows project managers to make reasonable estimates of the resources. In fact, software development is highly unpredictable - only 10% of projects are on time and on budget. Thus, it is very important for software project managers to accurately and precisely estimate software development effort since the resources are limited. One of the most widely used approaches of software effort estimation is the analogy method. Since the method of analogy is constructed on the foundation of distance-based similarity, there are still some drawbacks and restrictions for application. For example, anomalous and outlying values will influence the function used to determine similarity. Contrarily, grey relational analysis (GRA) is a distinct measurement from the traditional distance scale and can dig out the realistic law from small-sample data. In this paper, we show how to apply GRA to evaluate the effort estimation results for different data sequences and to compare its accuracy with that of the analogy method. Experimental results show that GRA provides a better predictive performance than other methods. We can see that GRA is more suitable for predicting software development effort with unbalanced datasets",2006,0, 3199,An Analysis Model for Software Project Development,"The goal of software development is to build quality software that meets customers' needs on time and on budget. However, several kinds of risks, complexity and uncertainties make software development hard to control. If no appropriate project management is applied, the software project will go nowhere. In this paper, we build an analysis model for software project development from the perspective of people, tools, and materials. The analysis model can be used to reveal project status in each development phase and this helps to investigate and manage particular attributes of a software project, including cost, effort, schedule, risk, and complexity",2006,0, 3200,Predicting the Viterbi Score Distribution for a Hidden Markov Model and Application to Speech Recognition,"Hidden Markov models are used in many important applications such as speech recognition and handwriting recognition. Finding an effective HMM to fit the data is important for successful operation. Typically, the Baum-Welch algorithm is used to train an HMM, and seeks to maximize the likelihood probability of the model, which is closely related to the Viterbi score. However, random initialization causes the final model quality to vary due to locally optimum solutions. Conventionally, in speech recognition systems, models are selected from a collection of already trained models using some performance criterion.
In this paper, we investigate an alternative method of selecting models using the Viterbi score distribution. A method to determine the Viterbi score distribution is described based on Viterbi score variation with respect to the number of states (N) in the model. Our tests show that the distribution is approximately Gaussian when the number of states is greater than 3. The paper also investigates the relationship between performance and the Viterbi score's percentile value and discusses several interesting implications for Baum-Welch training",2006,0, 3201,Two-Dimensional Software Reliability Models and Their Application,"In general, the software-testing time may be measured by two kinds of time scales: calendar time and test-execution time. In this paper, we develop two-dimensional software reliability models with two-time measures and incorporate both of them to assess the software reliability with higher accuracy. Since the resulting software reliability models are based on the familiar non-homogeneous Poisson processes with two-time scales, which are the natural extensions of one-dimensional models, it is possible to treat both the time data simultaneously and effectively. We investigate the dependence of test-execution time as a testing effort on the software reliability assessment, and validate quantitatively the software reliability models with two-time scales. We also consider an optimization problem when to stop the software testing in terms of two-time measurements",2006,0, 3202,A Strategy for Verification of Decomposable SCR Models,"Formal methods for verification of software systems often face the problem of state explosion and complexity. We propose a divide and conquer methodology which leads to component-based verification and analysis of formal requirements specifications expressed using software cost reduction (SCR) models. This paper presents a novel decomposition methodology which identifies components in the given SCR specification and automates related abstraction methods. Further, we propose a verification strategy for modular and decomposable software models. Efficient verification of SCR models is achieved through the use of invariants and proof compositions. Experimental validation of our methodology brought to light the importance of modularity, encapsulation, information hiding and the avoidance of global variables in the context of formal specification models. The advantages of the compositional verification strategy are demonstrated in the analysis of the personnel access control system. Our approach offers significant savings in terms of time and memory requirements needed to perform formal system verification",2006,0, 3203,An Evaluation of Similarity Coefficients for Software Fault Localization,"Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important technique for the development of dependable software. In this paper we study different similarity coefficients that are applied in the context of a program spectral approach to software fault localization (single programming mistakes). The coefficients studied are taken from the systems diagnosis/automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). We evaluate these coefficients on the Siemens Suite of benchmark faults, and assess their effectiveness in terms of the position of the actual fault in the probability ranking of fault candidates produced by the diagnosis technique. 
Our experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned. In terms of the amount of code that needs to be inspected, this coefficient improves 5% on average over the next best technique, and up to 30% in specific cases",2006,0, 3204,An OS-level Framework for Providing Application-Aware Reliability,"The paper describes the reliability microkernel framework (RMK), a loadable kernel module for providing application-aware reliability and dynamically configuring reliability mechanisms installed in RMK. The RMK prototype is implemented in Linux and supports detection of application/OS failures and transparent application checkpointing. Experiment results show that the OS hang detection, which exploits characteristics of application and system behavior, can achieve high coverage (100% in our experiments) and low false positive rate. Moreover, the performance overhead is negligible because instruction counting is performed in hardware",2006,0, 3205,A Best Practice Guide to Resources Forecasting for the Apache Webserver,"Recently, measurement based studies of software systems proliferated, reflecting an increasingly empirical focus on system availability, reliability, aging and fault tolerance. However, it is a non-trivial, error-prone, arduous, and time-consuming task even for experienced system administrators and statistical analysis to know what a reasonable set of steps should include to model and successfully predict performance variables or system failures of a complex software system. Reported results are fragmented and focus on applying statistical regression techniques to captured numerical system data. In this paper, we propose a best practice guide for building empirical models based on our experience with forecasting Apache Web server performance variables and forecasting call availability of a real world telecommunication system. To substantiate the presented guide and to demonstrate our approach step-by-step we model and predict the response time and the amount of free physical memory of an Apache Web server system. Additionally, we present concrete results for a) variable selection where we cross benchmark three procedures, b) empirical model building where we cross benchmark four techniques and c) sensitivity analysis. This best practice guide intends to assist in configuring modeling approaches systematically for best estimation and prediction results",2006,0, 3206,Early Software Reliability Prediction with ANN Models,"It is well-known that accurate reliability estimates can be obtained by using software reliability models only in the later phase of software testing. However, prediction in the early phase is important for cost-effective and timely management. Also this requirement can be achieved with information from previous releases or similar projects. This basic idea has been implemented with nonhomogeneous Poisson process (NHPP) models by assuming the same testing/debugging environment for similar projects or successive releases. In this paper we study an approach to using past fault-related data with artificial neural network (ANN) models to improve reliability predictions in the early testing phase. Numerical examples are shown with both actual and simulated datasets. Better performance of early prediction is observed compared with original ANN model with no such historical fault-related data incorporated. 
Also, the problem of the optimal switching point from the proposed approach to the original ANN model is studied, with three numerical examples",2006,0, 3207,Concave and Convex Area on Planar Curve for Shape Defect Detection and Recognition,"A shape representation based on concave and convex area along a closed curve is presented. Curvature estimation is done to the input curve and searched for its critical points. Splitting the critical points into concave and convex critical points, the concave and convex area is computed. This technique is tested on shape defect detection of starfruit and also on shape recognition. In the first case, defect is measured with concave energy and obtained a stable measure, which is proportional with the defect. In shape recognition, starfruit's stem is identified to remove it from the starfruit shape, as it will contribute to false computation in defect measurement",2006,0, 3208,Joint Analysis and Simulation of Time Synchronization-Relative Navigation Performance of a kind of wireless TDMA Network,"In this paper we analyze the joint performance of time synchronization and relative navigation of a kind of wireless TDMA network. The network is based on a spread spectrum communication system and can support relative navigation service by continuously receiving and parsing the navigation information in position information (PI) messages. By the calculation of the relative position of other users according to the transmission delay of PI messages, a user can estimate its own position within a reasonable bias. On the other hand, users with poor time synchronization situation can improve their synchronization quality by PI messages received from other nodes in network. With the calculation of relative position and transmission delay of PI messages by Kalman filter, users can also estimate the clock bias from other users who have better synchronization situation. So we can find out that time synchronization and relative navigation quality in this TDMA network have a very close and complex relationship. Based on network simulation software OPNET, we validate the relationship between time synchronization and relative navigation and also show that geodetic reference (GR) will be greatly helpful for the time synchronization and relative navigation performance of users",2006,0, 3209,Research and Establishment of Quality Cost Oriented Software Testing Model,"In a competitive market, quality is essential to the software. Software test plays an important role during the software-developing period. Any saving on software test will greatly reduce the total cost of software. The key point of this paper is to build a software test process with the cost control management and to make tradeoff between the quality and the cost. Typical software testing models are analyzed and compared. A novel QC 3-D test model is introduced. The model combines the software test process, software quality cost and testing type. The computational formula is defined to calculate software quality cost. A software quality balance law is introduced to balance control cost and failure cost. By adjusting software test method and strategy, an organization can reach the objective of the best balance between software quality and cost. An enterprise case is studied and the result is analyzed to prove the basic law. 
The result shows an optimistic point that will prove the accuracy of the model and the law",2006,0, 3210,Test Selection and Coverage Based on CTM and Metric Spaces,"One of the main open issues in testing and in particular in conformance testing is finding a good way of measuring the quality of the testing activity. In other words, after performing a given quantity of tests, how many of the potential errors of the implementation under test (IUT) are covered. The present paper tries to build up a solution for this necessity. In (L. Feijs et al., 2002, N. Goga et al., 2004) we proposed a general framework for defining a coverage measure based on a distance measure between different behaviors of a system under test. In the current paper we apply this general theory to another domain, namely to CTM (classification tree method) that is usually used for the selection of the data for testing. As a case study we apply this technology for testing a software tool used for computer music generation",2006,0, 3211,Parity Prediction of S-Box for AES,"In this paper, we present the parity prediction approach of the S-Box for designing high performance and fault detection structures of the AES. Unlike the traditional scheme which is based on using look-up tables, we use the logical gates implementation based on the composite fields for fault detection of S-Box in AES. We find closed formulations for the output parity bits of S-Box considering the composite-field transformation matrix and its inverse in GF(2^8) as well as the affine transformation. To the best of our knowledge, no closed formulations for parity prediction of the S-Box have been proposed in the open literature",2006,0, 3212,"Uniform Crime Report ""SuperClean"" Data Cleaning Tool","The analysis of UCR data provides a basis for crime prevention in the United States as well as a sound decision making tool for policy makers. The decisions made with the use of UCR data range from major funding for resource allocation down to patrol distribution by local police departments. The FBI collects and maintains the database of the Uniform Crime Reports (UCR), from 18,000 reporting police agencies nationwide. However, many of these data sets have missing, incomplete, or incorrect data points that render crime analysis less effective. UCR experts have stated that in the current form UCR data is unreliable and sporadic. Efforts have previously been made to design a software application to correct these necessary problems, but the application was deemed insufficient due to limited portability and usability. Software requirements restricted potential users and the user interface was ineffective. However, this previous work describes the functions needed to effectively clean and assess UCR data. This paper describes the design of an application used to clean, process, and correct UCR data so that ideal policy decisions can be made. Erroneous portions of the data will be found using the outlier detection function that is based on a statistical model of anomalous behavior. These methods incorporate sponsor specifications and user requirements. This project builds upon the GRASP (geospatial repository for analysis and safety planning) project's goal of sharing information between law enforcement agencies. Eventually this application could be integrated with GRASP to form a single repository for UCR and spatial crime data. 
This paper describes how the new stand alone application will allow users to clean, correct, and process UCR data in an efficient, user-friendly manner. Formal testing provides a basis to assess the effectiveness of the application based on the metrics of time, cost, and quality. The results are the basis for improvements to the application",2006,0, 3213,Notions of Diagnosability for Timed Failure Propagation Graphs,"Timed failure propagation graphs (TFPG) are causal models that capture the temporal aspects of failure propagation in dynamic systems. This paper presents several notions of diagnosability for timed failure propagation models. Diagnosability characteristics of TFPG models are defined based on three metrics; failure detectability, distinguishability, and predictability. The paper identifies the modeling and run-time parameters of the diagnosability metrics. The application of diagnosability analysis to the problem of alarm allocation is discussed.",2006,0, 3214,Diagnositc Reasoner Model Production from Atlas Test Program Sets,"Model-based diagnostic reasoning provides improvements in fault isolation and repair time as well as cost savings provided by a reduction in false pull rates. To support legacy systems that use ATLAS test program sets for fault isolation and repair, Honeywell has developed a method for extracting diagnostic fault models from ATLAS source code. We will describe the content of the fault models and highlight model features that use data collected in a netcentric maintenance environment to optimize reasoner performance. We will then present a description of the method used to create bootstrap fault models, including a configurable ATLAS source code parser that uses regular text expressions to extract test and fault information from Test Program Sets and create fault models.",2006,0, 3215,Development of Oil Immersed Transformer Management Technologies by Using Dissolved Gas Analysis,"A newly developed diagnosis system using DGA (dissolved gas analysis) was studied for the effective management of oil-immersed transformers. It consists of the measurement system of dissolved hydrogen gas in oil and DGA software based on dissolved gases such as acetylene (C2H2), hydrogen (H2), ethylene (C2H4), methane (CH4), ethane (C2H6), carbon monoxide (CO) and carbon dioxide (CO2). The system for hydrogen gas measurement was designed using semiconductor gas sensor and polymer membrane. Artificial neural network and rule based DGA possibility in the software were used for recognition of fault causes and decision of fault",2006,0, 3216,Optimal QoS-aware Sleep/Wake Scheduling for Time-Synchronized Sensor Networks,"We study the sleep/wake scheduling problem in the context of clustered sensor networks. We conclude that the design of any sleep/wake scheduling algorithm must take into account the impact of the synchronization error. Our work includes two parts. In the first part, we show that there is an inherent tradeoff between energy consumption and message delivery performance (defined as the message capture probability in this work). We formulate an optimization problem to minimize the expected energy consumption, with the constraint that the message capture probability should be no less than a threshold. In the first part, we assume the threshold is already given. However, by investigating the unique structure of the problem, we transform the non-convex problem into a convex equivalent, and solve it using an efficient search method. 
In the second part, we remove the assumption that the capture probability threshold is already given, and study how to decide it to meet the quality of services (QoS) requirement of the application. We observe that in many sensor network applications, a group of sensors collaborate to perform common task(s). Therefore, the QoS is usually not decided by the performance of any individual node, but by the collective performance of all the related nodes. To achieve the collective performance with minimum energy consumption, intuitively we should provide differentiated services for the nodes and favor more important ones. We thus formulate an optimization problem, which aims to set the capture probability threshold for messages from each individual node such that the expected energy consumption is minimized, while the collective performance is guaranteed. The problem turns out to be non-convex and hard to solve exactly. Therefore, we use approximation techniques to obtain a suboptimal solution that approximates the optimum. Simulations show that our approximate solution significantly outperforms a scheme without differentiated treatment of the nodes.",2006,0, 3217,New Tools for Blackout Prevention,"Recent power system blackouts have heightened the concern for power system security and, therefore, reliability. However, potential security improvements which could be achieved through transmission system facility reinforcement and generation supply expansion are long-term, costly, and uncertain. As a result, more immediate and cost effective solutions to the security issue have been pursued including the development of specialized software tools which can assess power system security in near-real-time and assist operators in maintaining adequate security margins at all times thus lowering the risk of blackouts. Such software tools have been implemented in numerous power systems world-wide and are gaining popularity as a result of demonstrated benefits to system performance",2006,0, 3218,Extraction of Harmonics and Reactive Current for Power Quality Enhancement,"A detection technique for extraction of total harmonics, individual harmonics, and reactive current components is introduced and its performance is evaluated. The detection algorithm is then adopted as part of the control system of a single-phase active power filter (APF) to provide the required signals for harmonic filtering and reactive power compensation. Performance of the overall system is evaluated based on digital time-domain simulation studies. The APF control system including the signal processing algorithms are implemented in Matlab/Simulink fixed-point blockset to accommodate bit-length limitation which is a crucial factor in digital implementation. The power system including the APF, load and the supply system are simulated with the PSCAD/EMTDC software to which the Matlab-based control model is interfaced. The simulation results reflect the desired performance of the detection algorithm",2006,0, 3219,Harmonization of usability measurements in ISO9126 software engineering standards,"The measurement of software usability is recommended in ISO 9126-2 to assess the external quality of software by allowing the user to test its usefulness before it is delivered to the client. Later, during the operation and maintenance phases, usability should be maintained, otherwise the software will have to be retired. 
This then raises harmonization issues about the proper positioning of the usability characteristic: does usability really belong to the external quality view of ISO 9126-2 and should the external quality characteristic of usability be harmonized with that of the quality in use model defined in ISO 9126-1 and ISO 9126-4? This paper analyzes these two questions: first, we identify and analyze the subset of ISO 9126-2 quality subcharacteristics and measures of usability that can be useful for quality in use, and then we recommend improvements to the harmonization of these ISO 9126 models",2006,0, 3220,Distributed Anomaly Detection in Wireless Sensor Networks,"Identifying misbehaviors is an important challenge for monitoring, fault diagnosis and intrusion detection in wireless sensor networks. A key problem is how to minimize the communication overhead and energy consumption in the network when identifying misbehaviors. Our approach to this problem is based on a distributed, cluster-based anomaly detection algorithm. We minimize the communication overhead by clustering the sensor measurements and merging clusters before sending a description of the clusters to the other nodes. In order to evaluate our distributed scheme, we implemented our algorithm in a simulation based on the sensor data gathered from the Great Duck Island project. We demonstrate that our scheme achieves comparable accuracy compared to a centralized scheme with a significant reduction in communication overhead",2006,0, 3221,"""Warehouse-Sized Workloads""","
",2006,0, 3222,Implementation Aspects of Software Development Projects,"The software industry has been plagued by the staggering failure rate of projects, which have resulted in the loss of billions of dollars. The industry has realized that the solution lies in fixing the software process. Hence the processes have been built for development of the systems. The implementation process has still not gained that much importance. Hence problems are faced in moving the systems to production. This paper talks of the implementation/operational aspects. The failure of projects is now being owed to customer not having the correct vision & strategy for the developed software product. The customer blames it to the IT company for developing the 'not so good' product. The fix lies in the IT companies to handhold with the customers on this aspect, guide them throughout on the process of implementation, extend its support for implementation of the product with getting the right involvement and support from the customer during implementation. The implementation of the developed software product is a collaborative effort between the customer and the developer",2006,0, 3223,StegoBreaker: Audio Steganalysis using Ensemble Autonomous Multi-Agent and Genetic Algorithm,"The goal of steganography is to avoid drawing suspicion to the transmission of a hidden message in multi-medium. This creates a potential problem when this technology is misused for planning criminal activities. Differentiating anomalous audio document (stego audio) from pure audio document (cover audio) is difficult and tedious. Steganalytic techniques strive to detect whether an audio contains a hidden message or not. This paper investigates the use of genetic algorithm (GA) to aid autonomous intelligent software agents capable of detecting any hidden information in audio files, automatically. This agent would make up the detection agent in an architecture comprising of several different agents that collaborate together to detect the hidden information. The basic idea is that, the various audio quality metrics (AQMs) calculated on cover audio signals and on stego-audio signals vis-a-vis their denoised versions, are statistically different. GA employs these AQMs to steganalyse the audio data. The overall agent architecture will operate as an automatic target detection (ATD) system. The architecture of ATD system is presented in this paper and it is shown how the detection agent fits into the overall system. The design of ATD based audio steganalyzer relies on the choice of these audio quality measures and the construction of a GA based rule generator, which spawns a set of rules that discriminates between the adulterated and the untouched audio samples. Experimental results show that the proposed technique provides promising detection rates",2006,0, 3224,Unsupervised Contextual Keyword Relevance Learning and Measurement using PLSA,"In this paper, we have developed a probabilistic approach using PLSA for the discovery and analysis of contextual keyword relevance based on the distribution of keywords across a training text corpus. We have shown experimentally, the flexibility of this approach in classifying keywords into different domains based on their context. We have developed a prototype system that allows us to project keyword queries on the loaded PLSA model and returns keywords that are closely correlated. 
The keyword query is vectorized using the PLSA model in the reduced aspect space and correlation is derived by calculating a dot product. We also discuss the parameters that control PLSA performance including a) number of aspects, b) number of EM iterations, and c) weighting functions on TDM (pre-weighting). We have estimated the quality through computation of precision-recall scores. We have presented our experiments on PLSA application towards document classification",2006,0, 3225,Modified Smith Predictor for Network Congestion Control,"In this paper, a novel active queue management (AQM) control law using a modified Smith predictor is proposed. The proposed method provides a stable response for TCP flow dynamics which are highly oscillatory or unstable under practical conditions. The controller decouples the setpoint response from the load response. The proposed control scheme can track the desired buffer level and reject any additional load disturbance such as unknown buffer depletion rate. Simulation results show high performance for the setpoint response when the parameters of TCP flow dynamics are known",2006,0, 3226,Proactive Admission Control for IP Networks using Teletraffic Engineering Model,"The paper proposes an admission control scheme which reserves network resources on a proactive basis. The current flow arrival and departure patterns are used to determine the future demand for network resources using the telephone networks Erlang-B model. The Erlang model derives the traffic-QoS relationship in terms of the blocking probability of the network resource given the network capacity, its current utilization and future demand. The same principle is remodeled here to suit the IP network. The blocking probability thus measured is used as a flow admission decision parameter. The effectiveness of the scheme over the other flow admission control algorithms is determined here. Detailed simulations of the proposed algorithm result in a higher admission rate and higher bottleneck link utilization at a comparatively lower overhead traffic",2006,0, 3227,Range Detection Approach in Interactive Virtual Heritage Walkthrough,"Exploring a virtual world will give a memorable experience to the users. Generally, they are expecting smooth navigation with high quality images drawn into their eyes. Nevertheless, when the virtual world is growing simultaneously with the number of objects, the navigation will be detained. Yet, the frame rates can become unacceptable. In this paper, we present and examine a range detection approach for view frustum culling to effectively speed-up the interactive virtual heritage project namely Ancient Malacca Virtual Walkthrough. The normal six-plane view frustum culling approach uses the plane equation to test whether objects are within the visible region. However, the range detection approach is different from the normal six-plane view frustum approach. It is based on a camera referential point and tests the objects if they are in the viewing range. The results show that this approach is able to increase the rendering performance of the virtual walkthrough without sacrificing the visual quality.",2006,0, 3228,Blocking vs. Non-Blocking Coordinated Checkpointing for Large-Scale Fault Tolerant MPI,"A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault tolerant programming environments should be used to guarantee the safe execution of critical applications. 
Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and non-blocking. However, they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks",2006,0, 3229,SPDW: A Software Development Process Performance Data Warehousing Environment,"Metrics are essential in the assessment of the quality of software development processes (SDP). However, the adoption of a metrics program requires an information system for collecting, analyzing, and disseminating measures of software processes, products and services. This paper describes SPDW, an SDP data warehousing environment developed in the context of the metrics program of a leading software operation in Latin America, currently assessed as CMM Level 3. The SPDW architecture encompasses: 1) automatic project data capturing, considering different types of heterogeneity present in the software development environment; 2) the representation of project metrics according to a standard organizational view; and 3) analytical functionality that supports process analysis. The paper also describes current implementations, and reports experiences on the use of SPDW by the organization",2006,0, 3230,Design Space Exploration for an Adaptive Noise Cancellation Algorithm,"The most difficult aspect of system-level design consists in making a good selection among the multiplicity of options in the task level specification. This process relies heavily on the quality of the area, execution time and power consumption measures; the usefulness of the generated partition depends on how accurate these estimation measures are. The aim of this paper is to probe a general sequence of actions to obtain these values. The approximation has consisted of the specification of the problem in ANSI C and its transformation into assembler code for several DSP platforms and into RTL VHDL synthesizable code as input to automatic synthesis tools. The outputs of the estimated measures obtained when the sequence of actions is completed feed the partitioning phase. Also, the assembler and RTL VHDL code chosen may be reused in the cosynthesis stage. A noise canceller filter as an example of application is presented",2006,0, 3231,Predictive Performance Analysis of a Parallel Pipelined Synchronous Wavefront Application for Commodity Processor Cluster Systems,"This paper details the development and application of a model for predictive performance analysis of a pipelined synchronous wavefront application running on commodity processor cluster systems. The performance model builds on existing work (Cao et al.) by including extensions for modern commodity processor architectures. These extensions, including coarser hardware benchmarking, prove to be essential in countering the effects of modern superscalar processors (e.g. 
multiple operation pipelines and on-the-fly optimisations), complex memory hierarchies, and the impact of applying modern optimising compilers. The process of application modelling is also extended, combining static source code analysis with run-time profiling results for increased accuracy. The model is validated on several high performance SMP systems and the results show a high predictive accuracy (≤ 10% error). Additionally, the use of the performance model to speculate on the performance and scalability of this application on a hypothetical cluster with two different problem sizes is demonstrated. It is shown that such speculative techniques can be used to support system procurement, run-time verification and system maintenance and upgrading",2006,0, 3232,Application of Static Transfer Switch for Feeder Reconfiguration to Improve Voltage at Critical Locations,"The main objective of this work was to assess and evaluate the performance of static transfer switch (STS) for feeder reconfiguration. Two particular network feeders namely preferred and alternate were selected for simulation studies. Both feeders belong to IESCO system (Islamabad Electric Supply Company, Pakistan). The sensitive loads are fed by preferred feeder but in case of disturbances, the loads are transferred to alternate feeder. Different simulation cases were performed for optimum installation of STS to obtain the required voltage quality. The simulations are performed using the PSCAD/EMTDC package",2006,0, 3233,On the Use of Behavioral Models for the Integrated Performance and Reliability Evaluation of Fault-Tolerant Avionics Systems,"In this paper, the authors propose an integrated methodology for the reliability and performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers when designing the control system, but incorporates additional artifacts to model the failure behavior of the system components. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. After all system configurations have been evaluated, the values of the performance metrics for each configuration and the probabilities of going from the nominal configuration (no component failures) to any other configuration are merged into a set of probabilistic measures of performance. To illustrate the methodology, and to introduce a tool that the authors developed in MATLAB/SIMULINK® that supports this methodology, the authors present a case-study of a lateral-directional flight control system for a fighter aircraft",2006,0, 3234,Multiple Description Scalar Quantization Based 3D Mesh Coding,"In this paper, we address the problem of 3D model transmission over error-prone channels using multiple description coding (MDC). The objective of MDC is to encode a source into multiple bitstreams, called descriptions, supporting multiple quality levels of decoding. 
Compared to layered coding techniques, each description can be decoded independently to approximate the model. In the proposed approach, the mesh geometry is compressed using multiresolution geometry compression. Then multiple descriptions are obtained by applying multiple description scalar quantization (MDSQ) to the obtained wavelet coefficients. Experimental results show that, the proposed approach achieves competitive compression performance compared with existing multiple description methods.",2006,0, 3235,2D Frequency Selective Extrapolation for Spatial Error Concealment in H.264/AVC Video Coding,"The frequency selective extrapolation extends an image signal beyond a limited number of known samples. This problem arises in image and video communication in error prone environments where transmission errors may lead to data losses. In order to estimate the lost image areas, the missing pixels are extrapolated from the available correctly received surrounding area which is approximated by a weighted linear combination of basis functions. In this contribution, we integrate the frequency selective extrapolation into the H.264/AVC coder as spatial concealment method. The decoder reference software uses spatial concealment only for I frames. Therefore, we investigate the performance of our concealment scheme for I frames and its impact on following P frames caused by error propagation due to predictive coding. Further, we compare the performance for coded video sequences in TV quality against the non-normative concealment feature of the decoder reference software. The investigations are done for slice patterns causing chequerboard and raster scan losses enabled by flexible macroblock ordering (FMO).",2006,0, 3236,Development of Defect Classification Algorithm for POSCO Rolling Strip Surface Inspection System,"Surface inspection system (SIS) is an integrated hardware-software system which automatically inspects the surface of the steel strip. It is equipped with several cameras and illumination over and under the steel strip roll and automatically detects and classifies defects on the surface. The performance of the inspection algorithm plays an important role in not only quality assurance of the rolled steel product, but also improvement of the strip production process control. Current implementation of POSCO SIS has good ability to detect defects, however, classification performance is not satisfactory. In this paper, we introduce POSCO SIS and suggest a new defect classification algorithm which is based on support vector machine technique. The suggested classification algorithm shows good classification ability and generalization performance",2006,0, 3237,A New Approach for Induction Motor Broken Bar Diagnosis by Using Vibration Spectrum,"Different methods for detecting broken bars in induction motors can be found in literature. Many of these methods are based on evaluating special frequency magnitudes in machine signals spectrums. Current, power, flux, etc are among these signals. Frequencies related to broken rotor fault are dependent on slip; therefore, correct diagnosis of fault depends on accurate determination of motor velocity and slip. The traditional methods typically require several sensors that should be pre-installed in some cases. A practical diagnosis method should be easily performed in site and does not take too much time. This paper presents a diagnosing method based on only a vibration sensor. 
Motor velocity oscillation due to a broken rotor causes frequency components at twice slip frequency difference around speed frequency in vibration spectrum. Speed frequency and its harmonics as well as twice supply frequency, can easily and accurately be found in vibration spectrum, therefore the motor slip can be computed. Now components related to rotor fault can be found. According to this method, an apparatus consisting of the necessary hardware and software has been designed. Experimental tests have confirmed the efficiency of the method",2006,0, 3238,Characterization of Diagnosis Algorithms for Real-time Scheduling on Distributed Processors in Support to e-Maintenance Services,"This paper presents a characterization study of agent algorithms of distributed diagnosis, for real-time scheduling and on-line implementation. These parameters will be used to optimize the strategies of real-time distributed diagnosis. The characterization of execution parameters of an agent algorithm must satisfy the diagnosis time constraints related to the availability objective of each equipment, defined in the predictive maintenance service policy. Hence, we propose to define the time characteristics and constraints, in the case of a networked distributed system architecture. The diagnosis agent algorithms are then characterized for evaluating their performances, following criteria relating to the real-time execution",2006,0, 3239,Traffic and Network Engineering in Emerging Generation IP Networks: A Bandwidth on Demand Model,"This paper assesses the performance of a network management scheme where network engineering (NE) is used to complement traffic engineering (TE) in a multi-layer setting where a data network is layered above an optical network. We present a TE strategy which is based on a multi-constraint optimization model consisting of finding bandwidth-guaranteed IP tunnels subject to contention avoidance minimization and bandwidth usage maximization constraints. The TE model is complemented by a NE model which uses a bandwidth trading mechanism to rapidly re-size and re-optimize the established tunnels (LSPs/λSPs) under quality of service (QoS) mismatches between the traffic carried by the tunnels and the resources available for carrying the traffic. The resulting TE+NE strategy can be used to achieve bandwidth on demand (BoD) in emerging generation IP networks using a (G)MPLS-like integrated architecture in a cost effective way. We evaluate the performance of this hybrid strategy when routing, re-routing and re-sizing the tunnels carrying the traffic offered to a 23-node test network.",2006,0, 3240,Software Quality in Ladder Programming,"This paper aims to measure the software quality for programmable logic controllers (PLCs) especially in ladder programming. The proposed quality metrics involve the criteria of simplicity, reconfigurability, reliability, and flexibility. A fuzzy inference algorithm is developed to select the best controller design among different ladder programs for the same application. A single tone membership function is used to represent the quality metric per each controller. The fitness of each controller is represented by the minimum value of all evaluated criteria. Thereafter, a min-max fuzzy inference is applied to take the decision (which controller is the best). The developed fuzzy assessment algorithm is applied to a conveyor belt module connected to a PLC to perform a repeated sequence. 
The decision to select the best ladder program is obtained using the fuzzy assessment algorithm. The obtained results affirmed the potential of the proposed algorithm to assess the quality of the designed ladder programs",2006,0, 3241,Forming Groups for Collaborative Learning in Introductory Computer Programming Courses Based on Students' Programming Styles: An Empirical Study,"This paper describes and evaluates an approach for constructing groups for collaborative learning of computer programming. Groups are formed based on students' programming styles. The style of a program is characterized by simple well known metrics, including length of identifiers, size and number of modules (functions/procedures), and numbers of indented, commented and blank lines. A tool was implemented to automatically assess the style of programs submitted by students. For evaluating the tool and approach used to construct groups, some experiments were conducted involving information systems students enrolled in a course on algorithms and data structures. The experiments showed that collaborative learning was very effective for improving the programming style of students, particularly for students that worked in heterogeneous groups (formed by students with different levels of knowledge of programming style)",2006,0, 3242,One-Way Delay Measurement: State of Art,"Accurate OWD (one-way delay) measurement has a relevant role in network performance estimation and consequently in application design. In order to estimate OWD accurately it is essential to consider some parameters which influence the measure, such as the operating system (OS) loaded on the PC, the threads, and the PC synchronization in the network. Due to the large research interest towards synchronization aspects, the authors have chosen to consider this parameter first. For this reason, three synchronization methods based on NTP (network time protocol), on GPS (global positioning system), and on IEEE 1588 standard are described and compared showing the advantages and disadvantages of the analyzed methods",2006,0, 3243,A Proposal and Empirical Validation of Metrics to Evaluate the Maintainability of Software Process Models,"Software measurement is essential for understanding, defining, managing and controlling the software development and maintenance processes and it is not possible to characterize the various aspects of development in a quantitative way without having a deep understanding of software development activities and their relationships. The current competitive marketplace calls for the continuous improvement of processes and as a consequence companies have to change their processes in order to adapt to these new emerging needs. It implies the continuous change of their software process models and therefore, it is fundamental to facilitate the evolution of these models by evaluating their easiness of maintenance (maintainability). In this paper we introduce a set of metrics for software process models and discuss how these can be used as maintainability indicators. In particular, we report the results of a family of experiments that assess relationships between the structural properties, as measured by the metrics, of the process models and their maintainability. As a result, a set of useful metrics to evaluate the maintainability of software process models has been obtained",2006,0, 3244,Parametric Models for Speech Quality Estimation in GSM Networks,"Speech quality assessment is an emergent QoS aspect in cellular network monitoring. 
This contribution deals with the estimation of perceived speech quality in GSM networks, by means of parametric models. Our goal is to provide a model for MOS estimation, based on parameters available in the operation and maintenance centre (OMC) and in the measurement reports from mobile stations. Specifically, based on MOS values from a large database obtained performing intrusive tests in a real GSM network, and on the correspondent measured transmission parameters, an extensive data analysis has been conducted. The correlation coefficients of GSM parameters with the objective speech quality have been then maximized, and a simple linear model has been proposed. Moreover, handover events have been considered and their impact on perceived speech quality has been proved",2006,0, 3245,A Hybrid Fault-Tolerant Algorithm for MPLS Networks,In this paper we present a novel fault-tolerant algorithm for use in MPLS based networks. The algorithm employs both protection switching and path rerouting techniques and satisfies four selected performance criteria,2006,0, 3246,Fault Tolerant Job Scheduling in Computational Grid,"In large-scale grids, the probability of a failure is much greater than in traditional parallel systems [1]. Therefore, fault tolerance has become a crucial area in grid computing. In this paper, we address the problem of fault tolerance in terms of resource failure. We devise a strategy for fault tolerant job scheduling in a computational grid. The proposed strategy maintains a history of the fault occurrence of resources in the grid information service (GIS). Whenever a resource broker has a job to schedule, it uses the resource fault occurrence history information from GIS and, depending on this information, uses different intensities of checkpointing and replication while scheduling the job on resources which have different tendencies towards faults. Using checkpointing, the proposed scheme can make grid scheduling more reliable and efficient. Further, it increases the percentage of jobs executed within the specified deadline and allotted budget, hence helping in making the grid trustworthy. Through simulation we have evaluated the performance of the proposed strategy. The experimental results demonstrate that the proposed strategy effectively schedules the grid jobs in a fault tolerant way in spite of the highly dynamic nature of the grid",2006,0, 3247,"Grid Performance Prediction: Requirements, Framework, and Models","Grid performance prediction (GPP) is critical for grid middleware component services and grid (middleware) users to make optimized grid resource usage decisions to meet QoS requirements committed in SLAs. Other sophisticated concepts in grid like multi-criteria scheduling, grid capacity planning, and grid advance reservation are also dependent on GPP. GPP spreads over all levels of grid resources in multiple dimensions. We explore it at different coarse grain levels to discover the most wanted grid performance prediction needs of different human and service clients of grid and/or grid services, and present a comprehensive grid performance prediction framework to support at fine grain levels. 
We also provide a broad architecture of ALP (application level prediction) and NLP (network level prediction) for our grid middleware",2006,0, 3248,"Fault Tolerance using ""Parallel Shadow Image Servers (PSIS)"" in Grid Based Computing Environment","This paper presents a critical review of the existing fault tolerance mechanism in grid computing and the overhead involved in terms of reprocessing or rescheduling of jobs, in case a fault arises. For this purpose we suggested the parallel shadow image server (PSIS) copying techniques in parallel to the resource manager for having the check points for rescheduling of jobs from the nearest flag, in case a fault is detected. The job process is to be scheduled from the resource manager node to the worker nodes and then it is submitted back by the worker nodes in serialized form to the parallel shadow image servers from the worker nodes after the pre-specified amount of time, which we call the recent spawn or the flag check point for rescheduling or reprocessing of job. If a fault arises, then the rescheduling is done from the recent check point and submitted to the worker node from where the job was terminated. This will not only save time but will improve the performance up to a major extent",2006,0, 3249,Building Scalable Failure-proneness Models Using Complexity Metrics for Large Scale Software Systems,"Building statistical models for estimating failure-proneness of systems can help software organizations make early decisions on the quality of their systems. Such early estimates can be used to help inform decisions on testing, refactoring, code inspections, design rework etc. This paper demonstrates the efficacy of building scalable failure-proneness models based on code complexity metrics across the Microsoft Windows operating system code base. We show the ability of such models to estimate failure-proneness and provide feedback on the complexity metrics to help guide refactoring and the design rework effort.",2006,0, 3250,An Approach of a Technique for Effort Estimation of Iterations in Software Projects,"The estimation of effort and cost is still one of the hardest tasks in software project management. At the moment, there are many techniques to accomplish this task, such as function points, use case points and COCOMO, but there is not much information available about how to use those techniques in non-waterfall software lifecycles such as iterative or spiral lifecycle projects. This paper shows the results obtained when applying a technique to estimate the effort of each construction iteration in software development projects that use iterative-incremental lifecycles. The results were obtained from software projects of a fourth-year course in Informatics. The technique proposes the use of Function Points and COCOMO II.",2006,0, 3251,Software-based Packet Classification in Network Intrusion Detection System using Network Processor,"As computer networking grows more important in daily usage, its security is also paramount. A network intrusion detection system (NIDS) observes network traffic for identifying malicious packets. Core to NIDS function is the packet classification component. It is in charge of scanning network packet headers. An improved packet classification component bears direct result for NIDS performance. This paper discusses a technique to improve packet classification through the use of a Bloom filter and hash table lookup. 
Because packet classification is an important function for other networking infrastructure, for instance firewalls, quality of service, and multimedia communication, an improved packet classification scheme could benefit applications in related areas",2006,0, 3252,Considering Both Failure Detection and Fault Correction Activities in Software Reliability Modeling,"Software reliability is widely recognized as one of the most significant aspects of software quality and is often determined by the number of uncorrected software faults in the system. In practice, it is essential for fault correction prediction, because this correction process consumes a heavy amount of time and resources to predict whether reliability goals have been achieved. Therefore, in this paper we discuss a general framework of the modeling of the failure detection and fault correction process. Under this general framework, we not only verify the existing non-homogeneous Poisson process (NHPP) models but also derive several new NHPP models. In addition, we show that these approaches cover a number of well-known models under different conditions. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes",2006,0, 3253,Efficient Mutant Generation for Mutation Testing of Pointcuts in Aspect-Oriented Programs,"Fault-based testing is an approach where the designed test data is used to demonstrate the absence of a set of prespecified faults, typically being frequently occurring faults. Mutation testing is a fault-based testing technique used to inject faults into an existing program, i.e., a variation of the original program and see if the test suite is sensitive enough to detect common faults. Aspect-oriented programming (AOP) provides new modularization of software systems by encapsulating crosscutting concerns. AspectJ, a language designed to support AOP, uses abstractions like pointcuts, advice, and aspects to achieve AOP's primary functionality. Developers tend to write pointcut expressions with incorrect strength, thereby selecting more events than intended or leaving out necessary events. This incorrect strength causes aspects, the set of crosscutting concerns, to fail. Hence there is a need to test the pointcuts for their strength. Mutation testing of pointcuts includes two steps: creating effective mutants (variations) of a pointcut expression and testing these mutants using the designed test data. The number of mutants for a pointcut expression is usually large due to the usage of wildcards. It is tedious to manually identify effective mutants that are of appropriate strength and resemble closely the original pointcut expression. Our framework automatically generates mutants for a pointcut expression and identifies mutants that resemble closely the original expression. Then the developers could use the test data for the woven classes against these mutants to perform mutation testing.",2006,0, 3254,A High-Performance VLSI Architecture for Intra Prediction and Mode Decision in H.264/AVC Video Encoding,"The authors propose a high-performance hardware accelerator for intra prediction and mode decision in H.264/AVC video encoding. They use two intra prediction units to increase the performance. Taking advantage of function similarity and data reuse, the authors successfully reduce the hardware cost of the intra prediction units. Based on a modified mode decision algorithm, the design can deliver almost the same video quality as the reference software. 
The authors implemented the proposed architecture in Verilog and synthesized it targeting towards a TSMC 0.13 μm CMOS cell library. Running at 75MHz, the 36K-gate circuit is capable of realtime encoding 720p HD (1280×720) video sequences at 30 frames per second (fps)",2006,0, 3255,Error and Rate Joint Control for Wireless Video Streaming,"In this paper, a precise error-tracking scheme for robust transmission of real-time video streaming over a wireless IP network is presented. By utilizing negative acknowledgements from the feedback channel, the encoder can precisely calculate and track the propagated errors by examining the backward motion dependency. With this precise tracking, the error-propagation effects can be terminated completely by INTRA refreshing the affected macroblocks. In addition, because a large number of INTRA macroblock refreshes will entail a large increase of the output bit rate of a video encoder, several bit rate reduction techniques are proposed. They can be jointly used with the INTRA refresh scheme to obtain uniform video quality performance instead of only changing the quantization scale. The simulations show that both control strategies yield significant video quality improvements in error-prone environments",2006,0, 3256,Design of Media Gateway Controller in 3G Softswitch,"MGCP is a very important protocol in SG, and the quality of design of MGC has a critical impact on the performance of the 3G core network. This paper studies the mechanism of MGCP and proposes a design scheme of MGC; according to our analysis, we propose an implementation prototype of MGC",2006,0, 3257,QRP01-6: Resource Optimization Subject to a Percentile Response Time SLA for Enterprise Computing,"We consider a set of computer resources used by a service provider to host enterprise applications subject to service level agreements. We present an approach for resource optimization in such an environment that minimizes the total cost of computer resources used by a service provider for an enterprise application while satisfying the QoS metric that the response time for executing service requests is statistically bounded. That is, γ% of the time the response time is less than a pre-defined value. This QoS metric is more realistic than the mean response time typically used in the literature. Numerical results show the applicability of the approach and validate its accuracy.
Moreover, the paper discusses some simulation results that show a performance comparison between path protection and path restoration strategies in a simulation scenario matching the Geant network.",2006,0, 3259,SAT02-2: Modulation Classification of MPSK for Space Applications,"Space missions are being developed with increasing levels of autonomy, as a means to increase their science return, fault-tolerance, and longevity. In this paper, we present NASA's low-complexity techniques to autonomously identify the modulation order of an M-ary phase-shift-keying signal in the presence of additive white Gaussian noise. This is part of a larger program to develop an autonomous radio that receives a signal without a priori knowledge of its modulation type, modulation index, data rate, and pulse shape. For classification between binary and quaternary phase-shift keying, these techniques represent a lopsided trade-off in which a factor of 100 reduction in complexity results in less than a 0.2 dB loss in performance relative to the maximum likelihood modulation classifier.",2006,0, 3260,SPCp1-01: Voice Activity Detection for VoIP-An Information Theoretic Approach,"Voice enabled applications over the Internet are rapidly gaining popularity. Reducing the total bandwidth requirement can make a non-trivial difference for the subscribers having low speed connectivity. Voice activity detection algorithms for VoIP applications can save bandwidth by filtering the frames that do not contain speech. In this paper we introduce a novel technique to identify the voice and silent regions of a speech stream very much suitable for VoIP calls. We use an information theoretic measure, called spectral entropy, for differentiating the silence from the speech zones. Specifically we developed a heuristic approach that uses an adaptive threshold to minimize the miss detection in the presence of noise. The performance of our approach is compared with the relatively new 3GPP TS 26.194 (AMR-WB) standard, along with the listeners' intelligibility rating. Our algorithm yields comparatively better saving in bandwidth, yet maintaining good quality of the speech streams.",2006,0, 3261,WLC12-5: A TDMA-Based MAC Protocol for Industrial Wireless Sensor Network Applications using Link State Dependent Scheduling,"Existing TDMA-based MAC protocols for wireless sensor networks are not specifically built to consider the harsh conditions of industrial environments where the communication channel is prone to signal fading. We propose a TDMA-based MAC protocol for wireless sensor networks built for industrial applications that uses link state dependent scheduling. In our approach, nodes gather samples of the channel quality and generate prediction sets from the sample sets in independent slots. Using the prediction sets, nodes only wake up to transmit/receive during scheduled slots that are predicted to be clear and sleep during scheduled slots that may potentially cause a transmitted signal to fade. We simulate our proposed protocol and compare its performance with a general non-link state dependent TDMA protocol and a CSMA protocol. We found that our protocol significantly improves packet throughput as compared to both the general non-link state dependent TDMA protocol and CSMA protocol. 
We also found that in conditions which are not perfect under our assumptions, the performance of our protocol degrades gracefully.",2006,0, 3262,WLC19-4: Effective AP Selection and Load Balancing in IEEE 802.11 Wireless LANs,"IEEE 802.11 wireless LANs have been widely deployed. How to efficiently balance the traffic loads among access points (APs), which will lead to improved network utilization and higher quality of user experience, has become an important issue. A key challenge is how to select an AP to use during the handoff process in a way that will achieve overall load balance in the network. The conventional approaches typically use signal-to-noise ratio (SNR) as the criterion without considering the load of each AP. In this paper, we propose two effective new AP selection algorithms. These algorithms estimate the AP traffic loads by observing and estimating the IEEE 802.11 frame delays and use the results to determine which AP to use. The algorithms can be implemented in software on mobile stations alone. We will show that better performance can be achieved when the APs provide assistance in delay measurements; however, the improvement is not significant. There is no need to exchange information among APs. The proposed algorithms are fully compatible with the existing standards and systems and they are easy to implement. We will present extensive simulation results to show that the proposed algorithms can significantly increase overall system throughput and reduce average frame delay.",2006,0, 3263,P2D-3 Objective Performance Testing and Quality Assurance of Medical Ultrasound Equipment,"The goal of this study was to develop a test protocol that contains the minimum set of performance measurements for predicting the clinical performance of ultrasound equipment and that is based on objective assessments by computerized image analysis. The post-processing look-up-table (LUT) is measured and linearized. The elevational focus (slice thickness) of the transducer is estimated and the in plane transmit focus is positioned at the same depth. The developed tests are: echo level dynamic range (dB), contrast resolution (i.e., ""gamma"" of display, #gray levels/dB) and -sensitivity, overall system sensitivity, lateral sensitivity profile, dead zone, spatial resolution, and geometric conformity of display. The concept of a computational observer is used to define the lesion signal-to-noise ratio, SNRL (or Mahalanobis distance), as a measure for contrast sensitivity. The whole performance measurement protocol has been implemented in software. Reports are generated that contain all the information about the measurements and results, such as graphs, images and numbers. The software package may be viewed and a run-time version downloaded at the website: http://www.qa4us.eu",2006,0, 3264,An integrated wired-wireless testbed for distance learning on networking,"This paper addresses a remote testbed for distance learning designed to allow the investigation of various issues related to QoS management in wired/wireless networks used to support real-time applications. Several aspects, such as traffic handling in routers, congestion control and node mobility management, can be experimentally assessed. The testbed comprises various operating modes that the user can select to configure the traffic flows and modify the operational conditions of the network. A peculiarity of the testbed is node mobility support, which allows problems related to handoff and distance to be tackled. 
The testbed provides for both on-line measurements, through software modules which allow the user to monitor the network while it is operating, and off-line analysis of network behavior through log file inspection",2006,0, 3265,Measurement Error Analysis of Autonomous Decentralized Load Tracking System,"Measurement error analysis is presented for an autonomous decentralized load tracking system. We propose a load tracking mechanism based on incomplete management of load information from subsystems. The system employs an autonomous decentralized architecture, which enables online expansion and fault tolerance. Each subsystem asynchronously broadcasts load information to the data field shared by the subsystems. Load information in the data field is measured for a limited period of time to make the tracking ability efficient. The number of measurements and the estimation of total load from the measurements are stochastic variables. This paper shows the statistical model of measurement noise disturbing the tracking mechanism",2006,0, 3266,Development of Automated Fault Location Software for Distribution Network,"This paper reports on the ongoing development of automated fault location software for distribution networks. The method implemented by the software is able to identify the most probable faulty section based on voltage sag features, i.e. magnitude and phase shift. The actual voltage sags caused by a fault are compared with simulated voltage sags stored in a database. The matching will give all possible faulty sections that have been stored in the database. The software is developed by integrating various software modules. The module will be developed into a component type using a component based development (CBD) approach. By developing the software from components, replacement or modification can be done to any of the modules without affecting the whole software. The test results of the proposed method are satisfactory. Future work to improve the software and method is also discussed in this paper.",2006,0, 3267,Hiddenness Control of Hidden Markov Models and Application to Objective Speech Quality and Isolated-Word Speech Recognition,"Markov models are a special case of hidden Markov models (HMM). In Markov models the state sequence is visible, whereas in a hidden Markov model the underlying state sequence is hidden and the sequence of observations is visible. Previous research on objective techniques for output-based speech quality (OBQ) showed that the state transition probability matrix A of a Markov model is capable of capturing speech quality information. On the other hand, similar experiments using HMMs showed that the observation symbol probability matrix B is more effective at capturing the speech quality information. This shows that the speech quality information in the A matrix of a Markov model shifts to the B matrix of an HMM. An HMM can have varying degrees of hiddenness, which can be intuitively guessed from the entries of its observation probability matrix B for the discrete models. In this paper, we propose a visibility measure to assess the hiddenness of a given HMM, and also a method to control the hiddenness of a discrete HMM. We test the advantage of implementing hiddenness control in output-based objective speech quality (OBQ) and isolated-word speech recognition. 
Our test results suggest that hiddenness control improves the performance of HMM-based OBQ and might be useful for speech recognition as well.",2006,0, 3268,Low Complexity Scalable Video Coding,"In this paper, we consider scalable video coding (SVC), which has higher complexity than H.264/AVC since it has spatial, temporal and quality scalability in addition to H.264/AVC functionality. Furthermore, inter-layer prediction and layered coding for spatial scalability make motion estimation and mode decision more complex. Therefore, we propose low complexity SVC schemes by using the currently developing SVC standard. This is achieved by prediction methods such as skip, direct, and inter-layer MV prediction, with fast mode and motion vector (MV) estimation at the enhancement layer. In order to increase the performance of inter-layer MV prediction, combined MV interpolation is applied with adjustment of the prediction direction. Additionally, fast mode and MV estimation are proposed from structural properties of motion-compensated temporal filtering (MCTF) to elaborate the predicted macro block (MB) mode and MV. From the experimental results, the proposed method has comparable performance to the reference software model with significantly lower complexity.",2006,0, 3269,Specific nearby electronic scheme for multiplexed excitation and detection of piezoelectric silicon-based micromembranes resonant frequencies using FPGA technology,"A new concept for a precise compensation of the static capacitance of piezoelectric silicon-based micromembranes is proposed. Combining analogue and digital (FPGA) hardware elements with specific software treatment, this system enables the parallel excitation and detection of the resonant frequencies (and the quality factors) of piezoelectric silicon-based micromembranes integrated on the same chip. The frequency measurement stability is less than 1 ppm (1-2 Hz) with a switching capability of 4 membranes/sec and a measurement bandwidth of 1.5 MHz.",2006,0, 3270,The RPC system for the CMS experiment,"The CMS experiment at the CERN Large Hadron Collider (LHC) is equipped with a redundant muon trigger system based on drift tube chambers (DT) and cathode strip chambers (CSC), for the precise position measurement, and resistive plate chambers (RPC), for the bunch crossing identification and a preliminary fast measurement of the muon transverse momentum. The CMS RPC system consists of 480 chambers in the barrel and 756 chambers in the forward region, corresponding to a total surface of about 3500 m2. The design and construction have involved many laboratories scattered all over the world. An accurate quality control procedure has been established at different levels of the production to achieve final detectors with operation parameters well inside the CMS specifications. In the summer of 2006 a preliminary slice test involving a fraction of the CMS detector was performed. The performance of the RPC system with cosmic rays was studied versus the magnetic field using final running hardware and software protocols. Results on the RPC main working parameters are reported.",2006,0, 3271,Probabilistic ISOCS Uncertainty Estimator: Application to the Segmented Gamma Scanner,"Traditionally, high resolution gamma-ray spectroscopy (HRGS) has been used as a very powerful tool to determine the radioactivity of various items, such as samples in the laboratory, waste assay containers, or large items in-situ. However, in order to properly interpret the quality of the result, an uncertainty estimate must be made. 
This uncertainty estimate should include the uncertainty in the efficiency calibration of the instrument, as well as many other operational and geometrical parameters. Efficiency calibrations have traditionally been made using traceable radioactive sources. More recently, mathematical calibration techniques have become increasingly accurate and more convenient in terms of time and effort, especially for complex or unusual configurations. Whether mathematical or source-based calibrations are used, any deviations between the as-calibrated geometry and the as-measured geometry contribute to the total measurement uncertainty (TMU). Monte Carlo approaches require source, detector, and surrounding geometry inputs. For non-trivial setups, the Monte Carlo approach is time consuming both in terms of geometry input and CPU processing. Canberra Industries has developed a tool known as In-Situ Object Calibration Software (ISOCS) that utilizes templates for most common real life setups. With over 1000 detectors in use with this product, the ISOCS software has been well validated and proven to be much faster and acceptably accurate for many applications. A segmented gamma scanner (SGS) template is available within ISOCS and we use it here to model this assay instrument for the drummed radioactive waste. Recently, a technique has been developed which uses automated ISOCS mathematical calibrations to evaluate variations between reasonably expected calibration conditions and those that might exist during the actual measurement and to propagate them into an overall uncertainty on the final efficiency. This includes variations in container wall thickness - and diameter, sample height and density, sample non-uniformity, sample-detector geometry, and many other variables, which can be specified according to certain probability distributions. The software has a sensitivity analysis mode which varies one parameter at a time and allows the user to identify those variables that have the largest contribution to the uncertainty. There is an uncertainty mode which uses probabilistic techniques to combine all the variables and compute the average efficiency and the uncertainty in that efficiency, and then to propagate those values with the gamma spectroscopic analysis into the final result. In the areas of waste handling and environmental protection, nondestructive assay by gamma ray scanning can provide a fast, convenient, and reliable way of measuring many radionuclides in closed items. The SGS is designed to perform accurate quantitative assays on gamma emitting nuclides such as fission products, activation products, and transuranic nuclides. For the SGS, this technique has been applied to understand impacts of the geometry variations during calibration on the efficiency and to estimate the TMU.",2006,0, 3272,On-line Monitoring of Mechanical Characteristics for Vacuum Circuit Breaker,"A novel scheme combined the intelligent monitoring and fault diagnosis of mechanical characteristics for vacuum circuit breakers (VCBs) is proposed. The main principle of monitoring and diagnostic is stated and the design of hardware and software has been realized. By applying the developed on-line monitoring and diagnostic system, the mechanical characteristics of VCBs can be obtained and the detectable faults have been clearly identified, located, displayed and stored. The less maintenance and lower cost can be realized. 
Furthermore, the availability and reliability of the power system can be improved",2006,0, 3273,Comparative Study of Various Artificial Intelligence Techniques to Predict Software Quality,"Software quality prediction models are used to identify software modules that may cause potential quality problems. These models are based on various metrics available during the early stages of the software development life cycle, like product size, software complexity, coupling and cohesion. In this survey paper, we have compared and discussed some software quality prediction approaches based on Bayesian belief network, neural networks, fuzzy logic, support vector machine, expectation maximum likelihood algorithm and case-based reasoning. This study gives better comparative insight into these approaches, and helps to select an approach based on available resources and desired level of quality.",2006,0, 3274,Analytical Hierarchy Process Approach to Rank Measures for Structural Complexity of Conceptual Models,"This paper presents the result of a controlled experiment conducted to determine the relative importance of some measures, identified in research, for the structural complexity of entity-relationship (ER) models. The relative importance amongst these measures is calculated by applying the analytical hierarchy process approach. The results reveal that the number of relations in an ER diagram is of the highest importance in measuring the structural complexity in terms of understandability, analyzability and modifiability; whereas the number of attributes does not play an important role. The study presented here can lead to developing quantitative metrics for comparing the quality of alternative conceptual models of the same problem",2006,0, 3275,Challenges in System on Chip Verification,"The challenges of system on a chip (SoC) verification are becoming increasingly complex as submicron process technology shrinks die size, enabling system architects to include more functionality in a single chip solution. A functional defect refers to the feature sets, protocols or performance parameters not conforming to the specifications of the SoC. Some of the functional defects can be solved by software workarounds but some require revisions of silicon. The revision of silicon not only costs millions of dollars but also impacts time to market, quality, and customer commitments. Working silicon for the first revision of the SoC requires a robust module, chip and system verification strategy to uncover the logical and timing defects before tapeout. Different techniques are needed at each level (module, chip and system) to complete verification. In addition, verification should be quantified with a metric at every level of the hierarchy to assess functional holes and address them. The verification metric can be a combination of code coverage, functional coverage, assertion coverage, protocol coverage, interface coverage and system coverage. A successful verification strategy also requires the test bench to be scalable and configurable, and to support reuse of functional tests, integration with tools and, finally, linkage to validation. This paper will discuss the verification strategy and its pitfalls and finally make recommendations for a successful strategy.",2006,0, 3276,Debug Support for Scalable System-on-Chip,"On-chip debug is an important technique to detect and locate faults in practical software applications. Scalability and reusability are essential features of system-on-chip (SoC). 
Therefore, the debug architecture should meet the requirements of those features. Furthermore, it is necessary for application developers to communicate with the SoC chip on-line. In this paper, we present a novel debug architecture to solve the above problems. The debug architecture has been implemented in a typical SoC chip. The results of performance analysis show that the debug architecture has high performance at the cost of few resources and little area.",2006,0, 3277,Transient Error Detection in Embedded Systems Using Reconfigurable Components,"In this paper, a hardware control flow checking technique is presented and evaluated. This technique uses a reconfigurable off-the-shelf FPGA in order to concurrently check the execution flow of the target microprocessor. The technique assigns signatures to the main program at compile time and verifies the signatures using an FPGA as a watchdog processor to detect possible violations caused by transient faults. The main characteristic of this technique is its ability to be applied to any kind of processor architecture and platform. The low hardware and performance overhead imposed by this technique makes it suitable for those applications in which cost is a major concern, such as industrial applications. The proposed technique is experimentally evaluated on an 8051 microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected control flow errors. The watchdog processor occupied 26% of an Altera Max-7000 FPGA chip's logic cells. The performance overhead varies between 42% and 82% depending on the workload used.",2006,0, 3278,Micropatterned non-invasive dry electrodes for Brain-Computer Interface,"Partially or completely paralyzed patients can benefit from advanced neuro-prostheses in which continuous recording of the electroencephalogram (EEG) is required, with processing and classification applied to control a computer (BCI, brain-computer interface). Patients are thus able to control external devices or to communicate simple messages through the computer, by simply concentrating their attention on codified movements or on a letter or icon on a digital keyboard. Conventional electrodes usually require skin preparation and application of electrolytic gel for high-quality, low-amplitude biopotential recordings and are not suitable for being easily used by patients or caregivers at home in BCI or equivalent systems. In this report we describe the fabrication and characterization of dry (gel not required), non-invasive, user-friendly biopotential electrodes. The electrodes consist of a bidimensional array of micro-needles designed to pierce the first dielectric skin layer (stratum corneum) and to establish direct contact with the living, electrically conducting cells in the epidermis (no blood vessels and nerve terminations). The easy and immediate application of the spiked electrodes also makes them attractive for all long-term surface biosignal measurements, even at the patient's home (EEG, electrocardiogram, etc).",2006,0, 3279,Improving Software Quality Classification with Random Projection,"Improving the quality of software products is one of the principal objectives of software engineering. Software metrics are the key tool in software quality management. In this paper we propose to use naive Bayes and RIPPER for software quality classification and use random projection to improve the performance of classifiers. 
Feature extraction via random projection has attracted considerable attention in recent years. The approach has interesting theoretical underpinnings and offers computational advantages. Results on benchmark dataset MIS, using Accuracy and Recall as performance measures, indicate that random projection can improve the classification performance of all four learners we investigate: Naive Bayes, RIPPER, MLP and IB1. With the help of random projection, naive Bayes and RIPPER are better than MLP and IB1 in finding fault-high software modules, which can be sought as potentially highly faulty modules where most of our testing and maintenance effort should be focused",2006,0, 3280,The Application of the System Parameter Fusion Principle to Assessing University Electronic Library Performance,"Modern technology provides a great amount of information. But for computer monitoring systems or computer control systems, in order to have the situation in hand, we need to reduce the number of variables to one or two parameters, which express the quality and/or security of the whole system. In this paper, the authors introduce the system parameter fusion principle put forward by the third author and present how to apply it to assessing university electronic library performance combining with the Delphi technique and AHP",2006,0, 3281,Predicting Qualitative Assessments Using Fuzzy Aggregation,"Given the complexity and sophistication of many contemporary software systems, it is often difficult to gauge the effectiveness, maintainability, extensibility, and efficiency of their underlying software components. A strategy to evaluate the qualitative attributes of a system's components is to use software metrics as quantitative predictors. We present a fusion strategy that combines the predicted qualitative assessments from multiple classifiers with the anticipated outcome that the aggregated predictions are superior to any individual classifier prediction. Multiple linear classifiers are presented with different, randomly selected, subsets of software metrics. In this study, the software components are from a sophisticated biomedical data analysis system, while the external reference test is a thorough assessment of both complexity and maintainability, by a software architect, of each system component. The fuzzy integration results are compared against the best individual classifier operating on a software metric subset",2006,0, 3282,Neural Model Predictive Controller for Multivariable Process,"In this paper, control of a non-minimal quadruple tank process, which is non linear and multivariable is reported. A nonlinear Model Predictive Controller (NMPC) is developed using a Recurrent Neural Network (RNN) as a predictor. The process data is obtained from the laboratory scale experimental setup, which is used in training the RNN. The network trained is used in controlling the quadruple tank, solving the least square optimization problem with a quadratic performance objective. The control system is implemented in real-time on a laboratory scale plant using dSPACE interface card and Matlab software. The quality of controller using NMPC is compared with dynamic matrix control (DMC) for reference tracking and external disturbance rejection.",2006,0, 3283,Optimal Design of Distributed Databases,"The physical expansion of enterprises with the current trends in globalization has had a positive influence on the technologies of distributed systems. 
At the forefront of this technological revolution are distributed database systems. For a distributed database to be at optimal performance and thus provide an efficient service it needs to be designed appropriately. The significance of the perfect design is only emphasized by the multiple dimensions required in generating a design. The purpose of this paper is to suggest an approach to generate optimal designs for such distributed database systems and to develop a prototype to demonstrate the said approach. The approach emphasizes on the accuracy of inputs as it largely determines the quality of the final solution. Hence the extraction of network information, a key dimension, is automated to ensure precision. The global schema is fragmented considering data requirements as well as connectivity of each site. Allocation of fragments is treated as a combinatorial optimization problem and assigned to a memetic algorithm. An estimation of distribution algorithm complements the search effort of this memetic algorithm. Site options for replication server environments are investigated based on a shortest path algorithm. Usability of the system in an object oriented development environment, through conditional object-relational mapping, is also explored. The prototype was developed using an evolutionary prototyping approach. It was evaluated by several experts in the relevant fields of application. The results of which, confirmed the practicality of the suggested approach.",2006,0, 3284,Fuzzyness And Imprecision In Software Engineering,"In this paper, we study how fuzziness in software engineering emerges and how to reflect this fuzziness in measuring software qualities. Principal means in these processes are software metrics with values in categorical data represented by software metrics with values in fuzzy sets and linguistic variables. This study is aimed to support the development of high quality software. The process of program design as a transition from a problem to a program is studied. A classification of software metrics is developed with the aim of better structuring and optimization of the software fuzzy metric design. Processes of constructing new measures from existing ones often use aggregation operations. Here we study aggregation operations for fuzzy set based software metrics.",2006,0, 3285,Intra- and Inter-Processors Memory Size Estimation for Multithreaded MPSoC Modeled in Simulink,"Target architectures for Simulink-based flows have been generally fixed and partitioned manually. The quality of the results or the final performance of the MPSoC depends heavily on the adaptation of the architecture to the needs in terms of processing performance as well as the efficiency of the communication protocols. The processing horse power determination is not limited to the selection of appropriate processors, but also to the amount of memory needed to run the software tasks. Moreover, inter-processors buffer memories and their sizes are as critical and require attention and focus to avoid communication loss or bottlenecks.",2006,0, 3286,TDR Single Ended and Differential Measurement Methodology,"As the frequency of the high speed interfaces in computer platforms and systems continue to ramp due to architectural advancement and strong market demands, electronic packages have been stretched to its limit to support these interfaces. 
Furthermore, due to increased package complexity, signal integrity and manufacturing concerns, and reduced package size, designers are facing many signal quality issues caused by impedance mismatch. To ensure signal integrity, it is necessary to understand and control impedance in the transmission environment through which the signals travel. Mismatches and variations can cause reflections that decrease signal quality as a whole. TDR (Time Domain Reflectometry) is the most common tool for verification and analysis of the transmission properties of high-speed systems and components. It is able to locate signal path discontinuities that cause reflections, verify trace characteristic impedance and estimate trace length. Hence, this paper discusses TDR setup and the types of TDR measurement procedures and methodologies for single ended and differential pair interfaces in the package. There are several key elements that determine the precision of TDR measurement, such as the edge speed of the stimulus pulse produced by the TDR, the bandwidth of the channel used to receive the pulse from the DUT, and the calibration and deskew process. Correlation between measurement data and simulation data is performed in order to ensure the accuracy of the data collected. The simulation method is done by using Ansoft 3D and 2D simulation tools to characterize package interconnects such as traces, wirebond, VIA/PTH and ball structures. The output file (Spice File, .sp) will be imported into Ansoft Designer software for TDR simulations.",2006,0, 3287,Hardware assisted pre-emptive control flow checking for embedded processors to improve reliability,"Reliability in embedded processors can be improved by control flow checking and such checking can be conducted using software or hardware. Proposed software-only approaches suffer from significant code size penalties, resulting in poor performance. Proposed hardware-assisted approaches are not scalable and therefore cannot be implemented in real embedded systems. This paper presents a scalable, cost effective and novel fault detection technique to ensure proper control flow of a program. This technique includes architectural changes to the processor and software modifications. While architectural refinement incorporates additional instructions, the software transformation incorporates these instructions into the program flow. Applications from an embedded systems benchmark suite are used for testing and evaluation. The overheads are compared with the state of the art approach that performs the same error coverage using software-only techniques. Our method has greatly reduced overheads compared to the state of the art. Our approach increased code size by between 3.85-11.2% and reduced performance by just 0.24-1.47% for eight different industry standard applications. The additional hardware (gates) overhead in this approach was just 3.59%. In contrast, the state of the art software-only approach required 50-150% additional code, and reduced performance by 53.5-99.5% when error detection was inserted.",2006,0, 3288,Multi-processor system design with ESPAM,"For modern embedded systems, the complexity of embedded applications has reached a point where the performance requirements of these applications can no longer be supported by embedded system architectures based on a single processor. Thus, the emerging embedded System-on-Chip platforms are increasingly becoming multiprocessor architectures. 
As a consequence, two major problems emerge, i.e., how to design and how to program such multiprocessor platforms in a systematic and automated way in order to reduce the design time and to satisfy the performance needs of applications executed on these platforms. Unfortunately, most of the current design methodologies and tools are based on Register Transfer Level (RTL) descriptions, mostly created by hand. Such methodologies are inadequate, because creating RTL descriptions of complex multiprocessor systems is error-prone and time consuming. As an efficient solution to these two problems, in this paper we propose a methodology and techniques implemented in a tool called ESPAM for automated multiprocessor system design and implementation. ESPAM moves the design specification from RTL to a higher, so called system level of abstraction. We explain how starting from system level platform, application, and mapping specifications, a multiprocessor platform is synthesized and programmed in a systematic and automated way. Furthermore, we present some results obtained by applying our methodology and ESPAM tool to automatically generate multiprocessor systems that execute a real-life application, namely a Motion-JPEG encoder.",2006,0, 3289,INS/GPS Integrated Navigation Uncertain System State Estimation Based on Minimal Variance Robust Filtering,"INS/GPS integrated navigation systems are used for positioning and attitude determination in a wide range of applications. Combining INS and GPS organically, we can get an integrated navigating system which can overcome the respective defects of the two systems and develop their advantages to form a complementary structure at the same time. GPS measurements can be used correct the INS and sensor errors to provide high accuracy real-time navigation (Farrell, 1998). The integration of GPS and INS measurements is usually achieved using a Kalman filter. But, usually uncertainties exist in INS/GPS integrated real systems, which may cause filtering to divergence for classical Kalman filter. In order to solve this problem, a robust filter with minimal variance is addressed by this paper",2006,0, 3290,Evolving GA Classifiler for Audio Steganalysis based on Audio Quality Metrics,"Differentiating anomalous audio document (Stego audio) from pure audio document (cover audio) is difficult and tedious. Steganalytic techniques strive to detect whether an audio contains a hidden message or not. This paper presents a genetic algorithm (GA) based approach to audio steganalysis, and the software implementation of the approach. The basic idea is that, the various audio quality metrics calculated on cover audio signals and on stego-audio signals vis-a-vis their denoised versions, are statistically different. GA is employed to derive a set of classification rules from audio data using these audio quality metrics, and fitness function is used to judge the quality of each rule. The generated rules are then used to detect or classify the audio documents in a real-time environment. Unlike most existing GA-based approaches, because of the simple representation of rules and the effective fitness function, the proposed method is easier to implement while providing the flexibility to generally detect any new steganography technique. The implementation of the GA based audio steganalyzer relies on the choice of these audio quality metrics and the construction of a two-class classifier, which will discriminate between the adulterated and the untouched audio samples. 
Experimental results show that the proposed technique provides promising detection rates.",2006,0, 3291,A Fuzzy-Inference System Based Approach for the Prediction of Quality of Reusable Software Components,"The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components. These metrics, if identified in the design phase or even in the coding phase, can help us to reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probabilistic increase in the reuse level. A suite of metrics can be used to assess the reusability of modules, and the reusability can be obtained with the help of fuzzy logic. An algorithm has been proposed in which the inputs can be given to the fuzzy system in the form of Cyclomatic Complexity, Volume, Regularity, Reuse-Frequency & Coupling, and the output can be obtained in terms of reusability.",2006,0, 3292,Polarisation of Electromagnetic Waves Analysis for Application in Mobile Communication Systems,"The polarisation property of electromagnetic waves for applications in mobile communication systems has not been examined in detail yet. Due to multi-path effects and changes in the underlying coordinate systems of the involved antennas, the polarisation state of an electromagnetic wave at the receiving antenna of a mobile station is very likely to change. The presented project involves data measurement and a detailed visual investigation of these polarisation properties in the field of GSM mobile communication at the 1800 MHz frequency band. The measuring system mounted on a special car moves along a predefined trajectory around a GSM telecommunication cell. The collected data, which are received from a dual polarisation antenna, together with the information on the location, orientation and velocity of the system, which is provided by other sensors, is subsequently processed by the standard signal analysis hardware and the digital signal processing software, so that the polarisation behaviour of the received wave can be identified and visualized. The visualisation system is based on a digitized map into which the data received from GPS is inserted, specifying the actual position of the measuring system. A particular polarisation state is simultaneously assigned to the position data in the digital map. The data analysis and signal filtering are performed to estimate multi-path propagation effects and to reduce their influence on the quality of the link. The results of this project could, for instance, be used in future generations of wireless mobile communication systems, where the polarisation dependent effects may contribute to increased reliability and capacity of the transmission service. This research was performed within the AMPER project, which is supported by the EU Commission.",2006,0, 3293,Ensuring Numerical Quality in Grid Computing,"Certain numerically intensive applications executed within a grid computing environment crucially depend on the properties of floating-point arithmetic implemented on the respective platform. Differences in these properties may have drastic effects. This paper identifies the central problems related to this situation. We propose an approach which gives the user valuable information on the various platforms available in a grid computing environment in order to assess the numerical quality of an algorithm run on each of these platforms. 
In this manner, the user will at least have very strong hints whether a program will perform reliably in a grid before actually executing it. Our approach extends the existing IeeeCC754 test suite by two ""grid-enabled"" modes: The first mode calculates a ""numerical checksum"" on a specific grid host and executes the job only if the checksum is identical to a locally generated one. The second mode provides the user with information on the reliability and IEEE 754-conformity of the underlying floating-point implementation of various platforms. Furthermore, it can help to find a set of compiler options to optimize the application's performance while retaining numerical stability.",2006,0, 3294,Pathological Voice Assessment,"While there are number of guidelines and methods used in practice, there is no standard universally agreed upon system for assessment of pathological voices. Pathological voices are primarily labeled based on the perceptual judgments of specialists, a process that may result in different label(s) being assigned to a given voice sample. This paper focuses on the recognition of five specific pathologies. The main goal is to compare two different classification methods. The first method considers single label classification by assigning a new label (single label) to the ensembles to which they most likely belong. The second method employs all labels originally assigned to the voice samples. Our results show that the pathological voice assessment performance in the second method is improved with respect to the first method",2006,0, 3295,Clinical Evaluation of Watermarked Medical Images,"Digital watermarking medical images provides security to the images. The purpose of this study was to see whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded 256 bits watermark to various medical images in the region of non-interest (RONI) and 480K bits in both region of interest (ROI) and RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digital watermarking medical images were safe in terms of preserving image quality for clinical purposes",2006,0, 3296,The preliminary study on the clinical application of the WHAM (Wearable Heart Activity Monitor),"In this paper, we investigated the validity of the WHAM (wearable heart activity monitor) in the clinical applications, which has been implemented as a wearable ambulatory device for continuously and long-term monitoring user's cardiac conditions. To this end, using the WHAM and the conventional Holter monitor the ECG signals over 24 hours were recorded during daily activities. The signal from the WHAM was compared with that from the conventional Holter monitor in terms of the readability of the signal, the quality of the signal, and the accuracy of arrhythmia detection. The performance of the WHAM was a little lower as compared with the conventional Holter monitor, although showing no significant difference (the readability of the signal: 97.2% vs 99.3%; the quality of the signal: 0.97 vs 0.98; the accuracy of arrhythmia detection: 96.2% vs 98.1%). 
From these results, it is likely that the WHAM shows sufficient performance to be used in clinical applications as a wearable ambulatory monitoring device",2006,0, 3297,Fragility Issues of Medical Video Streaming over 802.11e-WLAN m-health Environments,"This paper presents some of the fragility issues of medical video streaming over 802.11e-WLAN in m-health applications. In particular, a medical channel-adaptive fair allocation (MCAFA) scheme for enhanced QoS support for IEEE 802.11 (WLAN), as a modification of the standard 802.11e enhanced distributed coordination function (EDCF), is proposed for enhanced medical data performance. The proposed MCAFA extends the EDCF by halving the contention window (CW) after ζ consecutive successful transmissions to reduce the collision probability when the channel is busy. Simulation results show that MCAFA outperforms EDCF in terms of overall performance relevant to the requirements of high throughput of medical data and video streaming traffic in 3G/WLAN wireless environments",2006,0, 3298,A Systematic Review of Technical Evaluation in Telemedicine Systems,"We conducted a systematic review of the literature to critically analyse the evaluation and assessment frameworks that have been applied to telemedicine systems. Subjective methods were predominantly used for technical evaluation (59%), e.g. Likert scale. Those including objective measurements (41%) were restricted to simple metrics such as network time delays. Only three papers included a rigorous standards-based objective approach. Our investigation was unable to find a definitive standards-based telemedicine evaluation framework in the literature that may be applied systematically to assess and compare telemedicine systems. We conclude that work needs to be done to address this deficiency. We have therefore developed a framework that has been used to evaluate videoconferencing systems in telemedicine applications. Our method seeks to be simple to allow relatively inexperienced users to make measurements, is objective and repeatable, is standards based, is inexpensive and requires little specialist equipment. We use the EIA 1956 broadcast test card to assess resolution and grey scale and to test for astigmatism. Colour discrimination is assessed with the TE 106 and Ishihara 24 colour scale chart. Network protocol analysis software is used to assess network performance (throughput, delay, jitter, packet loss)",2006,0, 3299,Improved automated quantification of left ventricular size and function from cardiac magnetic resonance images,"Assessment of left ventricular (LV) size and function from cardiac magnetic resonance (CMR) images requires manual tracing of LV borders on multiple 2D slices, which is subjective, experience dependent, tedious and time-consuming. We tested a new method for automated dynamic segmentation of CMR images based on a modified region-based model, in which a level set function minimizes a functional containing information regarding the probability density distribution of the gray levels. Images (GE 1.5T FIESTA) obtained in 9 patients were analyzed to automatically detect LV endocardial boundaries and calculate LV volumes and ejection fraction (EF). These measurements were validated against manual tracing. 
The automated calculation of LV volumes and EF was completed in each patient in <3 min and resulted in a high level of agreement, with no significant bias and narrow limits of agreement, with the reference technique. The proposed technique allows fast automated detection of endocardial boundaries as a basis for accurate quantification of LV size and function from CMR images.",2006,0, 3300,An Ontology-Based Approach for Domain Requirements Elicitation and Analysis,"Domain requirements are fundamental for software reuse and are the product of domain analysis. This paper presents an ontology-based approach to elicit and analyze domain requirements. An ontology definition is given. The problem domain is decomposed into several sub problem domains by using a subjective decomposition method. The top-down refinement method is used to refine each sub problem domain into primitive requirements. Abstract stakeholders are used instead of real ones when decomposing the problem domain, and domain primitive requirements are represented by the ontology. Not only are domain commonality, variability and qualities presented, but reasoning logic is also used to detect and handle incompleteness and inconsistency of domain requirements. In addition, a case from the 'spot and futures transaction' domain is used to illustrate the approach",2006,0, 3301,Grid Connection to Stand Alone Transitions of Slip Ring Induction Generator During Grid Faults,"Grid connected power generation systems based on superior controllers of active and reactive power are useless during grid failures like a grid short-circuit or line breaking. Therefore the change of operation mode from grid connection to stand alone allows for uninterruptible supply of a selected part of the grid connected load. However, in the stand alone operation mode the superior controllers should provide fixed amplitude and frequency of the generated voltage in spite of the load nature. Moreover, a soft transition from grid connection mode to stand alone operation requires that a mains outage detection method be applied. A grid voltage recovery requires a change of the generator operational mode from stand alone to grid connection. However, the protection of a load from rapid change of the supply voltage phase is necessary. This may be achieved by synchronization of the generated and grid voltages and controllable soft connection of the generator to the grid. The paper presents the transients of controllable soft connection and disconnection to the grid of the variable speed doubly fed induction generator (DFIG) power system. A description of the mains outage detection methods for the DFIG is based on the grid voltage amplitude and frequency measurement and comparison with standard values. Also an angle controller, between generated and grid voltages, for the synchronization process is described. A short description of the sensorless direct voltage control of the autonomous doubly fed induction generator (ADFIG) is presented. All the presented methods are proved using PSIM simulation software and in laboratory conditions, and the oscillograms with test results are presented in the paper. A 2.2 kW slip-ring induction machine was applied as a generator and a 3.5 kW DC motor was used as a prime mover for speed adjustment. The switching and sampling frequencies are equal to 8 kHz. For filtering the switching frequency distortions in the output voltage, external capacitances equal to 21 μF per phase are connected to the stator. 
The control algorithm is implemented in a DSP controller built on a floating-point ADSP-21061 with Altera/FPGA support",2006,0, 3302,Operational Fault Detection in cellular wireless base-stations,"The goal of this work is to improve the availability of operational base-stations in a wireless mobile network through non-intrusive fault detection methods. Since revenue is generated only when actual customer calls are processed, we develop a scheme to minimize revenue loss by monitoring real-time mobile user call processing activity. The mobile user call load profile experienced by a base-station displays a highly non-stationary temporal behavior with time-of-day, day-of-the-week and time-of-year variations. In addition, the geographic location also impacts the traffic profile, making each base-station have its own unique traffic patterns. A hierarchical base-station fault monitoring and detection scheme has been implemented in an IS-95 CDMA Cellular network that can detect faults at base station level, sector level, carrier level, and channel level. A statistical hypothesis test framework, based on a combination of parametric, semi-parametric and non-parametric test statistics, is defined for determining faults. The fault or alarm thresholds are determined by learning expected deviations during a training phase. Additionally, fault thresholds have to adapt to spatial and temporal mobile traffic patterns that slowly change with seasonal traffic drifts over time and increasing penetration of mobile user density. Feedback mechanisms are provided for threshold adaptation and self-management, which includes automatic recovery actions and software reconfiguration. We call this method Operational Fault Detection (OFD). We describe the operation of a few select features from a large family of OFD features in Base Stations; summarize the algorithms, their performance and comment on future work.",2006,0, 3303,On effective use of reliability models and defect data in software development,"In software technology today, several development methodologies such as extreme programming and open source development increasingly use feedback from customer testing. This makes customer defect data more readily available. This paper proposes an effective use of reliability models and defect data to help managers make software release decisions by applying a strategy for selecting a suitable reliability model, which best fits the customer defect data as testing progresses. We validate the proposed approach in an empirical study using a dataset of defect reports obtained from testing of three releases of a large medical system. The paper describes detailed results of our experiments and concludes with suggested guidelines on the usage of reliability models and defect data.",2006,0, 3304,Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes,"Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). 
Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.",2007,1, 3305,Predicting Defects for Eclipse,"We have mapped defects from the bug database of eclipse (one of the largest open-source projects) to source code locations. The resulting data set lists the number of pre- and post-release defects for every package and file in the eclipse releases 2.0, 2.1, and 3.0. We additionally annotated the data with common complexity metrics. All data is publicly available and can serve as a benchmark for defect prediction models.",2007,1, 3306,Empirical Analysis of Software Fault Content and Fault Proneness Using Bayesian Methods,"We present a methodology for Bayesian analysis of software quality. We cast our research in the broader context of constructing a causal framework that can include process, product, and other diverse sources of information regarding fault introduction during the software development process. In this paper, we discuss the aspect of relating internal product metrics to external quality metrics. Specifically, we build a Bayesian network (BN) model to relate object-oriented software metrics to software fault content and fault proneness. Assuming that the relationship can be described as a generalized linear model, we derive parametric functional forms for the target node conditional distributions in the BN. These functional forms are shown to be able to represent linear, Poisson, and binomial logistic regression. The models are empirically evaluated using a public domain data set from a software subsystem. The results show that our approach produces statistically significant estimations and that our overall modeling method performs no worse than existing techniques.",2007,1, 3307,Using Software Dependencies and Churn Metrics to Predict Field Failures: An Empirical Case Study,"Commercial software development is a complex task that requires a thorough understanding of the architecture of the software system. We analyze the Windows Server 2003 operating system in order to assess the relationship between its software dependencies, churn measures and post-release failures. Our analysis indicates the ability of software dependencies and churn measures to be efficient predictors of post-release failures. Further, we investigate the relationship between the software dependencies and churn measures and their ability to assess failure-proneness probabilities at statistically significant levels.",2007,1, 3308,Fault Prediction using Early Lifecycle Data,"The prediction of fault-prone modules in a software project has been the topic of many studies. 
In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using the metrics that characterize textual requirements. We compare the performance of requirements-based models against the performance of code-based models and models that combine requirement and code metrics. Using a range of modeling techniques and the data from three NASA projects, our study indicates that the early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during the development or by using the models to assign verification and validation activities.",2007,1, 3309,Data Mining Static Code Attributes to Learn Defect Predictors,"The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of ""McCabes versus Halstead versus lines of code counts"" for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarms rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected",2007,1, 3310,Inverse wave field extrapolation: a different NDI approach to imaging defects,"Nondestructive inspection (NDI) based on ultrasound is widely used. A relatively recent development for industrial applications is the use of ultrasonic array technology. Here, ultrasonic beams generated by array transducers are controlled by a computer. This makes the use of arrays more flexible than conventional single-element transducers. However, the inspection techniques have principally remained unchanged. As a consequence, the properties of these techniques, as far as characterization and sizing are concerned, have not improved. For further improvement, in this paper we apply imaging theory developed for seismic exploration of oil and gas fields on the NDI application. Synthetic data obtained from finite difference simulations is used to illustrate the principle of imaging. Measured data is obtained with a 64-element linear array (4 MHz) on a 20-mm thick steel block with a bore hole to illustrate the imaging approach. Furthermore, three examples of real data are presented, representing a lack of fusion defect, a surface breaking crack, and porosity",2007,0, 3311,Integration of Ground-Penetrating Radar and Laser Position Sensors for Real-Time 3-D Data Fusion,"Full-resolution 3-D ground-penetrating radar (GPR) imaging of the near surface should be simple and efficient. Geoscientists, archeologists, and engineers need a tool capable of generating interpretable subsurface views at centimeter-to-meter resolution of field sites ranging from smooth parking lots to rugged terrain. The authors have integrated novel rotary laser positioning technology with GPR into such a 3-D imaging system. The laser positioning enables acquisition of centimeter accurate x, y, and z coordinates from multiple small detectors attached to moving GPR antennas. Positions streaming with 20 updates/s from each detector are fused in real time with the GPR data. 
The authors developed software for automated data acquisition and real time 3-D GPR data quality control on slices at selected depths. Industry standard (SEGY) format data cubes and animations are generated within an hour after the last trace has been acquired. Such instant 3-D GPR can be used as an on-site imaging tool supporting field work, hypothesis testing, and optimized excavation and sample collection in the exploration of the static and dynamic nature of the shallow subsurface",2007,0, 3312,Reliable Effects Screening: A Distributed Continuous Quality Assurance Process for Monitoring Performance Degradation in Evolving Software Systems,"Developers of highly configurable performance-intensive software systems often use in-house performance-oriented ""regression testing"" to ensure that their modifications do not adversely affect their software's performance across its large configuration space. Unfortunately, time and resource constraints can limit in-house testing to a relatively small number of possible configurations, followed by unreliable extrapolation from these results to the entire configuration space. As a result, many performance bottlenecks escape detection until systems are fielded. In our earlier work, we improved the situation outlined above by developing an initial quality assurance process called ""main effects screening"". This process 1) executes formally designed experiments to identify an appropriate subset of configurations on which to base the performance-oriented regression testing, 2) executes benchmarks on this subset whenever the software changes, and 3) provides tool support for executing these actions on in-the-field and in-house computing resources. Our initial process had several limitations, however, since it was manually configured (which was tedious and error-prone) and relied on strong and untested assumptions for its accuracy (which made its use unacceptably risky in practice). This paper presents a new quality assurance process called ""reliable effects screening"" that provides three significant improvements to our earlier work. First, it allows developers to economically verify key assumptions during process execution. Second, it integrates several model-driven engineering tools to make process configuration and execution much easier and less error prone. Third, we evaluate this process via several feasibility studies of three large, widely used performance-intensive software frameworks. Our results indicate that reliable effects screening can detect performance degradation in large-scale systems more reliably and with significantly less resources than conventional techniques",2007,0, 3313,An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling,"This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation.
First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",2007,0, 3314,Wrapper–Filter Feature Selection Algorithm Using a Memetic Framework,"This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for a classification problem using a memetic framework. It incorporates a filter ranking method in the traditional genetic algorithm to improve classification performance and accelerate the search in identifying the core feature subsets. Particularly, the method adds or deletes a feature from a candidate feature subset based on the univariate feature ranking information. This empirical study on commonly used data sets from the University of California, Irvine repository and microarray data sets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of memetic algorithm (MA) to identify a good balance between local search and genetic search so as to maximize search quality and efficiency in the hybrid filter and wrapper MA",2007,0, 3315,A Multipurpose Code Coverage Tool for Java,"Most test coverage analyzers help in evaluating the effectiveness of testing by providing data on statement and branch coverage achieved during testing. If made available, the coverage information can be very useful for many other related activities, like, regression testing, test case prioritization, test-suite augmentation, test-suite minimization, etc. In this paper, we present a Java-based tool JavaCodeCoverage for test coverage reporting. It supports testing and related activities by recording the test coverage for various code-elements and updating the coverage information when the code being tested is modified. The tool maintains the test coverage information for a set of test cases on individual as well as test suite basis and provides effective visualization for the same. Open source database support of the tool makes it very useful for software testing research",2007,0, 3316,Inside Architecture Evaluation: Analysis and Representation of Optimization Potential,"The share of software in embedded systems has been growing permanently in the recent years.
Thus, software architecture as well as its evaluation has become an important part of embedded systems design to define, assess, and assure architecture and system quality. Furthermore, design space exploration can be based on architecture evaluation. To achieve an efficient exploration process, architectural decisions need to be well considered. In this paper, analysis of architecture evaluation is performed to uncover dependencies of the quality attributes which are the first class citizens of architecture evaluation. With an explicit representation of such dependencies, valuable changes of an architecture can be calculated. Next to the exploration support, the analysis results help to document architecture knowledge and make architectural decisions explicit and traceable. The development process can now be based on dependable and well documented architectural decisions. Effects of changes become more predictable. Time and costs can be saved by avoiding suboptimal changes.",2007,0, 3317,Constructing a Reading Guide for Software Product Audits,"Architectural knowledge is reflected in various artifacts of a software product. In the case of a software product audit this architectural knowledge needs to be uncovered and its effects assessed, in order to evaluate the quality of the software product. A particular problem is to find and comprehend the architectural knowledge that resides in the software product documentation. The amount of documents, and the differences in for instance target audience and level of abstraction, make it a difficult job for the auditors to find their way through the documentation. This paper discusses how the use of a technique called latent semantic analysis can guide the auditors through the documentation to the architectural knowledge they need. Using latent semantic analysis, we effectively construct a reading guide for software product audits.",2007,0, 3318,Special Issue on Critical Reliability Challenges and Practices [Guest Editorial],"The six papers in this special issue address various research challenges in the reliability-related areas including optimization, software reliability and measurements, network reliability, software quality, semisupervised learning, Bayesian approach, statistical anomaly detection, flooding attacks, genetic algorithms, multistate systems, system redundancy, service reliability, multiparadigm modeling, heuristic algorithms, and grid and distributed computing. The selected papers are briefly summarized.",2007,0, 3319,Generalized Discrete Software Reliability Modeling With Effect of Program Size,"Generalized methods for software reliability growth modeling have been proposed so far. But, most of them are on continuous-time software reliability growth modeling. Many discrete software reliability growth models (SRGM) have been proposed to describe a software reliability growth process depending on discrete testing time such as the number of days (or weeks); the number of executed test cases. In this paper, we discuss generalized discrete software reliability growth modeling in which the software failure-occurrence times follow a discrete probability distribution. Our generalized discrete SRGMs enable us to assess software reliability in consideration of the effect of the program size, which is one of the influential factors related to the software reliability growth process. Specifically, we develop discrete SRGMs in which the software failure-occurrence times follow geometric and discrete Rayleigh distributions, respectively. 
Moreover, we derive software reliability assessment measures based on a unified framework for discrete software reliability growth modeling. Additionally, we also discuss optimal software release problems based on our generalized discrete software reliability growth modeling. Finally, we show numerical examples of software reliability assessment by using actual fault-counting data",2007,0, 3320,Software Quality Analysis of Unlabeled Program Modules With Semisupervised Clustering,"Software quality assurance is a vital component of software project development. A software quality estimation model is trained using software measurement and defect (software quality) data of a previously developed release or similar project. Such an approach assumes that the development organization has experience with systems similar to the current project and that defect data are available for all modules in the training data. In software engineering practice, however, various practical issues limit the availability of defect data for modules in the training data. In addition, the organization may not have experience developing a similar system. In such cases, the task of software quality estimation or labeling modules as fault prone or not fault prone falls on the expert. We propose a semisupervised clustering scheme for software quality analysis of program modules with no defect data or quality-based class labels. It is a constraint-based semisupervised clustering scheme that uses k-means as the underlying clustering algorithm. Software measurement data sets obtained from multiple National Aeronautics and Space Administration software projects are used in our empirical investigation. The proposed technique is shown to aid the expert in making better estimations as compared to predictions made when the expert labels the clusters formed by an unsupervised learning algorithm. In addition, the software quality knowledge learnt during the semisupervised process provided good generalization performance for multiple test data sets. An analysis of program modules that remain unlabeled subsequent to our semisupervised clustering scheme provided useful insight into the characteristics of their software attributes",2007,0, 3321,Estimating Error Rate in Defective Logic Using Signature Analysis,"As feature size approaches molecular dimensions and the number of devices per chip reaches astronomical values, VLSI manufacturing yield significantly decreases. This motivates interests in new computing models. One such model is called error tolerance. Classically, during the postmanufacturing test process, chips are classified as being bad (defective) or good. The main premise in error-tolerant computing is that some bad chips that fail classical go/no-go tests and do indeed occasionally produce erroneous results actually provide acceptable performance in some applications. Thus, new test techniques are needed to classify bad chips according to categories based upon their degree, of acceptability with respect to predetermined applications. One classification criterion is error rate. In this paper, we first describe a simple test structure that is a minor extension to current scan-test and built-in self-test structures and that can be used to estimate the error rate of a circuit. We then address three theoretical issues. 
First, we develop an elegant mathematical model that describes the key parameters associated with this test process and incorporates bounds on the error in estimating error rate and the level of confidence in this estimate. Next, we present an efficient testing procedure for estimating the error rate of a circuit under test. Finally, we address the problem of assigning bad chips to bins based on their error rate. We show that this can be done in an efficient, hence cost-effective, way and discuss the quality of our results in terms of such concepts as increase effective yield, yield loss, and test escape",2007,0, 3322,Search Algorithms for Regression Test Case Prioritization,"Regression testing is an expensive, but important, process. Unfortunately, there may be insufficient resources to allow for the reexecution of all test cases during regression testing. In this situation, test case prioritization techniques aim to improve the effectiveness of regression testing by ordering the test cases so that the most beneficial are executed first. Previous work on regression test case prioritization has focused on greedy algorithms. However, it is known that these algorithms may produce suboptimal results because they may construct results that denote only local minima within the search space. By contrast, metaheuristic and evolutionary search algorithms aim to avoid such problems. This paper presents results from an empirical study of the application of several greedy, metaheuristic, and evolutionary search algorithms to six programs, ranging from 374 to 11,148 lines of code for three choices of fitness metric. The paper addresses the problems of choice of fitness metric, characterization of landscape modality, and determination of the most suitable search technique to apply. The empirical results replicate previous results concerning greedy algorithms. They shed light on the nature of the regression testing search space, indicating that it is multimodal. The results also show that genetic algorithms perform well, although greedy approaches are surprisingly effective, given the multimodal nature of the landscape",2007,0, 3323,Early Software Reliability Prediction Using Cause-effect Graphing Analysis,"Early prediction of software reliability can help organizations make informed decisions about corrective actions. To provide such early prediction, we propose practical methods to: 1) systematically identify defects in a software requirements specification document using a technique derived from cause-effect graphing analysis (CEGA); 2) assess the impact of these defects on software reliability using a recursive algorithm based on binary decision diagram (BDD) technique. Using a numerical example, we show how predicting software reliability at the requirement analysis stage could be greatly facilitated by the use of the method presented in this paper. The acronyms used throughout this paper are alphabetically listed as follows: ACEG-actually implemented cause effect graph; BCEG-benchmark cause effect graph; BDD-binary decision diagram; CEGA-cause effect graphing analysis; PACS-personal access control system; SRS-software requirements specification document",2007,0, 3324,An Efficient Algorithm To Analyze New Imperfect Fault Coverage Models,"Fault tolerance has been an essential architectural attribute for achieving high reliability in many critical applications of digital systems. 
Automatic recovery and reconfiguration mechanisms play a crucial role in implementing fault tolerance because an uncovered fault may lead to a system or subsystem failure even when adequate redundancy exists. In addition, an excessive level of redundancy may even reduce the system reliability. Therefore, an accurate analysis must account for not only the system structure but also the system fault and error handling behavior. The models that capture the fault and error handling behavior are called coverage models. The appropriate coverage modeling approach depends on the type of fault tolerant techniques used. Recent research emphasizes the importance of two new categories of coverage models: Fault Level Coverage (FLC) models and one-on-one level coverage (OLC) models. However, the methods for solving FLC and OLC models are much more limited, primarily because of the complex nature of the dependency introduced by the reconfiguration mechanisms. In this paper, we propose an efficient algorithm for solving FLC and OLC models.",2007,0, 3325,Failure Time Based Reliability Growth in Product Development and Manufacturing,"The failure-in-time (FIT) rate is widely used to quantify the reliability of an electronic component. It fails to indicate the portion of the failures due to either environmental or electrical stresses or issues that are related to process/handling, manufacturing and applications. To this end, FIT-based corrective action driven metrics are proposed to link the failure mode (FM) with components and non-component faults. First, the conventional failure mode Pareto is reviewed and its deficiency is discussed. Then a new index called the failure mode rate (FMR) is introduced to monitor the FM trend and evaluate the effectiveness of corrective actions (C/As). Based on the FMR, the FIT rate is extended to non-component failure mode and further to individual failure mode in predicting the reliability of electronic products. The extended FIT rate enables product designers to narrow down the root-cause of the failure, identify the C/A ownership, and estimate the MTBF improvement. The new metrics provide a guideline for prioritizing resources to attack the critical failures with the minimum cost.",2007,0, 3326,High Computational Performance in Code Excited Linear Prediction Speech Model Using Faster Codebook Search Techniques,"The most commonly used coder for producing good quality speech at rates below 10 kbits/s is code excited linear prediction (CELP). In this time domain analysis-by-synthesis coder, the excitation signal is chosen by attempting to match the reconstructed speech waveform as closely as possible to the original speech waveform by searching through a large vector quantizer codebook. The closest matching excitation signal (taken from code book) along with pitch, linear prediction coefficients (LPC) and pitch lag are typically transmitted from the encoder side in order to reconstruct the input speech signal. Codebook search is identified as the computationally most complex module and several methods to implement the same are described along with complexity reduction due to each method",2007,0, 3327,"Saying, ""I Am Testing,"" is Enough to Improve the Product: An Empirical Study","We report on an empirical study performed in an industrial environment in which the impact of modifying test practices is measured. First, the efficiency of the existing test practices is measured, by analyzing the software defects using a modified version of Orthogonal Defect Classification (ODC).
The study was conducted in two phases lasting seven months each. In the first phase (Phase A), defects were recorded using existing implicit testing practices. Developers were then required to make those testing activities explicit based on a simple scheme. After a few months of employing this explicit testing practice, a new defect recording phase (Phase B), which also lasted seven months, was conducted. The impacts of explicit test practices are presented in terms of the ratios of defects found by the users and the nature of the testing activities performed. Results show that even a small improvement, such as requiring testing activities to be explicit, can have a measurable positive impact on the quality of the product.",2007,0, 3328,RPB in Software Testing,"Software testing is one of the crucial activities in the system development life cycle. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. In Motorola, Automation Testing is a methodology used by GSG-iSGT (Global Software Group - iDEN Subscriber Group Testing) to increase the testing volume, productivity and reduce test cycle-time in cell phone software testing. In addition, automated stress testing can make the products more robust before release to the market. In this paper, the author discusses one of the automation stress testing tools that iSGT uses, which is RPB (Record and PlayBack). RPB is a platform specific desktop rapid test-case creation tool. This tool is able to capture all the information on the phone in order to achieve the highest accuracy in testing. Furthermore, RPB allows users to do testing at any time, anywhere provided there is an Internet connection. The authors also discuss the value that automation has brought to iDEN phone testing together with the metrics. We will also look into the advantages of the proposed system and some discussion of the future work.",2007,0, 3329,Thread-Sensitive Pointer Analysis for Inter-Thread Dataflow Detection,"Inter-thread dataflows are of great importance in analyzing concurrent programs. Suffering from the complexity of concurrency, it is still difficult to identify them from Java programs precisely and efficiently. A fundamental problem in inter-thread dataflow analysis is pointer analysis. Currently, pointer analyses can be classified into context-sensitive ones and context-insensitive ones. They are either too time consuming or too imprecise and hence not suitable for scalable and high precision inter-thread dataflow analysis. To change the situation, this paper proposes a thread-sensitive pointer analysis algorithm which only uses sequences of thread starts as calling contexts. As the major objective in inter-thread dataflow analysis is analyzing inter-thread dataflows rather than intra-thread ones, the coarse-grained context-sensitivity can not only effectively speed up the analysis, but also preserve the analysis precision as much as possible",2007,0, 3330,XCML: A Runtime Representation for the Context Modelling Language,"The context modelling language (CML), derived from object role modeling (ORM), is a powerful approach for capturing the pertinent object types and relationships between those types in context-aware applications. Its support for data quality metrics, context histories and fact type classifications makes it an ideal design tool for context-aware systems. However, CML currently lacks a suitable representation for exchanging context models and instances in distributed systems.
A runtime representation can be used by context-aware applications and supporting infrastructure to exchange context information and models between distributed components, and it can be used as the storage representation when relational database facilities are not present. This paper shows the benefits of using CML for modelling context as compared to commonly used RDF/OWL-based context models, shows that translations of CML to RDF or OWL are lossy, discusses existing techniques for serialising ORM models, and presents an alternative XML-based representation for CML called XCML",2007,0, 3331,A Probabilistic Approach to Predict Changes in Object-Oriented Software Systems,"Predicting the changes in the next release of a software system has become a quest during its maintenance phase. Such a prediction can help managers to allocate resources more appropriately which results in reducing costs associated with software maintenance activities. A measure of change-proneness of a software system also provides a good understanding of its architectural stability. This research work proposes a novel approach to predict changes in an object oriented software system. The rationale behind this approach is that in a well-designed software system, feature enhancement or corrective maintenance should affect a limited amount of existing code. The goal is to quantify this aspect of quality by assessing the probability that each class will change in a future generation. Our proposed probabilistic approach uses the dependencies obtained from the UML diagrams, as well as other data extracted from source code of several releases of a software system using reverse engineering techniques. The proposed systematic approach has been evaluated on a multi-version medium size open source project namely JFlex, the fast scanner generator for Java. The obtained results indicate the simplicity and accuracy of our approach in the comparison with existing methods in the literature",2007,0, 3332,NAES: A Natural Adaptive Exponential Smoothing for Channel Prediction in WLANs,"The advent of wireless networks has increased the demand for research greatly. Context-aware applications must adapt to the environment in which they are inserted, and, for this, information on both device's hardware and the characteristics of the environment is crucial. In this work, we propose a method - called natural adaptive exponential smoothing (NAES) - to describe and forecast, in real time, the channel behavior of IEEE 802.11 WLAN networks. The NAES method uses a variation of the exponential smoothing technique to compute the channel quality indicators, namely the received signal strength (RSS) and the link quality. A comparison with the results obtained by the Trigg and Leach (1967)(TL) method shows that NAES outperforms TL method.",2007,0, 3333,System Level Performance Assessment of SOC Processors with SystemC,"This paper presents a system level methodology for modeling, and analyzing the performance of system-on-chip (SOC) processors. The solution adopted focuses on minimizing assessment time by modeling processors behavior only in terms of the performance metrics of interest. Formally, the desired behavior is captured through a C/C++ executable model, which uses finite state machines (FSM) as the underlying model of computation (MOC). To illustrate and validate our methodology we applied it to the design of a 16-bit reduced instruction set (RISC) processor. 
The performance metrics used to assess the quality of the design considered are power consumption and execution time. However, the methodology can be extended to any performance metric. The results obtained demonstrate the robustness of the proposed method both in terms of assessment time and accuracy",2007,0, 3334,Evaluating the Quality of Models Extracted from Embedded Real-Time Software,"Due to the high cost of modeling, model-based techniques are yet to make their impact in the embedded systems industry, which still persist on maintaining code-oriented legacy systems. Re-engineering existing code-oriented systems to fit model-based development is a risky endeavor due to the cost and efforts required to maintain correspondence between the code and model. We aim to reduce the cost of modeling and model maintenance by automating the process, thus facilitating model-based techniques. We have previously proposed the use of automatic model extraction from recordings of existing embedded real-time systems. To estimate the quality of the extracted models of timing behavior, we need a framework for objective evaluation. In this paper, we present such a framework to empirically test and compare extracted models, and hence obtain an implicit evaluation of methods for automatic model extraction. We present a set of synthetic benchmarks to be used as test cases for emulating timing behaviors of diverse systems with varying architectural styles, and extract automatic models out of them. We discuss the difficulties in comparing response time distributions, and present an intuitive and novel approach along with associated algorithms for performing such a comparison. Using our empirical framework, and the comparison algorithms, one could objectively determine the correspondence between the model and the system being modeled",2007,0, 3335,Design of a Window Comparator with Adaptive Error Threshold for Online Testing Applications,This paper presents a novel window comparator circuit whose error threshold can be adaptively adjusted according to its input signal levels. It is ideal for analog online testing applications. Advantages of adaptive comparator error thresholds over constant or relative error thresholds in analog testing applications are discussed. Analytical equations for guiding the design of proposed comparator circuitry are derived. The proposed comparator circuit has been designed and fabricated using a CMOS 0.18mu technology. Measurement results of the fabricated chip are presented,2007,0, 3336,Optimization of Behavioral Modeling for Codesign of Embedded System,"Behavioral modeling for codesign system is transformed into internal model known as control/data flow graph and produces a register-transfer-level (RTL) model of the hardware implementation for a given schedule. The internal model for codesign system is partitioned after scheduling and its communication cost is evaluated. The communication between components is through the buffered channels, the size of the buffer is estimated by its edge cut-set and system delay for different models are achieved to measure the quality of partitioning as opposed to general partitioning approaches that use number of nodes in each partition as constraint. In this paper scheduling and allocation algorithm (SAA) discusses helpful optimization method for resource-constrained and time constrained system in high level synthesis tool. 
The approach is based on data path for CDFG model that capture the design information from the source file specified by VHDL language from its equivalent separate Control flow graph and data flow graph. The proposed algorithm is also compared with other algorithm through estimation of schedules with a benchmark example. The buffer size is calculated with different objectives in partitioning and optimum partitioning is proposed",2007,0, 3337,Web Service Recommendation Based on Client-Side Performance Estimation,"Quality of service (QoS) is one of the important factors to be considered when choosing a Web service (WS). This paper focuses on performance which is one of the most important QoS attributes. Currently WS recommendation and selection are based on the advertised server-side performance. Since WS clients reside in a heterogeneous environment, performance experienced by clients for the same WS can vary widely. Therefore, WS recommendation based on server-side performance may not be very effective. We propose a WS analysis and recommendation framework that takes into account environment heterogeneity. The analysis framework helps establish WS profiles and client profiles, which are used in the recommendation process to estimate the client-side performance and to determine the most suitable WS provider. By carrying out experiments in the real world (Internet) environment, we demonstrate the fundamental concepts related to client-side performance estimation. Some recommendation scenarios which utilize these estimation techniques are also demonstrated.",2007,0, 3338,Obstacles to Comprehension in Usage Based Reading,"Usage based reading (UBR) is a recent approach to object oriented software inspections. Like other scenario based reading (SBR) techniques it proposes a prescriptive reading procedure. However, the impact of such procedures upon comprehension is not well known, and consideration has not been given to established software cognition theories. This paper describes a study examining software comprehension in UBR inspections. Participants traced the events of a UML sequence diagram through Java source code while thinking aloud. An electronic interface collected real-time data, allowing the identification of ""points of interest"", which were categorised according to issues affecting participants' performance. Together with indicators of participants' cognitive processes, this suggests that adherence to UBR scenarios is non-trivial. While UBR can detect more critical defects, we argue that a re-think of its prescriptive nature, including the use of cognition support, is required before it can become a practical reading technique.",2007,0, 3339,Enhancing Adaptive Random Testing through Partitioning by Edge and Centre,"Random testing (RT) is a simple but widely used software testing method. Recently, an approach namely adaptive random testing (ART) was proposed to enhance the fault-detection effectiveness of RT. The basic principle of ART is to enforce random test cases as evenly spread over the input domain as possible. A variety of ART methods have been proposed, and some research has been conducted to compare them. It was found that some ART methods have a preference of selecting test cases from edges of the input domain over from the centre. As a result, these methods may not perform very well under some situations. In this paper, we propose an approach to alleviating the edge preference. 
We also conducted some simulations and the results confirm that our new approach can improve the effectiveness of these ART methods.",2007,0, 3340,Measuring the Strength of Indirect Coupling,"It is widely accepted that coupling plays an important role in software quality, particularly in the areas of software maintenance, so effort should be made to keep coupling levels to a minimum in order to reduce the complexity of the system. We have previously introduced the concept of ""indirect"" coupling - coupling formed by relationships/dependencies that are not directly evident - with the belief that high levels of indirect coupling can constitute greater costs to maintenance as it is harder to detect. In this paper we extend our previous studies by proposing metrics that can advance our understanding of the exact relationship between indirect coupling and maintainability. In particular, the metrics focus on the reflection of ""strength"" as it is a fundamental component of coupling. We present our observations on the results of applying the metrics to existing Java applications.",2007,0, 3341,Coupling Metrics for Predicting Maintainability in Service-Oriented Designs,"Service-oriented computing (SOC) is emerging as a promising paradigm for developing distributed enterprise applications. Although some initial concepts of SOC have been investigated in the research literature, and related technologies are in the process of adoption by an increasing number of enterprises, the ability to measure the structural attributes of service-oriented designs thus predicting the quality of the final software product does not currently exist. Therefore, this paper proposes a set of metrics for quantifying the structural coupling of design artefacts in service-oriented systems. The metrics, which are validated against previously established properties of coupling, are intended to predict the quality characteristic of maintainability of service-oriented software. This is expected to benefit both research and industrial communities as existing object-oriented and procedural metrics are not readily applicable to the implementation of service-oriented systems.",2007,0, 3342,Revisiting Hot Passive Replication,"Passive replication has been extensively studied in the literature. However, there is no comprehensive study yet with regard to its degree of communication synchrony. Therefore, we propose a new, detailed classification of hot passive replication protocols, including a survey of the fault tolerance and performance of each class",2007,0, 3343,Automating the Pluto Experience: An Examination of the New Horizons Autonomous Operations Subsystem,"New Horizons is a NASA sponsored mission to explore Pluto and its largest moon Charon. The New Horizons spacecraft, designed, built and operated by the Johns Hopkins University Applied Physics Laboratory (APL), was successfully launched in January 2006. New Horizons' closest encounter with Pluto will occur in the summer of 2015. Upon completion of its primary science objectives at the Pluto encounter, the spacecraft is expected to visit one or more Kuiper Belt objects in the outermost region of the solar system. This long duration mission requires high reliability and imposes some demanding fault management requirements upon the spacecraft. The spacecraft is highly redundant with onboard software that provides a rule based expert system for performing autonomous fault detection and recovery.
Generally referred to as Autonomy, this software's design was largely driven by two factors, the concept of operations for the mission and the level of redundancy in the spacecraft hardware. This paper examines the unique mission requirements that drove the Autonomy design. It discusses how the Autonomy system supports all of the various phases of the mission and provides examples of how unique mission requirements of the New Horizons mission were implemented.",2007,0, 3344,Fault-Tolerant 2D Fourier Transform with Checksum Encoding,"Space-based applications increasingly require more computational power to process large volumes of data and alleviate the downlink bottleneck. In addressing these demands, commercial-off-the-shelf (COTS) systems can serve a vital role in achieving performance requirements. However, these technologies are susceptible to radiation effects in the harsh environment of space. In order to effectively exploit high-performance COTS systems in future spacecraft, proper care must be taken with hardware and software architectures and algorithms that avoid or overcome the data errors that can lead to erroneous results. One of the more common kernels in space-based applications is the 2D fast Fourier transform (FFT). Many papers have investigated fault-tolerant FFT, but no algorithm has been devised that would allow for error correction without re-computation from original data. In this paper, we present a new method of applying algorithm-based fault tolerance (ABFT) concepts to the 2D-FFT that will not only allow for error detection but also error correction within memory-constrained systems as well as ensure coherence of the data after the computation. To further improve reliability of this ABFT approach, we propose use of a checksum encoding scheme that addresses issues related to numerical precision and overflow. The performance of the fault-tolerant 2D-FFT will be presented and featured as part of a dependable range Doppler processor, which is a subcomponent of synthetic-aperture radar algorithms. This work is supported by the Dependable Multiprocessor project at Honeywell and the University of Florida, one of the experiments in the Space Technology 8 (ST-8) mission of NASA's New Millennium Program.",2007,0, 3345,"Using Parallel Processing Tools to Predict Rotorcraft Performance, Stability, and Control","This paper discusses the development of the High Performance Computing (HPC) Collaborative Simulation and Test (CST) portfolio CST-03 program, one of the projects in the Common HPC Software Support Initiative (CHSSI) portfolio. The objective of this development was to provide computationally scalable tools to predict rotorcraft performance, stability, and control. The ability to efficiently predict and optimize vehicle performance, stability, and control from high fidelity computer models would greatly enhance the design and testing process and improve the quality of systems acquisition. Through this CHSSI development, the US Navy Test Pilot School performance, stability, and control test procedures were fully implemented in a high performance parallel computing environment. These Navy flight test support options were parallelized, implemented, and validated in the FLIGHTLAB comprehensive, multidisciplinary modeling environment. These tools were designed to interface with other CST compatible models and a standalone version of the tools (FLIGHTLAB-ASPECT) was delivered for use independent of the FLIGHTLAB development system. 
Tests on the MAUI Linux cluster indicated that there was over 25 times speedup using 32 CPUs. The tests also met the accuracy criteria as defined for the Beta trial.",2007,0, 3346,Rate Control for H.264 Video With Enhanced Rate and Distortion Models,"A new rate control scheme for H.264 video encoding with enhanced rate and distortion models is proposed in this work. Compared with existing H.264 rate control schemes, our scheme has offered several new features. First, the inter-dependency between rate-distortion optimization (RDO) and rate control in H.264 is resolved via quantization parameter estimation and update. Second, since the bits of the header information may occupy a larger portion of the total bit budget, which is especially true when being coded at low bit rates, a rate model for the header information is developed to estimate header bits more accurately. The number of header bits is modeled as a function of the number of nonzero motion vector (MV) elements and the number of MVs. Third, a new source rate model and a distortion model are proposed. For this purpose, coded 4 times 4 blocks are identified and the number of source bits and distortion are modeled as functions of the quantization stepsize and the complexity of coded 4 times 4 blocks. Finally, a R-D optimized bit allocation scheme among macroblocks (MBs) is proposed to improve picture quality. Built upon the above ideas, a rate control algorithm is developed for the H.264 baseline-profile encoder under the constant bit rate constraint. It is shown by experimental results that the new algorithm can control bit rates accurately with the R-D performance significantly better than that of the rate control algorithm implemented in the H.264 software encoder JM8.1a",2007,0, 3347,ESTIMATING LEUKOCYTE VELOCITIES FROM HIGH-SPEED 1D LINE SCANS ORIENTED ORTHOGONAL TO BLOOD FLOW,"The fast and direct measurement of leukocyte behaviour in microvessels in situ is possible by labelling them with fluorescent compounds and subsequently imaging them by confocal laser scanning microscopy. Using a high-speed single line scanning technique a time dependent trace of leukocytes in microvessels can be recorded (x-t scan). These x-t traces, which are orientated orthogonal to the blood flow, are characterized by distinct, but often noise-affected elliptical streaks, which carry information on speed and frequency of leukocytes crossing the scan line. Here we describe the combination of a confocal laser scanning system and an image analysis software package that allows rapid and reproducible estimation of leukocyte velocities in microvessels",2007,0, 3348,Design Opportunity Tree for Requirement Management and Software Process Improvement,"There are many risk items of the requirement phase that cause the defect occurring in the latter phase and the quality problems during process management and project progress. This paper designs the opportunity tree that requirement manage the defects and their problems solution as well. For the similar projects, we can estimate defects and prepare to solve them by using domain expert knowledge and the Opportunity Tree, which can greatly improve the software process. And this paper identifies risk items to produce reliable software and analyzes in the requirements phase. Also, this paper is intended to develop the relationship between defects and their causes to introduce. 
Therefore, using defect causes, we understand the associated relationship between defect triggers and design Opportunity Tree to manage the defects.",2007,0, 3349,Scientific programming with Java classes supported with a scripting interpreter,"jLab environment provides a Matlab/Scilab like scripting language that is executed by an interpreter, implemented in the Java language. This language supports all the basic programming constructs and an extensive set of built in mathematical routines that cover all the basic numerical analysis tasks. Moreover, the toolboxes of jLab can be easily implemented in Java and the corresponding classes can be dynamically integrated to the system. The efficiency of the Java compiled code can be directly utilised for any computationally intensive operations. Since jLab is coded in pure Java, the build from source process is much cleaner, faster, platform independent and less error prone than the similar C/C++/Fortran-based open source environments (e.g. Scilab and Octave). Neuro-Fuzzy algorithms can require enormous computation resources and at the same time an expressive programming environment. The potentiality of jLab is demonstrated by describing the implementation of a Support Vector Machine toolkit and by comparing its performance with a C/C++ and a Matlab version and across different computing platforms (i.e. Linux, Sun/Solaris and Windows XP)",2007,0, 3350,Composite Event Detection in Wireless Sensor Networks,"Sensor networks can be used for event alarming applications. To date, in most of the proposed schemes, the raw or aggregated sensed data is periodically sent to a data consuming center. However, with this scheme, the occurrence of an emergency event such as a fire is hardly reported in a timely manner which is a strict requirement for event alarming applications. In sensor networks, it is also highly desired to conserve energy so that the network lifetime can be maximized. Furthermore, to ensure the quality of surveillance, some applications require that if an event occurs, it needs to be detected by at least k sensors where k is a user-defined parameter. In this work, we examine the timely energy-efficient k-watching event detection problem (TEKWEO). A topology-and-routing-supported algorithm is proposed which constructs a set of detection sets that satisfy the short notification time, energy conservation, and tunable quality of surveillance requirements for event alarming applications. Simulation results are shown to validate the proposed algorithm.",2007,0, 3351,Accurate Software-Related Average Current Drain Measurements in Embedded Systems,"Performing accurate average current drain measurements of digital programmable components (e.g., microcontrollers, digital signal processors, System-on-Chip, or wireless modules) is a critical and error-prone measurement problem for embedded system manufacturers due to the impulsive time-varying behavior of the current waveforms drawn from a battery in real operating conditions. In this paper, the uncertainty contributions affecting the average current measurements when using a simple and inexpensive digital multimeter are analyzed in depth. Also, a criterion to keep the standard measurement uncertainty below a given threshold is provided. The theoretical analysis is validated by means of meaningful experimental results",2007,0, 3352,Automatic Instruction-Level Software-Only Recovery,"Software-only reliability techniques protect against transient faults without the overhead of hardware techniques. 
Although existing low-level software-only fault-tolerance techniques detect faults, they offer no recovery assistance. This article describes three automatic, instruction-level, software-only recovery techniques representing different trade-offs between reliability and performance",2007,0, 3353,QoS Management of Real-Time Data Stream Queries in Distributed Environments,"Many emerging applications operate on continuous unbounded data streams and need real-time data services. Providing deadline guarantees for queries over dynamic data streams is a challenging problem due to bursty stream rates and time-varying contents. This paper presents a prediction-based QoS management scheme for real-time data stream query processing in distributed environments. The prediction-based QoS management scheme features query workload estimators, which predict the query workload using execution time profiling and input data sampling. In this paper, we apply the prediction-based technique to select the proper propagation schemes for data streams and intermediate query results in distributed environments. The performance study demonstrates that the proposed solution tolerates dramatic workload fluctuations and saves significant amounts of CPU time and network bandwidth with little overhead",2007,0, 3354,Independent Model-Driven Software Performance Assessments of UML Designs,"In many software development projects, performance requirements are not addressed until after the application is developed or deployed, resulting in costly changes to the software or the acquisition of expensive high-performance hardware. To remedy this, researchers have developed model-driven performance analysis techniques for assessing how well performance requirements are being satisfied early in the software lifecycle. In some cases, companies may not have the expertise to perform such analysis on their software; therefore they have an independent assessor perform the analysis. This paper describes an approach for conducting independent model-driven software performance assessments of UML 2.0 designs and illustrates this approach using a real-time signal generator as a case study",2007,0, 3355,An Analysis of Performance Interference Effects in Virtual Environments,"Virtualization is an essential technology in modern datacenters. Despite advantages such as security isolation, fault isolation, and environment isolation, current virtualization techniques do not provide effective performance isolation between virtual machines (VMs). Specifically, hidden contention for physical resources impacts performance differently in different workload configurations, causing significant variance in observed system throughput. To this end, characterizing workloads that generate performance interference is important in order to maximize overall utility. In this paper, we study the effects of performance interference by looking at system-level workload characteristics. In a physical host, we allocate two VMs, each of which runs a sample application chosen from a wide range of benchmark and real-world workloads. For each combination, we collect performance metrics and runtime characteristics using an instrumented Xen hypervisor. Through subsequent analysis of collected data, we identify clusters of applications that generate certain types of performance interference. Furthermore, we develop mathematical models to predict the performance of a new application from its workload characteristics.
Our evaluation shows our techniques were able to predict performance with average error of approximately 5%",2007,0, 3356,Automatic Application Specific Floating-point Unit Generation,"This paper describes the creation of custom floating point units (FPUs) for application specific instruction set processors (ASIPs). ASIPs allow the customization of processors for use in embedded systems by extending the instruction set, which enhances the performance of an application or a class of applications. These extended instructions are manifested as separate hardware blocks, making the creation of any necessary floating point instructions quite unwieldy. On the other hand, using a predefined FPU includes a large monolithic hardware block with a considerable number of unused instructions. A customized FPU will overcome these drawbacks, yet the manual creation of one is a time consuming, error prone process. This paper presents a methodology for automatically generating floating-point units (FPUs) that are customized for specific applications at the instruction level. Generated FPUs comply with the IEEE754 standard, which is an advantage over FP format customization. Custom FPUs were generated for several Mediabench applications. Area savings over a fully-featured FPU of 26%-80% without resource sharing and 33%-87% with resource sharing were obtained. Clock period increased in some cases by up to 9.5% due to resource sharing",2007,0, 3357,Design Fault Directed Test Generation for Microprocessor Validation,"Functional validation of modern microprocessors is an important and complex problem. One of the problems in functional validation is the generation of test cases that have higher potential to find faults in the design. We propose a model based test generation framework that generates tests for design fault classes inspired from software validation. There are two main contributions in this paper. Firstly, we propose a microprocessor modeling and test generation framework that generates test suites to satisfy modified condition decision coverage (MCDC), a structural coverage metric that detects most of the classified design faults as well as the remaining faults not covered by MCDC. Secondly, we show that there exists good correlation between types of design faults proposed by software validation and the errors/bugs reported in case studies on microprocessor validation. We demonstrate the framework by modeling and generating tests for the microarchitecture of VESPA, a 32-bit microprocessor. In the results section, we show that the tests generated using our framework's coverage directed approach detect the fault classes with 100% coverage, when compared to model-random test generation",2007,0, 3358,Microarchitectural Support for Program Code Integrity Monitoring in Application-specific Instruction Set Processors,"Program code in a computer system can be altered either by malicious security attacks or by various faults in microprocessors. At the instruction level, all code modifications are manifested as bit flips. In this work, we present a generalized methodology for monitoring code integrity at run-time in application-specific instruction set processors (ASIPs), where both the instruction set architecture (ISA) and the underlying micro architecture can be customized for a particular application domain. We embed monitoring microoperations in machine instructions, thus the processor is augmented with a hardware monitor automatically.
The monitor observes the processor's execution trace of basic blocks at run-time, checks whether the execution trace aligns with the expected program behavior, and signals any mismatches. Since microoperations are at a lower software architecture level than processor instructions, the microarchitectural support for program code integrity monitoring is transparent to upper software levels and no recompilation or modification is needed for the program. Experimental results show that our microarchitectural support can detect program code integrity compromises with small area overhead and little performance degradation",2007,0, 3359,Algorithms for BER-Constrained Variable-Length Equalizers driven by Channel Response Knowledge over Frequency-Selective Radio Channel,"In mobile radio systems, transmission conditions actually encounter large variations depending on the effective environmental configurations. Training sequences are periodically inserted into the transmitted messages so that the receiver can estimate the channel response. Based on this knowledge, we study the problem of adapting the equalizer filter length to a given channel impulse response under bit error rate constraint. In this paper, both linear equalizer (LE) and decision feedback equalizer (DFE) are studied. Accurate control of the equalizer length can improve the system's performance while reducing the handset power consumption or reducing software load in a software radio context. Besides, in a multi-service system, it allows to manage variable quality of service requirements. Optimal and suboptimal criteria for determining the equalizer length are discussed in order to optimize the overall complexity of the receiver including training phase and decoding phases.",2007,0, 3360,Reliability Analysis of Self-Healing Network using Discrete-Event Simulation,"The number of processors embedded on high performance computing platforms is continuously increasing to accommodate user desire to solve larger and more complex problems. However, as the number of components increases, so does the probability of failure. Thus, both scalable and fault-tolerance of software are important issues in this field. To ensure reliability of the software especially under the failure circumstance, the reliability analysis is needed. The discrete-event simulation technique offers an attractive alternative to traditional Markovian-based analytical models, which often have an intractably large state space. In this paper, we analyze reliability of a self-healing network developed for parallel runtime environments using discrete-event simulation. The network is designed to support transmission of messages across multiple nodes and at the same time, to protect against node and process failures. Results demonstrate the flexibility of a discrete-event simulation approach for studying the network behavior under failure conditions and various protocol parameters, message types, and routing algorithms.",2007,0, 3361,A Comprehensive Empirical Study of Count Models for Software Fault Prediction,"Count models, such as the Poisson regression model, and the negative binomial regression model, can be used to obtain software fault predictions. With the aid of such predictions, the development team can improve the quality of operational software. The zero-inflated, and hurdle count models may be more appropriate when, for a given software system, the number of modules with faults are very few. 
Related literature lacks quantitative guidance regarding the application of count models for software quality prediction. This study presents a comprehensive empirical investigation of eight count models in the context of software fault prediction. It includes comparative hypothesis testing, model selection, and performance evaluation for the count models with respect to different criteria. The case study presented is that of a full-scale industrial software system. It is observed that the information obtained from hypothesis testing, and model selection techniques was not consistent with the predictive performances of the count models. Moreover, the comparative analysis based on one criterion did not match that of another criterion. However, with respect to a given criterion, the performance of a count model is consistent for both the fit, and test data sets. This ensures that, if a fitted model is considered good based on a given criterion, then the model will yield a good prediction based on the same criterion. The relative performances of the eight models are evaluated based on a one-way anova model, and Tukey's multiple comparison technique. The comparative study is useful in selecting the best count model for estimating the quality of a given software system",2007,0, 3362,An Assessment of Testing-Effort Dependent Software Reliability Growth Models,"Over the last several decades, many Software Reliability Growth Models (SRGM) have been developed to greatly facilitate engineers and managers in tracking and measuring the growth of reliability as software is being improved. However, some research work indicates that the delayed S-shaped model may not fit the software failure data well when the testing-effort spent on fault detection is not a constant. Thus, in this paper, we first review the logistic testing-effort function that can be used to describe the amount of testing-effort spent on software testing. We describe how to incorporate the logistic testing-effort function into both exponential-type, and S-shaped software reliability models. The proposed models are also discussed under both ideal, and imperfect debugging conditions. Results from applying the proposed models to two real data sets are discussed, and compared with other traditional SRGM to show that the proposed models can give better predictions, and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type, and S-shaped software reliability models",2007,0, 3363,Count Models for Software Quality Estimation,"Identifying which software modules, during the software development process, are likely to be faulty is an effective technique for improving software quality. Such an approach allows a more focused software quality & reliability enhancement endeavor. The development team may also like to know the number of faults that are likely to exist in a given program module, i.e., a quantitative quality prediction. However, classification techniques such as the logistic regression model (lrm) cannot be used to predict the number of faults. In contrast, count models such as the Poisson regression model (prm), and the zero-inflated Poisson (zip) regression model can be used to obtain both a qualitative classification, and a quantitative prediction for software quality. In the case of the classification models, a classification rule based on our previously developed generalized classification rule is used. 
In the context of count models, this study is the first to propose a generalized classification rule. Case studies of two industrial software systems are examined, and for each we developed two count models, (prm, and zip), and a classification model (lrm). Evaluating the predictive capabilities of the models, we concluded that the prm, and the zip models have similar classification accuracies as the lrm. The count models are also used to predict the number of faults for the two case studies. The zip model yielded better fault prediction accuracy than the prm. As compared to other quantitative prediction models for software quality, such as multiple linear regression (mlr), the prm, and zip models have a unique property of yielding the probability that a given number of faults will occur in any module",2007,0, 3364,A Multi-Objective Software Quality Classification Model Using Genetic Programming,"A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the ""Modified Expected Cost of Misclassification"", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model",2007,0, 3365,Efficient Analysis of Systems with Multiple States,"A multistate system is a system in which both the system and its components may exhibit multiple performance levels (or states) varying from perfect operation to complete failure. Examples abound in real applications such as communication networks and computer systems. Analyzing the probability of the system being in each state is essential to the design and tuning of dependable multistate systems. The difficulty in analysis arises from the non-binary state property of the system and its components as well as dependence among those multiple states. This paper proposes a new model called multistate multivalued decision diagrams (MMDD) for the analysis of multistate systems with multistate components. The computational complexity of the MMDD-based approach is low due to the nature of the decision diagrams. 
An example is analyzed to illustrate the application and advantages of the approach.",2007,0, 3366,Sentient Networks: A New Dimension in Network Capability,"As computer networks evolve to the next generation, they offer flexible infrastructures for communication as well as high vulnerability. In order to sustain the reliability, to prevent misuse, and to provide radically new services, these networks need new capabilities in non-traditional domains of operation such as perception and consciousness. Therefore, we propose Sentient Networks, as an early approach towards realizing them. There are several applications for such networking capability. For one, sentient networks can enable packet detection in a blackbox manner, detect encryption of data transferred, detecting and classifying data types such as voice, video, and data. Some of the applications of such capabilities, for example, include the use of emotion content of the voice packets through a network and classifying these packets into normal, moderately panicked, and extremely panicked. Such classifications can be used to provide better QoS schemes for panicked callers. We obtained a packet detection accuracy of about 65-70% for data packets and about 98% accuracy for detecting encrypted TCP packets. The emotion-aware QoS provision mechanism provided approximately 60% improvement in delay performance for voice packets generated by panicked sources.",2007,0, 3367,Detecting VLIW Hard Errors Cost-Effectively through a Software-Based Approach,"Research indicates that as technology scales, hard errors such as wear-out errors are increasingly becoming a critical challenge for microprocessor design. While hard errors in memory structures can be efficiently detected by error correction code, detecting hard errors for functional units cost-effectively is a challenging problem. In this paper, we propose to exploit the idle cycles of the under-utilized VLIW functional units to run test instructions for detecting wear-out errors without increasing the hardware cost or significantly impacting performance. We also explore the design space of this software-based approach to balance the error detection latency and the performance for VLIW architectures. Our experimental results indicate that such a software-based approach can effectively detect hard errors with minimum impact on performance for VLIW processors, which is particularly useful for reliable embedded applications with cost constraints.",2007,0, 3368,Developing Metrics for Evolving a Musically Satisfying Result from a Population of Computer Generated Music Compositions,"Creating a computational metric of the quality and musicality of a computer generated music composition is a challenging problem. This paper describes a possible way to develop numerically based metrics for standardizing the fitness evaluation of a population of computer generated music compositions. These metrics are based on assertions by music theorists about the dynamic forces within music that provides the music with a sense of motion and phrasing. The actions of several musical parameters are ranked according to the sense of intensity they produce. This process is applied to an acoustic music composition, and the values are then plotted on a graph. 
The composite curve from these values give the evaluator a sense of the patterns of lower and higher degrees of intensity, thus providing a sense of how well the music provides a sense of motion and phrasing",2007,0, 3369,On the Automatic Generation of Test Programs for Path-Delay Faults in Microprocessor Cores,"Delay testing is mandatory for guaranteeing the correct behavior of today's high-performance microprocessors. Several methodologies have been proposed to tackle this issue resorting to additional hardware or to software self test techniques. Software techniques are particularly promising as they resort to Assembly programs in normal mode of operation, without requiring circuit modifications; however, the problem of generating effective and efficient test programs for path- delay fault detection is still open. This paper presents an innovative approach for the generation of path-delay self-test programs for microprocessors, based on an evolutionary algorithm and on ad-hoc software simulation/hardware emulation heuristic techniques. Experimental results show how the proposed methodology allows generating suitable test programs in reasonable times.",2007,0, 3370,The Future of Software Performance Engineering,"Performance is a pervasive quality of software systems; everything affects it, from the software itself to all underlying layers, such as operating system, middleware, hardware, communication networks, etc. Software Performance Engineering encompasses efforts to describe and improve performance, with two distinct approaches: an early-cycle predictive model-based approach, and a late-cycle measurement-based approach. Current progress and future trends within these two approaches are described, with a tendency (and a need) for them to converge, in order to cover the entire development cycle.",2007,0, 3371,Assessment of Package Cohesion and Coupling Principles for Predicting the Quality of Object Oriented Design,"In determining the quality of design two factors are important, namely coupling and cohesion. This paper highlights the principles of package architecture from cohesion and coupling point of view and discusses the method for extracting metric associated with them. The method is supported with the help of case study. The results arrived at from the case study are discussed further for utilizing them for predicting the quality of software.",2007,0, 3372,Company-Wide Implementation of Metrics for Early Software Fault Detection,"To shorten time-to-market and improve customer satisfaction, software development companies commonly want to use metrics for assessing and improving the performance of their development projects. This paper describes a measurement concept for assessing how good an organization is at finding faults when most cost-effective, i. e. in most cases early. The paper provides results and lessons learned from applying the measurement concept widely at a large software development company. A major finding was that on average, 64 percent of all faults found would have been more cost effective to find during unit tests. An in-depth study of a few projects at a development unit also demonstrated how to use the measurement concept for identifying which parts in the fault detection process that needs to be improved to become more efficient (e.g. 
reduce the amount of time spent on rework).",2007,0, 3373,On Sufficiency of Mutants,"Mutation is the practice of automatically generating possibly faulty variants of a program, for the purpose of assessing the adequacy of a test suite or comparing testing techniques. The cost of mutation often makes its application infeasible. The cost of mutation is usually assessed in terms of the number of mutants, and consequently the number of ""mutation operators"" that produce them. We address this problem by finding a smaller subset of mutation operators, called ""sufficient"", that can model the behaviour of the full set. To do this, we provide an experimental procedure and adapt statistical techniques proposed for variable reduction, model selection and nonlinear regression. Our preliminary results reveal interesting information about mutation operators.",2007,0, 3374,Interaction Analysis of Heterogeneous Monitoring Data for Autonomic Problem Determination,"Autonomic systems require continuous self-monitoring to ensure correct operation. Available monitoring data exists in a variety of formats, including log files, performance counters, traces, and state and configuration parameters. Such heterogeneity, together with the extremely large volume of data that could be collected, makes analysis very complex. To allow for more-effective problem determination, there is a need for a comprehensive integration of management data. In addition, monitoring should be adaptive to the current perceived operation of the system. In this paper we present an architecture to meet the above goals. We leverage an open-source XML-based format for data integration and describe an approach to automatically adjust monitoring for diagnosis when anomalies are detected. We have implemented a partial prototype using an Eclipse-based open-source platform. We show the effectiveness of our prototype based on fault-injection experiments. We also study issues of disparity of data formats, information overload, scalability, and automated problem determination.",2007,0, 3375,A Multi-Agent Multi-Tiered Approach To Information Fusion,"The amount of information available to decision makers today is astounding. To manage this, decision-aid software is needed that can organize and clarify large amounts of data. In this paper, we propose a multi-level, multi-agent architecture that accomplishes data fusion through layers encompassing initial data input and classification, secondary classification and grouping, and finally human-agent interaction. Small, limited-scope agents cooperate to complete the analysis/fusion process. This cooperation is mediated by corresponding opinions in the mode of belief calculus and subjective logic utilizing our evidential reasoning network (ERN). Thus, the user gains the ability to control and filter fusion activities based on the quality and pedigree of the underlying data. We present an initial set of agents along with a preliminary testing scenario and discuss the results of running the system through this scenario",2007,0, 3376,Fast Failure Detection in a Process Group,"Failure detectors represent a very important building block in distributed applications. The speed and the accuracy of the failure detectors is critical to the performance of the applications built on them. In a common implementation of failure detector based on heartbeats, there is a tradeoff between speed and accuracy so it is difficult to be both fast and accurate. 
Based on the observation that in many distributed applications, one process takes a special role as the leader, we propose a fast failure detection (FFD) algorithm that detects the failure of the leader both fast and accurately. Taking advantage of spatial multiple timeouts, FFD detects the failure of the leader within a time period of just a little more than one heartbeat interval, making it almost the fastest detection algorithm possible based on heartbeat messages. FFD could be used stand alone in a static configuration where the leader process is fixed at one site. In a dynamic setting, where the role of leader has to be assumed by another site if the current leader fails, FFD could be used in collaboration with a leader election algorithm to speed up the process of electing a new leader.",2007,0, 3377,Annotation Integration and Trade-off Analysis for Multimedia Applications,"Multimedia applications for mobile devices, such as video/audio streaming, process streams of incoming data in a regular, predictable way. Content-aware optimizations through annotations allow us to highly improve the power savings at the various levels of abstraction: hardware/OS, network, application. However, in a typical system there is a continuous interaction between the components of the system at all levels, which requires a careful analysis of the combined effect of the aforementioned techniques. We investigate such an interaction and we describe metrics for estimating the effect various trade-off have on power and quality. By applying our metrics at the various abstraction levels we show how better energy savings can be achieved with lower quality degradations, through power-quality trade-offs and cross-layer interaction.",2007,0, 3378,Management of Virtual Machines on Globus Grids Using GridWay,"Virtual machines are a promising technology to overcome some of the problems found in current grid infrastructures, like heterogeneity, performance partitioning or application isolation. In this work, we present straightforward deployment of virtual machines in globus grids. This solution is based on standard services and does not require additional middleware to be installed. Also, we assess the suitability of this deployment in the execution of a high throughput scientific application, the XMM-Newton scientific analysis system.",2007,0, 3379,Incorporating Latency in Heterogeneous Graph Partitioning,"Parallel applications based on irregular meshes make use of mesh partitioners for efficient execution. Some mesh partitioners can map a mesh to a heterogeneous computational platform, where processor and network performance may vary. Such partitioners generally model the computational platform as a weighted graph, where the weight of a vertex gives relative processor performance, and the weight of a link indicates the relative transmission rate of the link between two processors. However, the performance of a network link is typically characterized by two parameters, bandwidth and latency, which cannot be captured in a single weight. We show that taking into account the network heterogeneity of a computational resource can significantly improve the quality of a domain decomposition obtained using graph partitioning. Furthermore, we show that taking into account bandwidth and latency of the network links is significantly better than just considering the former. 
This work is presented as an extension to the PaGrid partitioner, and includes a model for estimated execution time, which is used as a cost function by the partitioner but could also be used for performance prediction by application-oriented schedulers.",2007,0, 3380,A Probabilistic Approach to Measuring Robustness in Computing Systems,"System builders are becoming increasingly interested in robust design. We believe that a methodology for generating robustness metrics helps the robust design research efforts and, in general, is an important step in the efforts to create robust computing systems. The purpose of the research in this paper is to quantify the robustness of a resource allocation, with the eventual objective of setting a standard that could easily be instantiated for a particular computing system to generate robustness metric. We present our theoretical foundation for robustness metric and give its instantiation for a particular system.",2007,0, 3381,Implementing Adaptive Performance Management in Server Applications,"Performance and scalability are critical quality attributes for server applications in Internet-facing business systems. These applications operate in dynamic environments with rapidly fluctuating user loads and resource levels, and unpredictable system faults. Adaptive (autonomic) systems research aims to augment such server applications with intelligent control logic that can detect and react to sudden environmental changes. However, developing this adaptive logic is complex in itself. In addition, executing the adaptive logic consumes processing resources, and hence may (paradoxically) adversely effect application performance. In this paper we describe an approach for developing high-performance adaptive server applications and the supporting technology. The Adaptive Server Framework (ASF) is built on standard middleware services, and can be used to augment legacy systems with adaptive behavior without needing to change the application business logic. Crucially, ASF provides built-in control loop components to optimize the overall application performance, which comprises both the business and adaptive logic. The control loop is based on performance models and allows systems designers to tune the performance levels simply by modifying high level declarative policies. We demonstrate the use of ASF in a case study.",2007,0, 3382,An Evolutionary Approach to Software Modularity Analysis,"Modularity determines software quality in terms of evolvability, changeability, maintainability, etc. and a module could be a vertical slicing through source code directory structure or class boundary. Given a modularized design, we need to determine whether its implementation realizes the designed modularity. Manually comparing source code modular structure with abstracted design modular structure is tedious and error-prone. In this paper, we present an automated approach to check the conformance of source code modularity to the designed modularity. Our approach uses design structure matrices (DSMs) as a uniform representation; it uses existing tools to automatically derive DSMs from the source code and design, and uses a genetic algorithm to automatically cluster DSMs and check the conformance. We applied our approach to a small canonical software system as a proof of concept experiment. 
The results supported our hypothesis that it is possible to check the conformance between source code structure and design structure automatically, and this approach has the potential to be scaled for use in large software systems.",2007,0, 3383,Assessing Module Reusability,"We propose a conceptual framework for assessing the reusability of modules. To do so, we define reusability of a module as the product of its functionality and its applicability. We then generalize the framework to the assessment of modularization techniques.",2007,0, 3384,Protein Attributes Microtuning System (PAMS): an effective tool to increase protein structure prediction by data purification,"Given the expense of more direct determinations, using machine-learning schemes to predict a protein secondary structure from the sequence alone remains an important methodology. To achieve significant improvements in prediction accuracy, the authors have developed an automated tool to prepare very large biological datasets, to be used by the learning network. By focusing on improvements in data quality and validation, our experiments yielded a highest prediction accuracy of protein secondary structure of 90.97%. An important additional aspect of this achievement is that the predictions are based on a template-free statistical modeling mechanism. The performance of each different classifier is also evaluated and discussed. In this paper a protein set of 232 protein chains are proposed to be used in the prediction. Our goal is to make the tools discussed available as services in part of a digital ecosystem that supports knowledge sharing amongst the protein structure prediction community.",2007,0, 3385,The Reduction of Simulation Software Execution Time for Models of Integrated Electric Propulsion Systems through Partitioning and Distribution,"Software time-domain simulation models are useful to the naval engineering community both for the system design of future vessels and for the in-service support of existing vessels. For future platforms, the existence of a model of the vessel's electrical power system provides a means of assessing the performance of the system against defined requirements. This could be at the stage of requirements definition, bid assessment or any subsequent stage in the design process. For in-service support of existing platforms, the existence of a model of the vessel's electrical power system provides a means of assessing the possible cause and effect of operational defects reported by ship's staff, or of assessing the possible future implications of some change in the equipment line-up or operating conditions for the vessel. Detailed high fidelity time-domain simulation of systems, however, can be problematic due to extended execution time. This arises from the model's mathematically stiff nature: models of Integrated Electric Propulsion systems can also require significant computational resource. A conventional time-domain software simulation model is only able to utilize a single computer processor at any one time. The duration of time required to obtain results from a software model could be significantly reduced if more computer processors were utilized simultaneously. This paper details the development of a distributed simulation environment. This environment provides a mechanism for partitioning a time-domain software simulation model and running it on a cluster of computer processors. The number of processors utilized in the cluster ranges between four and sixteen nodes. 
The benefit of this approach is that reductions in simulation duration are achievable by an appropriate choice of model partitioning. From an engineering perspective, any net timing reduction translates to an increase in the availability of data, from which more efficient analysis and design follows.",2007,0, 3386,"On the Role of Numerical Preciseness for Generalization, Classification, Type-1, and Type-2 Fuzziness","When performing data analysis on a computing device no mathematically idealized real number set IR is available. A basic resolution is given, so that a fuzzy model is in fact always a discrete model and not a continuous one. Due to the limited preciseness the computing device offers only a limited number of decimals in a limited discrete number space OR. This contribution considers effects on the generalization and fuzziness of data when replacing IR by IIR The effects are studied for data, that are numerically rounded or when intervals are considered. Often part of the data is missing or is of limited quality, so that it is of practical interest to consider the exact underlying space IIR and not the hypothetical space IR. We calculate precisely type-1 and type-2 fuzzy membership functions under preciseness assumptions of the elements in IIR.",2007,0, 3387,Software Reliability Models,"The reliability is one very important parameter of electronic devices, hardware, and applications software. We review in this paper basic reliability terms, software reliability models -predictive models, assessment models. We try to explain basic principles of software reliability models, their practical using in technical activities. We show also key assumptions on which software reliability models are based, specific assumptions related to each specific software reliability model and some examples of concrete software reliability models. In conclusions there are summary of knowledge about software reliability models.",2007,0, 3388,A Divergence-measure Based Classification Method for Detecting Anomalies in Network Traffic,"We present 'D-CAD,' a novel divergence-measure based classification method for anomaly detection in network traffic. The D-CAD method identifies anomalies by performing classification on features drawn from software sensors that monitor network traffic. We compare the performance of the D-CAD method with two classifier based anomaly detection methods implemented using supervised Bayesian estimation and supervised maximum-likelihood estimation. Results show that the area under receiver operating characteristic curve (AUC) of the D-CAD method is as high as 0.9524, compared to an AUC value of 0.9102 of the supervised maximum-likelihood estimation based anomaly detection method and to an AUC value of 0.8887 of the supervised Bayesian estimation based anomaly detection method.",2007,0, 3389,Fault Tolerant Control in NCS Medium Access Constraints,"This paper deals with the problem of fault-tolerant control of a Network Control System (NCS) for the case in which the sensors, actuators and controller are inter-connected via various Medium Access Control protocols which define the access scheduling and collision arbitration policies in the network and employing the so-called periodic communication sequence. A new procedure for controlling a system over a network using the concept of an NCS-Information-Packet is described which comprises an augmented vector consisting of control moves and fault flags. The size of this packet is used to define a Completely Fault Tolerant NCS. 
The fault-tolerant behaviour and control performance of this scheme is illustrated through the use of a process model and controller. The plant is controlled over a network using Model-based Predictive Control and implemented via MATLAB© and LABVIEW© software.",2007,0, 3390,Intelligent Monitoring System Based on the Embedded Technology: A Case Study,"Intelligent systems for condition monitoring, alarm diagnosis and fault detection have been widely used in modern industrial manufacturing, as they are capable of providing more efficient decision making support. This paper proposes a distributed intelligent monitoring system based on the embedded technology. Using advanced RISC machines microprocessor embedded with a real-time multi-task micro kernel, the developed system possesses the functions such as data acquisition, signal preprocessing, mass storage, realtime monitoring, and remote intelligent control. In virtue of a wireless CDMA communication network, it is especially appropriate to be applied to the circumstance where building a wired network is not feasible. Design and implementation about the hardware and software of the system are presented in detail. The high performance, reliability and security of the system have been validated in the case of remote intelligent monitoring system in industrial oil exploration and production. Due to its flexible structures of both software and hardware, the developed system is easy to be adapted to other applications.",2007,0, 3391,An Improved CFCSS Control Flow Checking Algorithm,"Satellite-borne embedded systems require the properties of low-powered and reliability in the spatial radiation environment. The control flow checking is an effective way for the running systems to prevent the broken-down caused by single event upsets. Control flow checking by software signatures (CFCSS) is a representative of pure software method that checks the control flow of a program using assigned signatures. Because of the existence of multiple-branch-in (MBI) nodes, the fault detection coverage may decrease in this algorithm. To overcome this shortcoming, an improved algorithm called improved control flow checking by software signatures (ICFCSS) that eliminates the MBI node by modifying the control flow graph (CFG) is presented in this paper. Fault injection experiments show that ICFCSS incurs higher fault detection coverage than CFCSS techniques, without significant performance decreasing.",2007,0, 3392,Fault Tolerant Signal Processing for Masking Transient Errors in VLSI Signal Processors,"This paper proposes fault tolerant signal processing strategies for achieving reliable performance in VLSI signal processors that are prone to transient errors due to increasingly smaller feature dimensions and supply voltages. The proposed methods are based on residue number system (RNS) coding, involving either hardware redundancy or multiple execution redundancy (MER) strategies designed to identify and overcome transient errors. 
RNS techniques provide powerful low-redundancy fault tolerance properties that must be introduced at VLSI design levels, whereas MER strategies generally require higher degrees of redundancy that can be introduced at software programming levels.",2007,0, 3393,A Comparative Analysis of the Influence of Methods for Outliers Detection on the Performance of Data Driven Models,"In this paper we describe, test, and compare the performance of a number of techniques used for outlier detection to improve modeling capabilities of soft sensors on the basis of the quality of available data. We analyze methods based on standard deviation of population, on residuals of a linear input-output regression, on the structure correlation of the data, on principal components and partial least squares (both linear and nonlinear) in multi dimensional space (2D, 3D, 4D), on Q and T2 statistics, on the distance of each observation from the mean of the data, and on the Mahalanobis distance. We apply techniques for outlier detection both on a fictitious model data and on real data acquired from a sulfur recovery unit of a refinery. We show that outlier removal almost always improves modeling capabilities of considered techniques.",2007,0, 3394,Ground Penetrating Radar: A Smart Sensor for the Evaluation of the Railway Trackbed,"Ground Penetrating Radar (GPR) has become an increasingly attractive method for the engineering community, in particular for shallow high-resolution applications such as railway trackbed evaluation. It is a non-destructive smart sensing technique, which can be applied dynamically to achieve a continuous profile of the trackbed structure. Due to recent hardware and software improvements, real time cursory analysis can be performed in the field. Based on collected field data, the present paper investigates the applicability of the GPR smart sensor system in terms of the railways trackbed assessment and concludes on the capability of the GPR sensing technique to assess adequately the ballast quality and the trackbed formation.",2007,0, 3395,A new FPGA-based edge detection system for the gridding of DNA microarray images,"A deoxyribonucleic acid (DNA) microarray is a collection of microscopic DNA spots attached to a solid surface, such as glass, plastic or silicon chip forming an array. DNA microarray technologies are an essential part of modern biomedical research. The analysis of DNA microarray images allows the identification of gene expressions in order to draw biologically meaningful conclusions for applications that ranges from the genetic profiling to the diagnosis of oncology diseases. Unfortunately, DNA microarray technology has a high variation of data quality. Therefore, in order to obtain reliable results, complex and extensive image analysis algorithms should be applied before actual DNA microarray information can be used for biomedical purpose. In this paper, we present a novel hardware acceleration architecture specifically designed to process and measure DNA microarray images. The proposed architecture uses several units working in a single instruction-multiple data fashion managed by a microprocessor core. An FPGA-based prototypal implementation of the developed architecture is presented. 
Experimental results on several realistic DNA microarray images show a reduction of the computation time of one order of magnitude if compared with previously developed software-based approach.",2007,0, 3396,Open Architecture Software Design for Online Spindle Health Monitoring,"This paper presents an open systems architecture-based software design for an online spindle health monitoring system. The software is implemented using the graphical programming language of LabVIEW, and presents the spindle health status in two types of windows: simplified spindle condition display and warning window for standard machine operators (operator window) and advanced diagnosis window for machine experts (expert window). The capability of effective and efficient spindle defect detection and localization has been realized using the analytic wavelet-based envelope spectrum algorithm. The software provides a user-friendly human-machine interface and contributes directly to the development of a new generation of smart machine tools.",2007,0, 3397,A digital signal processing approach for modulation quality assessment in WiMAX systems,"Modulation quality in WiMAX systems that rely on OFDM modulation is dealt with. A digital signal processing approach is, in particular, proposed, aimed at assessing the performance of transmitters in terms of standard parameters. At present, WiMAX technology deployment is at the very beginning. Also, measurement instrumentation mandated to assist WiMAX apparatuses production and installation is not completely mature, and entitled to be significantly improved in terms of functionality and performance. In particular, no dedicated instrument is already present on the market, but the available solutions are arranged complementing existing hardware, such as real-time spectrum analyzers and vector signal analyzers, with a proper analysis software. Differently from the aforementioned solutions, the proposed approach is independent of the specific hardware platform mandated to the demodulation of the incoming WiMAX signal. It can operate, in fact, with any hardware capable of achieving and delivering the baseband I and Q components of the signal under analysis. Moreover, being open-source it can be improved or upgraded according to future needs.",2007,0, 3398,Toward Globally Optimal Event Monitoring & Aggregation For Large-scale Overlay Networks,"Overlay networks have emerged as a powerful and flexible platform for developing new disruptive network applications. The performance and reliability of overlay applications depend on the capability of overlay networks to dynamically adapt to various factors such as link/node failures, overlay link quality, and overlay node characteristics. In order to achieve this, the overlay applications require scalable and open overlay monitoring services to monitor, aggregate globally distributed events and take appropriate control actions. In this paper, we propose the techniques and algorithms to create an optimal event monitoring and aggregation infrastructure (called MOON) that minimizes the monitoring latency (i.e., event retrival/detection time) and event aggregation cost (i.e., intrusiveness) considering the large-scale geographical and network distribution of overlay nodes. The proposed monitoring infrastructure, MOON, clusters and organizes overlay nodes efficiently such that overlay applications can globally monitor and query correlated events in an overlay network with minimum latency and monitoring cost. 
Our simulations and experimental studies show the evaluation of MOON under many various topological structures, network sizes, and event aggregation volumes.",2007,0, 3399,Reducing Complexity of Software Deployment with Delta Configuration,"Deploying a modern software service usually involves installing several software components, and configuring these components properly to realize the complex interdependencies between them. This process, which accounts for a significant portion of information technology (IT) cost, is complex and error-prone. In this paper, we propose delta configuration - an approach that reduces the cost of software deployment by eliminating a large number of choices on parameter values that administrators have to make during deployment. In delta configuration, the complex software stack of a distributed service is first installed and tested in a test environment. The resulting software images are then captured and used for deployment in production environments. To deploy a software service, we only need to copy these pre-configured software images into a production environment and modify them to account for the difference between the test environment and a production environment. We have implemented a prototype system that achieves software deployment using delta configuration of the configuration state captured inside virtual machines. We perform a case study to demonstrate that our scheme leads to substantial reduction in complexity for the customer, over the traditional software deployment method.",2007,0, 3400,Accelerated 65nm Yield Ramp through Optimization of Inspection on Process-Design Sensitive Test Chips,"This paper describes an integrated methodology that combines short-flow test chips useful for exploring process-design systematic as well as random failure modes and an advanced inspection tool platform to characterize and monitor key Defects-of-Interest for accelerated defect-based yield learning at the 65 nm technology node. Utilization of a unique fast electrical testing scheme, rapid analysis software along with optimized inspection facilitated shorter learning cycles for accelerated process development. Knowledge derived from the CVreg- based inspection setup in a leading 300 mm fab was successfully transferred to manufacturing to facilitate inspection optimization for key Defects-of-Interest on product wafers.",2007,0, 3401,Definition of Metric Dependencies for Monitoring the Impact of Quality of Services on Quality of Processes,"Service providers have to monitor the quality of offered services and to ensure the compliance of service levels provider and requester agreed on. Thereby, a service provider should notify a service requester about violations of service level agreements (SLAs). Furthermore, the provider should point to impacts on affected processes in which services are invoked. For that purpose, a model is needed to define dependencies between quality of processes and quality of invoked services. In order to measure quality of services and to estimate impacts on the quality of processes, we focus on measurable metrics related to functional elements of processes, services as well as components implementing services. Based on functional dependencies between processes and services of a service-oriented architecture (SOA), we define metric dependencies for monitoring the impact of quality of invoked services on quality of affected processes. 
In this paper we discuss how to derive metric dependency definitions from functional dependencies by applying dependency patterns, and how to map metric and metric dependency definitions to an appropriate monitoring architecture.",2007,0, 3402,Probabilistic Field Coverage using a Hybrid Network of Static and Mobile Sensors,"Providing field coverage is a key issue in many sensor network applications. For a field with unevenly distributed static sensors, a quality coverage with acceptable network lifetime is often difficult to achieve. We propose a hybrid network that consists of both static and mobile sensors, and we suggest that it can be a cost-effective solution for field coverage. The main challenges of designing such a hybrid network are, first, determining necessary coverage contributions from each type of sensors; and second, scheduling the sensors to achieve the desired coverage contributions, which includes activation scheduling for static sensors and movement scheduling for mobile sensors. In this paper, we offer an analytical study on the above problems, and the results also lead to a practical system design. Specifically, we present an optimal algorithm for calculating the contributions from different types of sensors, which fully exploits the potentials of the mobile sensors and maximizes the network lifetime. We then present a random walk model for the mobile sensors. The model is distributed with very low control overhead. Its parameters can be fine-tuned to match the moving capability of different mobile sensors and the demands from a broad spectrum of applications. A node collaboration scheme is then introduced to further enhance the system performance. We demonstrate through analysis and simulation that, in our hybrid design, a small set of mobile sensors can effectively address the uneven distribution of the static sensors and significantly improve the coverage quality.",2007,0, 3403,Performance metrics and configuration strategies for group network communication,"There is an increasing number of group-based multimedia applications over the Internet, for example, voice conference or multi-player games. For these applications, it is often necessary to select a strategy to distribute the multimedia streams or mixing the multimedia stream data so as to provide better quality of service (QoS) guarantees. However, there is no appropriate metrics to evaluate the QoS of a group multimedia session, despite abundant literature on how to evaluate the QoS for two-party communication (e.g. MOS, E-Model). In this paper, we propose a new measure which is called the group mean opinion score (GMOS). To leverage on existing work, our definition of GMOS is based on two-party MOS, hence, it can be estimated via measurement of network parameters and fitting these data into the E-Model. We conduct large scale experiments using the latest SKYPE conference software. We first calibrate the GMOS based on the subjective scores of our experiments, then for individual conference sessions, we check whether our approach can pick a server configuration strategy to achieve the best GMOS. 
The study shows our proposed methodology is very promising and the potential of applying to other group-based applications.",2007,0, 3404,Image Quality and Performance Based on Henry Classification and Finger Location,"This paper presents an analysis of the quality of the image, minutiae count, and overall performance of fingerprint images based on the Henry system of fingerprint classification and the finger's relative location on the presenting hand. To do this, 50 users submitted 3 images from four fingers (index, middle, ring, and little). The National Institute of Standards and Technology (NIST) Fingerprint Image Software, release 2 (NFIS2) was used to analyze image quality and minutiae count. Neurotechnologija Ltd.'s VeriFinger was used to produce receiver operating characteristics (ROCs) in order to analyze performance. Our results show differences not only in image quality and minutiae count, but also matching performance based on Henry system classification and finger location.",2007,0, 3405,Novel Distortion Estimation Technique for Hardware-Based JPEG2000 Encoder System,"Optimal rate-control is an important feature of JPEG2000 which allows simple truncation of compressed bit stream to achieve best image quality at a given target bit rate. Accurate distortion estimation (DE) with respect to the allowed bit stream truncation points, is essential for the rate-control performance. In this paper, we address the issues involved in accurate DE for the hardware oriented implementation of JPEG2000 encoding systems. We propose a novel hardware efficient DE technique. Rate control based on the proposed technique results in a average peak signal-to-noise ratio degradation of only 0.02 dB with respect to the optimal DE technique used in the software implementations of JPEG2000. This is the best performance reported in comparison to existing techniques. The proposed technique requires only an additional 4096 memory bits per block coder which is 80% less than the memory requirements of optimal technique.",2007,0, 3406,Distance Relay With Out-of-Step Blocking Function Using Wavelet Transform,Out-of-step blocking function in distance relays is required to distinguish between a power swing and a fault. Speedy and reliable detection of symmetrical faults during power swings presents a challenge. This paper introduces wavelet transform to reliably and quickly detect power swings as well as detect any fault during a power swing. The total number of dyadic wavelet levels of voltage/current waveforms and the choice of particular levels for such detection are carefully studied. A logic block based on the wavelet transform is developed. The output of this block is combined with the output of the conventional digital distance relay to achieve desired performance during power swings. This integrated relay is extensively tested on a simulated system using PSCAD/EMTDC® software.,2007,0, 3407,Prony-Based Optimal Bayes Fault Classification of Overcurrent Protection,"The development of deregulation and demand for high-quality electrical energy has lead to a new requirement in different fields of power systems. In the protection field, this means that high sensitivity and fast operation during the fault are required while maltripping of relay protection is not acceptable. One case that may lead to a maltrip of the high-sensitive overcurrent relay is the starting current of the induction motor or inrush current of the transformer. 
This transient current has the potential to affect the correct operation of protection relays close to the component being switched. In the case of switching events, such transients must not lead to overcurrent relay operation; therefore, a reliable and secure relay response becomes a critical matter. Meanwhile, proper techniques must be used to prevent maltripping of such relays, due to transient currents in the network. In this paper, the optimal Bayes classifier is utilized to develop a method for discriminating the fault from nonfault events. The proposed method has been designed based on extracting the modal parameters of the current waveform using the Prony method. By feeding the fundamental frequency damping and ratio of the 2nd harmonic amplitude over the fundamental harmonic amplitude to the classifier, the fault case is discriminated from the switching case. The suitable performance of this algorithm is demonstrated by simulation of different faults and switching conditions on a power system using PSCAD/EMTDC software.",2007,0, 3408,Classification of Electrical Disturbances in Real Time Using Neural Networks,"Power-quality (PQ) monitoring is an essential service that many utilities perform for their industrial and larger commercial customers. Detecting and classifying the different electrical disturbances which can cause PQ problems is a difficult task that requires a high level of engineering knowledge. This paper presents a novel system based on neural networks for the classification of electrical disturbances in real time. In addition, an electrical pattern generator has been developed in order to generate common disturbances which can be found in the electrical grid. The classifier obtained excellent results (for both test patterns and field tests) thanks in part to the use of this generator as a training tool for the neural networks. The neural system is integrated on a software tool for a PC with hardware connected for signal acquisition. The tool makes it possible to monitor the acquired signal and the disturbances detected by the system.",2007,0, 3409,Expert System for Power Quality Disturbance Classifier,"Identification and classification of voltage and current disturbances in power systems are important tasks in the monitoring and protection of power system. Most power quality disturbances are non-stationary and transitory and the detection and classification have proved to be very demanding. The concept of discrete wavelet transform for feature extraction of power disturbance signal combined with artificial neural network and fuzzy logic incorporated as a powerful tool for detecting and classifying power quality problems. This paper employes a different type of univariate randomly optimized neural network combined with discrete wavelet transform and fuzzy logic to have a better power quality disturbance classification accuracy. The disturbances of interest include sag, swell, transient, fluctuation, and interruption. The system is modeled using VHSIC hardware description language (VHDL), a hardware description language, followed by extensive testing and simulation to verify the functionality of the system that allows efficient hardware implementation of the same. 
The proposed method achieves 98.19% classification accuracy when the system is applied to software-generated signals and utility-sampled disturbance events.",2007,0, 3410,An Efficient Network Anomaly Detection Scheme Based on TCM-KNN Algorithm and Data Reduction Mechanism,"Network anomaly detection plays a vital role in securing networks and infrastructures. Current research efforts concentrate on how to effectively reduce the high false alarm rate and usually ignore the fact that poor-quality data for the modeling of normal patterns, as well as the high computational cost, prevent current anomaly detection methods from performing as well as we expect. Based on these observations, we first propose a novel data mining scheme for network anomaly detection in this paper. Moreover, we adopt data reduction mechanisms (including genetic algorithm (GA) based instance selection and filter based feature selection methods) to boost the detection performance, while reducing the computational cost of TCM-KNN. Experimental results on the well-known KDD Cup 1999 dataset demonstrate that the proposed method can effectively detect anomalies with high detection rates, low false positives, and higher confidence than the state-of-the-art anomaly detection methods. Furthermore, the data reduction mechanisms would greatly improve the performance of TCM-KNN and make it a good candidate for anomaly detection in practice.",2007,0, 3411,Quality-Based Fusion of Multiple Video Sensors for Video Surveillance,"In this correspondence, we address the problem of fusing data for object tracking for video surveillance. The fusion process is dynamically regulated to take into account the performance of the sensors in detecting and tracking the targets. This is performed through a function that adjusts the measurement error covariance associated with the position information of each target according to the quality of its segmentation. In this manner, localization errors due to incorrect segmentation of the blobs are reduced, thus improving tracking accuracy. Experimental results on video sequences of outdoor environments show the effectiveness of the proposed approach.",2007,0, 3412,A Middleware Architecture for Replica Voting on Fuzzy Data in Dependable Real-time Systems,"Majority voting among replicated data collection devices enhances the trustworthiness of data flowing from a hostile external environment. It allows correct data fusion and dissemination by the end-users, in the presence of content corruptions and/or timing failures that may possibly occur during data collection. In addition, a device may operate on fuzzy inputs, thereby generating data that occasionally deviate from the reference datum in the physical world. In this paper, we provide a QoS-oriented approach to manage the data flow through various system elements. The application-level QoS parameters we consider are timeliness and accuracy of data. The underlying protocol-level parameters that influence data delivery performance are the data sizes, network bandwidth, device asynchrony, and data fuzziness. A replica voting protocol takes into account the interplay between these parameters as the faulty behavior of malicious devices unfolds in various forms during data collection. Our QoS-oriented approach casts the well-known fault-tolerance technique, namely 2-phase voting, with control mechanisms that adapt the data delivery to meet the end-to-end constraints - such as latency, data integrity, and resource cost.
The paper describes a middleware architecture to realize our QoS-oriented approach to the management of replicated data flows.",2007,0, 3413,Differentiation of Wireless and Congestion Losses in TCP,"TCP is the most commonly used data transfer protocol. It assumes every packet loss to be congestion loss and reduces the sending rate. This will decrease the sender's throughput when there is an appreciable rate of packet loss due to link error and not due to congestion. This issue is significant for wireless links. We present an extension of TCP-Casablanca, which improves TCP performance over wireless links. A new discriminator is proposed that not only differentiates congestion and wireless losses, but also identifies the congestion level in the network, i.e., whether the network is lightly congested or heavily congested and throttles the sender's rate according to the congestion level in the network.",2007,0, 3414,Use of a Genetic Algorithm to Identify Source Code Metrics Which Improves Cognitive Complexity Predictive Models,"In empirical software engineering predictive models can be used to classify components as overly complex. Such modules could lead to faults, and as such, may be in need of mitigating actions such as refactoring or more exhaustive testing. Source code metrics can be used as input features for a classifier, however, there exist a large number of measures that capture different aspects of coupling, cohesion, inheritance, complexity and size. In a large dimensional feature space some of the metrics may be irrelevant or redundant. Feature selection is the process of identifying a subset of the attributes that improves a classifier's discriminatory performance. This paper presents initial results of a genetic algorithm as a feature subset selection method that enhances a classifier's ability to discover cognitively complex classes that degrade program understanding.",2007,0, 3415,Robust Metric Reconstruction from Challenging Video Sequences,"Although camera self-calibration and metric reconstruction have been extensively studied during the past decades, automatic metric reconstruction from long video sequences with varying focal length is still very challenging. Several critical issues in practical implementations are not adequately addressed. For example, how to select the initial frames for initializing the projective reconstruction? What criteria should be used? How to handle the large zooming problem? How to choose an appropriate moment for upgrading the projective reconstruction to a metric one? This paper gives a careful investigation of all these issues. Practical and effective approaches are proposed. In particular, we show that existing image-based distance is not an adequate measurement for selecting the initial frames. We propose a novel measurement to take into account the zoom degree, the self-calibration quality, as well as image-based distance. We then introduce a new strategy to decide when to upgrade the projective reconstruction to a metric one. Finally, to alleviate the heavy computational cost in the bundle adjustment, a local on-demand approach is proposed. Our method is also extensively compared with the state-of-the-art commercial software to evidence its robustness and stability.",2007,0, 3416,Real-Time Model-Based Fault Detection and Diagnosis for Alternators and Induction Motors,"This paper describes a real-time model-based fault detection and diagnosis software. 
The electric machines diagnosis system (EMDS) covers field winding shorted-turn faults in alternators and stator winding shorted-turn faults in induction motors. The EMDS has a modular architecture. The modules include: acquisition and data treatment; well-known parameter estimation algorithms, such as recursive least squares (RLS) and the extended Kalman filter (EKF); dynamic models for fault simulation; and fault detection and identification tools, such as M.L.P. and S.O.M. neural networks and the fuzzy C-means (FCM) technique. Working together, the modules detect possible faulty conditions of various machines operating in parallel through routing. Fast, safe, and efficient data manipulation requires high database management system (DBMS) performance. In our experiment, the EMDS real-time operation demonstrated that the proposed system could efficiently and effectively detect abnormal conditions, resulting in lower-cost maintenance for the company.",2007,0, 3417,Applying Novel Resampling Strategies To Software Defect Prediction,"Due to the tremendous complexity and sophistication of software, improving software reliability is an enormously difficult task. We study the software defect prediction problem, which focuses on predicting which modules will experience a failure during operation. Numerous studies have applied machine learning to software defect prediction; however, skewness in defect-prediction datasets usually undermines the learning algorithms. The resulting classifiers will often never predict the faulty minority class. This problem is well known in machine learning and is often referred to as learning from unbalanced datasets. We examine stratification, a widely used technique for learning unbalanced data that has received little attention in software defect prediction. Our experiments are focused on the SMOTE technique, which is a method of over-sampling minority-class examples. Our goal is to determine if SMOTE can improve recognition of defect-prone modules, and at what cost. Our experiments demonstrate that after SMOTE resampling, we have a more balanced classification. We found an improvement of at least 23% in the average geometric mean classification accuracy on four benchmark datasets.",2007,0, 3418,Iterative Cross Section Sequence Graph for Handwritten Character Segmentation,"The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
In these soft real-time systems, the quality of service (QoS) is usually defined as the fraction of requests that meet the deadlines. When this QoS is measured directly, regardless of whether the request missed the deadline by an epsilon amount of time or by a large difference, the result is always the same. For this reason, merely counting the number of missed requests in a period prevents observation of the real state of the system. Our contributions are theoretical propositions for controlling the QoS, not by measuring the QoS directly, but based on the probability distribution of the tardiness in the completion time of the requests. We call this new QoS metric the tardiness quantile metric (TQM). The proposed method provides fine-grained control over the QoS so that we can make a closer examination of the relation between QoS and energy efficiency. We validate the theoretical results with experiments in a multi-tiered e-commerce web cluster implemented using only open-source software solutions.",2007,0, 3420,WCET-Directed Dynamic Scratchpad Memory Allocation of Data,"Many embedded systems feature processors coupled with a small and fast scratchpad memory. In contrast to caches, allocation of data to scratchpad memory must be handled by software. The major gain is to enhance the predictability of memory access latencies. A compile-time dynamic allocation approach enables eviction and placement of data to the scratchpad memory at runtime. Previous dynamic scratchpad memory allocation approaches aimed to reduce average-case program execution time or the energy consumption due to memory accesses. For real-time systems, worst-case execution time is the main metric to optimize. In this paper, we propose a WCET-directed algorithm to dynamically allocate static data and stack data of a program to scratchpad memory. The granularity of placement of memory transfers (e.g. on function, basic block boundaries) is discussed from the perspective of its computational complexity and the quality of allocation.",2007,0, 3421,Toward the Use of Automated Static Analysis Alerts for Early Identification of Vulnerability- and Attack-prone Components,"Extensive research has shown that software metrics can be used to identify fault- and failure-prone components. These metrics can also give early indications of overall software quality. We seek to parallel the identification and prediction of fault- and failure-prone components in the reliability context with vulnerability- and attack-prone components in the security context. Our research will correlate the quantity and severity of alerts generated by source code static analyzers to vulnerabilities discovered by manual analyses and testing. A strong correlation may indicate that automated static analyzers (ASA), a potentially early technique for vulnerability identification in the development phase, can identify high risk areas in the software system. Based on the alerts, we may be able to predict the presence of more complex and abstract vulnerabilities involved with the design and operation of the software system. Early knowledge of vulnerabilities can allow software engineers to make informed risk management decisions and prioritize redesign, inspection, and testing efforts.
This paper presents our research objective and methodology.",2007,0, 3422,Efficient Probing Techniques for Fault Diagnosis,"The increase in network usage and the widespread use of networks for increasingly performance-critical applications have created a demand for tools that can monitor network health with minimum management traffic. Adaptive probing holds the potential to provide effective tools for end-to-end monitoring and fault diagnosis over a network. In this paper we present adaptive probing tools that meet the requirements to provide an effective and efficient solution for fault diagnosis, and we propose adaptive probing based algorithms to perform fault localization by adapting the probe set to localize the faults in the network. We compare the performance and efficiency of the proposed algorithms through simulation results.",2007,0, 3423,Evaluating the Combined Effect of Vulnerabilities and Faults on Large Distributed Systems,"On large and complex distributed systems, hardware and software faults, as well as vulnerabilities, exhibit significant dependencies and interrelationships. Being able to assess their actual impact on the overall system dependability is especially important. The goal of this paper is to propose a unifying way of describing a complex hardware and software system, in order to assess the impact of both vulnerabilities and faults by means of the same underlying reasoning mechanism, built on a standard Prolog inference engine. Some preliminary experimental results show that a prototype tool based on these techniques is both feasible and able to achieve encouraging performance levels on several synthetic test cases.",2007,0, 3424,Evaluation of MDA/PSM database model quality in the context of selected non-functional requirements,"Conceptual, logical, and physical database models can be regarded as PIM, PSM1, and PSM2 data models within the MDA architecture, respectively. Many different logical database models can be derived from a given conceptual database model by applying a set of transformation rules. To choose a logical database model for further transformation (to a physical database model at the PSM2 level), some selection criteria based on quality demands, e.g. database efficiency, easy maintainability or portability, should be established. To evaluate the quality of the database models, some metrics should be provided. We present metrics for measuring two selected, conflicting quality characteristics (efficiency and maintainability), then we analyse the correlation between the metrics, and finally we propose how to assess the quality of logical database models.",2007,0, 3425,An Artificial Immune System Approach for Fault Prediction in Object-Oriented Software,"The features of real-time dependable systems are availability, reliability, safety and security. In the near future, real-time systems will be able to adapt themselves according to the specific requirements, and real-time dependability assessment techniques will be able to classify modules as faulty or fault-free. Software fault prediction models help us to develop dependable software and they are commonly applied prior to system testing. In this study, we examine Chidamber-Kemerer (CK) metrics and some method-level metrics for our model, which is based on the artificial immune recognition system (AIRS) algorithm. The dataset is part of the NASA Metrics Data Program, and the class-level metrics are from the PROMISE repository.
Instead of validating individual metrics, our mission is to improve the prediction performance of our model. The experiments indicate that the combination of CK and the lines of code metrics provide the best prediction results for our fault prediction model. The consequence of this study suggests that class-level data should be used rather than method-level data to construct relatively better fault prediction models. Furthermore, this model can constitute a part of real-time dependability assessment technique for the future.",2007,0, 3426,Failure Resilience for Device Drivers,"Studies have shown that device drivers and extensions contain 3-7 times more bugs than other operating system code and thus are more likely to fail. Therefore, we present a failure-resilient operating system design that can recover from dead drivers and other critical components - primarily through monitoring and replacing malfunctioning components on the fly - transparent to applications and without user intervention. This paper focuses on the post-mortem recovery procedure. We explain the working of our defect detection mechanism, the policy-driven recovery procedure, and post-restart reintegration of the components. Furthermore, we discuss the concrete steps taken to recover from network, block device, and character device driver failures. Finally, we evaluate our design using performance measurements, software fault-injection experiments, and an analysis of the reengineering effort.",2007,0, 3427,Using Process-Level Redundancy to Exploit Multiple Cores for Transient Fault Tolerance,"Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point towards multi-threaded multi-core designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper proposes a software-based multi-core alternative for transient fault tolerance using process-level redundancy (PLR). PLR creates a set of redundant processes per application process and systematically compares the processes to guarantee correct execution. Redundancy at the process level allows the operating system to freely schedule the processes across all available hardware resources. PLR's software-centric approach to transient fault tolerance shifts the focus from ensuring correct hardware execution to ensuring correct software execution. As a result, PLR ignores many benign faults that do not propagate to affect program correctness. A real PLR prototype for running single-threaded applications is presented and evaluated for fault coverage and performance. On a 4-way SMP machine, PLR provides improved performance over existing software transient fault tolerance techniques with 16.9% overhead for fault detection on a set of optimized SPEC2000 binaries.",2007,0, 3428,Experimental Risk Assessment and Comparison Using Software Fault Injection,"One important question in component-based software development is how to estimate the risk of using COTS components, as the components may have hidden faults and no source code available. This question is particularly relevant in scenarios where it is necessary to choose the most reliable COTS when several alternative components of equivalent functionality are available. This paper proposes a practical approach to assess the risk of using a given software component (COTS or non-COTS). Although we focus on comparing components, the methodology can be useful to assess the risk in individual modules. 
The proposed approach uses the injection of realistic software faults to assess the impact of possible component failures and uses software complexity metrics to estimate the probability of residual defects in software components. The proposed approach is demonstrated and evaluated in a comparison scenario using two real off-the-shelf components (the RTEMS and RTLinux real-time operating systems) in a realistic satellite data handling application used by the European Space Agency.",2007,0, 3429,On the Quality of Service of Crash-Recovery Failure Detectors,"In this paper, we study and model a crash-recovery target and its failure detector's probabilistic behavior. We extend quality of service (QoS) metrics to measure the recovery detection speed and the proportion of the detected failures of a crash-recovery failure detector. Then the impact of the dependability of the crash-recovery target on the QoS bounds for such a crash-recovery failure detector is analysed by adopting general dependability metrics such as MTTF and MTTR. In addition, we analyse how to estimate the failure detector's parameters to achieve the QoS from a requirement based on Chen's NFD-S algorithm. We also demonstrate how to execute the configuration procedure of this crash-recovery failure detector. The simulations are based on the revised NFD-S algorithm with various MTTF and MTTR. The simulation results show that the dependability of a recoverable monitored target could have a significant impact on the QoS of such a failure detector and match our analysis results.",2007,0, 3430,Performability Models for Multi-Server Systems with High-Variance Repair Durations,"We consider cluster systems with multiple nodes where each server is prone to run tasks at a degraded level of service due to some software or hardware fault. The cluster serves tasks generated by remote clients, which are potentially queued at a dispatcher. We present an analytic queueing model of such systems, represented as an M/MMPP/1 queue, and derive and analyze exact numerical solutions for the mean and tail-probabilities of the queue-length distribution. The analysis shows that the distribution of the repair time is critical for these performability metrics. Additionally, in the case of high-variance repair times, the model reveals so-called blow-up points, at which the performance characteristics change dramatically. Since this blow-up behavior is sensitive to a change in model parameters, it is critical for system designers to be aware of the conditions under which it occurs. Finally, we present simulation results that demonstrate the robustness of this qualitative blow-up behavior towards several model variations.",2007,0, 3431,Using Economics as Basis for Modelling and Evaluating Software Quality,"The economics and cost of software quality have been discussed in software engineering for decades now. There is clearly a relationship and a need to manage cost and quality in combination. Moreover, economics should be the basis of any quality analysis. However, this raises several issues that have not yet been addressed to the extent that managing the economics of software quality is common practice. This paper discusses these issues, possible solutions, and research directions.",2007,0, 3432,Fault-Adaptive Control for Robust Performance Management of Computing Systems,"This paper introduces a fault-adaptive control approach for the robust and reliable performance management of computing systems.
Fault adaptation involves detecting and isolating faults, and then taking appropriate control actions to mitigate the fault effects and maintain control.",2007,0, 3433,Adequate and Precise Evaluation of Quality Models in Software Engineering Studies,"Many statistical techniques have been proposed and introduced to predict the fault-proneness of program modules in software engineering. Choosing the ""best"" candidate among many available models involves performance assessment and detailed comparison. But these comparisons are not simple due to varying performance measures and the related verification and validation cost implications. Therefore, a methodology for precise definition and evaluation of the predictive models is still needed. We believe the procedure we outline here, if followed, has the potential to enhance the statistical validity of future experiments.",2007,0, 3434,Global Sensitivity Analysis of Predictor Models in Software Engineering,"Predictor models are an important tool in software projects for quality and cost control as well as management. There are various models available that can help the software engineer in decision-making. However, such models are often difficult to apply in practice because of the amount of data needed. Sensitivity analysis provides means to rank the input factors w.r.t. their importance and thereby reduce and optimise the measurement effort necessary. This paper presents an example application of global sensitivity analysis to a software reliability model used in practice. It describes the approach and the possibilities offered.",2007,0, 3435,Make the Most of Your Time: How Should the Analyst Work with Automated Traceability Tools?,"Several recent studies employed traditional information retrieval (IR) methods to assist in the mapping of elements of software engineering artifacts to each other. This activity is referred to as candidate link generation because the final say in determining the final mapping belongs to the human analyst. Feedback techniques that utilize information from the analyst (on whether the candidate links are correct or not) have been shown to improve the quality of the mappings. Yet the analyst is making an investment of time in providing the feedback. This leads to the question of whether or not guidance can be provided to the analyst on how to best utilize that time. This paper simulates a number of approaches an analyst might take to evaluating the same candidate link list, and discovers that more structured and organized approaches appear to save the analyst time and effort.",2007,0, 3436,Modeling the Effect of Size on Defect Proneness for Open-Source Software,"Quality is becoming increasingly important with the continuous adoption of open-source software. Previous research has found that there is generally a positive relationship between module size and defect proneness. Therefore, in open-source software development, it is important to monitor module size and understand its impact on defect proneness. However, traditional approaches to quality modeling, which measure specific system snapshots and obtain future defect counts, are not well suited because open-source modules usually evolve and their size changes over time. In this study, we used Cox proportional hazards modeling with recurrent events to study the effect of class size on defect-proneness in the Mozilla product.
We found that the effect of size was significant, and we quantified this effect on defect proneness.",2007,0, 3437,Complexity Measures for Secure Service-Oriented Software Architectures,"As software attacks become widespread, the ability of a software system to resist malicious attacks has become a key concern in software quality engineering. Software attackability is a concept proposed recently in the research literature to measure the extent to which a software system or service could be the target of successful attacks. Like most external attributes, attackability is to some extent disconnected from the internals of software products. To mitigate software attackability, we need to identify and manipulate related internal software attributes. Our goal in this paper is to study software complexity as one such internal attribute. We apply the User System Interaction Effect (USIE) model, a security measurement abstraction paradigm proposed in previous research, to define and validate a sample metric for service complexity. We thereby establish the usefulness of our sample metric through empirical investigation using an open source software system as the target application.",2007,0, 3438,Authoring Tool for Business Performance Monitoring and Control,"To monitor and warrant the performance of the business operations concerned, formal representations of business performance models are needed so business analysts can describe what they expect from business service providers. This paper proposes a metamodel and an authoring tool for describing business performance for business services in the context of SOA. A business performance model explicitly captures the intentions of the consumers of the target business services. The authoring tool simplifies the modeling of the performance models. With our proposed mechanism, computational models for business performance management are generated. The separation of concerns in terms of business performance models and computational models allows for seamless transitions from business level performance models to runtime.",2007,0, 3439,Integrated Management of Company Processes and Standard Processes: A Platform to Prepare and Perform Quality Management Appraisals,"Business processes have been introduced in many companies during recent years. But it was not clear how to measure the quality of these processes. ISO/IEC 15504 and CMMI have filled this gap and provide measurement frameworks to assess the maturity of processes. However, introducing and adapting processes to comply with these standards is difficult and error-prone. Especially the integration of the requirements of the standards into the company processes is hard to accomplish. In this paper we propose an integrated process modeling approach that is able to bridge the gap between business processes and the requirements of the standards. Building on this integrated model we are able to produce reports that systematically uncover and display weaknesses in the process.",2007,0, 3440,Evaluating the Post-Delivery Fault Reporting and Correction Process in Closed-Source and Open-Source Software,"Post-delivery fault reporting and correction are important activities in the software maintenance process. It is worthwhile to study these activities in order to understand the difference between open-source and closed-source software products from the maintenance perspective.
This paper proposes three metrics to evaluate the post-delivery fault reporting and correction process: the average fault hidden time, the average fault pending time, and the average fault correction time. An empirical study is further performed to compare the fault correction processes of NASA Ames (closed-source) projects and three open-source projects: Apache Tomcat, Apache Ant, and Gnome Panel.",2007,0, 3441,Refactoring--Does It Improve Software Quality?,"Software systems undergo modifications, improvements and enhancements to cope with evolving requirements. This maintenance can cause their quality to decrease. Various metrics can be used to evaluate the way the quality is affected. Refactoring is one of the most important and commonly used techniques of transforming a piece of software in order to improve its quality. However, although it would be expected that the increase in quality achieved via refactoring is reflected in the various metrics, measurements on real life systems indicate the opposite. We analyzed source code version control system logs of popular open source software systems to detect changes marked as refactorings and examine how the software metrics are affected by this process, in order to evaluate whether refactoring is effectively used as a means to improve software quality within the open source community.",2007,0, 3442,A Rapid Fault Injection Approach for Measuring SEU Sensitivity in Complex Processors,"Processors are very common components in current digital systems, and assessing their reliability is an essential task during the design process. In this paper a new fault injection solution to measure SEU sensitivity in processors is presented. It consists of a hardware-implemented module that performs fault injection through the available JTAG-based On-Chip Debugger (OCD). It is widely applicable to different processors since the JTAG standard is a widespread interface and OCDs are usually available in current processors. The hardware implementation avoids the communication between the target system and the software debugging tool. The method has been applied to a complex processor, the ARM7TDMI. Results illustrate that the approach is a fast, efficient and cost-effective solution.",2007,0, 3443,Online Applications of Wavelet Transforms to Power System Relaying - Part II,"Recent wavelet developments in power engineering applications include detection, localization, classification, identification, storage, compression, and network/system analysis of power quality disturbance signals and, very recently, power system relaying [1,2,3,4]. This paper assesses the online use of wavelet analysis for power system relaying. The paper presents a novel technique for transmission-line fault detection and classification using the DWT, for which an optimal selection of the mother wavelet and data window size based on the minimum entropy criterion has been performed. The paper starts with a review of recent work within the field of wavelet analysis and its applications to power systems engineering. Then, the theoretical background of the technique is presented and the proposed method is described in detail. Finally, the effects of different parameters on the algorithm are examined in order to highlight its performance. Typical fault conditions on a practical 220 kV power system as generated by ATP/EMTP are analyzed with Daubechies wavelets. The performance of the fault classifier is tested using MATLAB software.
The feasibility of using wavelet analysis to detect and classify faults is investigated. Finally, it discusses the results, limitations, and possible improvements. It is found that the use of wavelet transforms together with an effective classification procedure is straightforward, fast, and computationally efficient, and allows for accurate real-time applications in monitoring and classification techniques in power engineering.",2007,0, 3444,Estimation of Fault Location on Distribution Feeders using PQ Monitoring Data,"This paper investigates the challenges in the extraction of fault data (fault current magnitude and type) for fault location application based on actual field data and proposes a procedure for practical implementation. The proposed scheme is implemented as a stand-alone software program, and is tested using actual field data collected at distribution substations; the results are compared with those of the state-of-the-art software package currently used in a utility.",2007,0, 3445,Implementation and Applications of Wide-area monitoring systems,"This paper discusses the design and applications of wide area monitoring and control systems, which can complement classical protection systems and SCADA/EMS applications. System-wide installed phasor measurement units send their measured data to a central computer, where snapshots of the dynamic system behavior are made available online. This new quality of system information opens up a wide range of new applications to assess and actively maintain the system's stability in cases of voltage, angle or frequency instability, thermal overload and oscillations. Recently developed algorithms and their design for these application areas are introduced. With practical examples, the benefits in terms of system security are shown.",2007,0, 3446,"Advancements in Distributed Generation Issues Interconnection, Modeling, and Tariffs","The California Energy Commission is cost-sharing research with the Department of Energy through the National Renewable Energy Laboratory to address distributed energy resources (DER) topics. These efforts include developing interconnection and power management technologies, modeling the impacts of interconnecting DER with an area electric power system, and evaluating possible modifications to rate policies and tariffs. As a result, a DER interconnection device has been developed and tested. A workshop reviewed the status and issues of advanced power electronic devices. Software simulations used validated models of distribution circuits that incorporated DER, and tests and measurements of actual circuits with and without DER systems are being conducted to validate these models. Current policies affecting DER were reviewed and rate-making policies to support deployment of DER through public utility rates and policies were identified. These advancements are expected to support the continued and expanded use of DER systems.",2007,0, 3447,A Web-Based Fuel Management Software System for a Typical Indian Coal based Power Plant,"The fuel management system forms an integral part of the management process in a power plant and hence is one of the most critical areas. It deals with the management of commercial, operational and administrative functions pertaining to estimating fuel requirements, selection of fuel suppliers, fuel quality check, transportation and fuel handling, payment for fuel received, consumption and calculation of fuel efficiency.
The results are then used for cost benefit analysis to suggest further plant improvement. At various levels, management information reports need to be extracted to communicate the required information across various levels of management. The core processes of fuel management involve a huge amount of paper work and manual labour, which makes it tedious, time-consuming and prone to human errors. Moreover, the time taken at each stage as well as the transparency of the relevant information has a direct bearing on the economics and efficient operation of the power plant. Both system performance and information transparency can be enhanced by the introduction of Information Technology in managing this area. This paper reports on the development of Web-based Fuel Management System Software, based on 3-tiered J2EE architecture, which aims at systematic functioning of the Core Business Processes of Fuel Management of a typical coal-fired thermal power plant in the Indian power scenario.",2007,0, 3448,Modeling Distribution Overcurrent Protective Devices for Time-Domain Simulations,"Poor overcurrent protective device coordination can cause prolonged and unnecessary voltage variation problems. The coordination of many protective devices can be a difficult task and unfortunately, device performance and accuracy are not evaluated once the device settings are chosen and deployed. Ideally, an automated system would interrogate system data at the substation and estimate voltage variations and assess protective device performance. To test the accuracy of the automated system, a simulation model is developed to generate test data. The time-domain overcurrent protective device models can be used to estimate the duration of voltage sag during utility fault clearing operation as well. This paper presents the modeling of overcurrent protective device models created in a time-domain power system simulator. The radial distribution simulation, also made in the same time-domain software, allows testing of different overcurrent protection device settings and placement.",2007,0, 3449,Bad-Smell Metrics for Aspect-Oriented Software,"Aspect-oriented programming (AOP) is a new programming paradigm that improves separation of concerns by decomposing the crosscutting concerns in aspect modules. Bad smells are metaphors to describe software patterns that are generally associated with bad design and bad programming of object-oriented programming (OOP). New notions and different ways of thinking for developing aspect-oriented (AO) software inevitably introduce bad smells which are specific bad design and bad programming in AO software called AO bad smells. Software metrics have been used to measure software artifact for a better understanding of its attributes and to assess its quality. Bad-smell metrics should be used as indicators for determining whether a particular fraction of AO code contains bad smells or not. Therefore, this paper proposes definition of metrics corresponding to the characteristic of each AO bad smell as a means to detecting them. The proposed bad-smell metrics are validated and the results show that the proposed bad- smell metrics can preliminarily indicate bad smells hidden in AO software.",2007,0, 3450,Resource Allocation Based On Workflow For Enhancing the Performance of Composite Service,"Under SOA, multiple services can be aggregated to create a new composite service based on some predefined workflow. The QoS of this composite service is determined by the cooperation of all these Web services. 
With workflow pipelining, it is not reasonable to expect to improve the overall service performance by considering only individual services without considering the relationships among them. In this paper, we propose to allocate resources by tracing and predicting workloads dynamically with the pipelining of service requests in the workflow graph. At any moment, there are a number of service requests being handled by different services. Firstly, we predict future workloads for requests as soon as they arrive at any service in the workflow. Secondly, we allocate resources for the predicted workloads to enhance the performance by replicating more services to resources. Our target is to maximize the number of successful requests under the constraints of limited resources. Experiments show that our dynamic resource allocation mechanism is generally more efficient for enhancing the global performance of a composite service than a static resource allocation mechanism.",2007,0, 3451,Web Service decision-making model based on uncertain-but-bounded attributes,"Web services have become one of the most popular technologies for Web applications. But the quality of Web services is not as stable as that of traditional software components because of the uncertainty of the network. Based on the non-probability-set approach, the convex method is used to determine the range of performance affected by uncertain-but-bounded attributes. This method requires only the highest and lowest values of the uncertain attributes and does not need to know their probability distribution. This paper proposes a metric algorithm for the change of Web service quality, and proposes a Web service decision-making algorithm based on multiple attribute decision theory using TOPSIS (technique for order preference by similarity to ideal solution) from operations research.",2007,0, 3452,Continuous SPA: Continuous Assessing and Monitoring Software Process,"In the past ten years many assessment approaches have been proposed to help manage software process quality. However, few of them are configurable and real-time in practice. Hence, it is advantageous to find ways to monitor the current status of software processes and detect the improvement opportunities. In this paper, we introduce a web-based prototype system (Continuous SPA) for continuously assessing and monitoring the software process, and perform a practical study in one process area: project management. Our study results are positive and show that features such as global management, well-defined responsibility and visualization can be integrated into process assessment to help improve software process management.",2007,0, 3453,An Architecture of a Workflow System for Integrated Asset Management in the Smart Oil Field Domain,"Integrated asset management (IAM) is the vision of IT-enabled transformation of oilfield operations where information integration from a variety of tools for reservoir modeling, simulation, and performance prediction will lead to rapid decision making for continuous optimization of oil production. In this paper, we discuss the similarities and differences of IAM applications and typical e-Science applications. We then propose an architecture for a workflow system for IAM based on the four key requirements: support for creation, orchestration and management of workflows including those involving legacy tools, support for audit trails and data quality indicators for data objects, usability and extensibility.
Our architecture builds upon current research in the scientific workflow area and applies many of its learnings to address the requirements of our system. We propose some implementation strategies and technologies and identify some of the key research challenges in realizing our architectural vision.",2007,0, 3454,On the Contributions of an End-to-End AOSD Testbed,"Aspect-Oriented Software Development (AOSD) techniques are gaining increased attention from both academic and industrial organisations. In order to promote a smooth adoption of such techniques it is of paramount importance to perform empirical analysis of AOSD to gather a better understanding of its benefits and limitations. In addition, the effects of aspect-oriented (AO) mechanisms on the entire development process need to be better assessed rather than just analysing each development phase in isolation. As such, this paper outlines our initial effort on the design of a testbed that will provide end-to-end systematic comparison of AOSD techniques with other mainstream modularisation techniques. This will allow the proponents of AO and non-AO techniques to compare their approaches in a consistent manner. The testbed is currently composed of: (i) a benchmark application, (ii) an initial set of metrics suite to assess certain internal and external software attributes, and (iii) a ""repository"" of artifacts derived from AOSD approaches that are assessed based on the application of (i) and (ii). This paper mainly documents a selection of techniques that will be initially applied to the benchmark. We also discuss the expected initial outcomes such a testbed will feed back to the compared techniques. The applications of these techniques are contributions from different research groups working on AOSD.",2007,0, 3455,Automated Functional Conformance Test Generation for Semantic Web Services,"We present an automated approach to generate functional conformance tests for semantic Web services. The semantics of the Web services are defined using the inputs, outputs, preconditions, effects (IOPEs) paradigm. For each Web service, our approach produces testing goals which are refinements of the Web service preconditions using a set of fault models. A novel planner component accepts these testing goals, along with an initial state of the world and the Web service definitions to generate a sequence of Web service invocations as a test case. Another salient feature of our approach is generation of verification sequences to ensure that the changes to the world produced by an effect are implemented correctly. Lastly, a given application incorporating a set of semantic Web services may be accessible through several interfaces such as 1) direct invocation of the Web services, or 2) a graphical user interface (GUI). Our technique allows generation of executable test cases which can be applied through both interfaces. We describe the techniques used in our test generation approach. We also present results which compare two approaches: an existing manual approach without the formal IOPEs information and the IOPEs-based approach reported in this paper. These results indicate that the approach described here leads to substantial savings in effort with comparable results for requirements coverage and fault detection effectiveness.",2007,0, 3456,Probabilistic QoS and soft contracts for transaction based Web services,"Web services orchestrations and choreographies require establishing quality of service (QoS) contracts with the user.
This is achieved by performing QoS composition, based on contracts established between the orchestration and the called Web services. These contracts are typically stated in the form of hard guarantees (e.g., response time always less than 5 msec). In this paper we propose using soft contracts instead. Soft contracts are characterized by means of probability distributions for QoS parameters. We show how to compose such contracts to yield a global (probabilistic) contract for the orchestration. Our approach is implemented by the TOrQuE tool. Experiments on TOrQuE show that overly pessimistic contracts can be avoided and significant room for safe overbooking exists.",2007,0, 3457,Utility-based QoS Brokering in Service Oriented Architectures,"Quality of service (QoS) is an important consideration in dynamic service selection in the context of service oriented architectures. This paper extends previous work on QoS brokering for SOAs by designing, implementing, and experimentally evaluating a service selection QoS broker that maximizes a utility function for service consumers. Utility functions allow stakeholders to ascribe a value to the usefulness of a system as a function of several attributes such as response time, throughput, and availability. This work assumes that consumers of services provide to a QoS broker their utility functions and their cost constraints on the requested services. Service providers register with the broker by providing service demands for each of the resources used by the services provided and cost functions for each of the services. Consumers request services from the QoS broker, which selects a service provider that maximizes the consumer's utility function subject to its cost constraint. The QoS broker uses analytic queuing models to predict the QoS values of the various services that could be selected under varying workload conditions. The broker and services were implemented using a J2EE/Weblogic platform and experiments were conducted to evaluate the broker's efficacy. Results showed that the broker adequately adapts its selection of service providers according to cost constraints.",2007,0, 3458,Software Metric Estimation: An Empirical Study Using An Integrated Data Analysis Approach,"Automatic software effort estimation is important for quality management in the software development industry, but it still remains a challenging issue. In this paper we present an empirical study on the software effort estimation problem using a benchmark dataset. A number of machine learning techniques are employed to construct an integrated data analysis approach that extracts useful information from visualisation, feature selection, model selection and validation.",2007,0, 3459,Yarn hairiness determination the algorithms of computer measurement methods,The article describes the edge detection algorithm used in a computer application for the assessment of yarn quality. The proposed algorithm and the measurement method allow the actual length and number of the protruding fibres to be determined. That solution introduces a new quality to the measurement of yarn hairiness.,2007,0, 3460,Reusable Component Analysis for Component-Based Embedded Real-Time Systems,"Component-Based Software Engineering (CBSE) promises an improved ability to reuse software which would potentially decrease the development time while also improving the quality of the system, since the components are (re-)used by many.
However, CBSE has not been as successful in the embedded systems domain as in the desktop domain, partly because requirements on embedded systems are stricter (e.g. requirements on safety, real-time behavior and minimizing hardware resources). Moreover, these requirements differ between industrial domains. Paradoxically, components should be context-unaware to be reusable, while at the same time they should be context-sensitive in order to be predictable and resource efficient. This seems to be a fundamental problem to overcome before the CBSE paradigm will be successful also in the embedded systems domain. Another problem is that some of the stricter requirements for embedded systems require certain analyses to be made, which may be very complicated and time-consuming for the system developer. This paper describes how one particular kind of analysis, of worst-case execution time, would fit into the CBSE development processes so that the component developer performs some analyses and presents the results in a form that is easily used for component and system verification during system development. This process model is not restricted to worst-case execution time analysis, but we believe other types of analyses could be performed in a similar way.",2007,0, 3461,A Formal Model for Quality of Service Measurement in e-Government,"Quality is emerging as a promising approach to promote the development of services in e-Government. A proper quality of service is mandatory in order to satisfy citizens' and firms' needs and to promote acceptance of the use of ICT in everyday life. This paper describes our ongoing research on QoS run-time measurement and preliminary ideas on the application of run-time monitoring to guarantee the assurance of service applications. For this purpose, we also define and describe a formal model based on a set of quality parameters for e-Government services. These parameters can be useful both as a basis for understanding and assessing competing services, and as a way to determine what improvements are needed to assure citizen and firm satisfaction.",2007,0, 3462,"Air Quality Derivation utilizing Landsat TM image over Penang, Malaysia","This paper outlines recent developments in optical remote sensing of Landsat TM data for air quality monitoring for atmospheric particulate matter having a diameter of less than 10 micrometers (PM10). The objective of this study is to evaluate the performance of the developed algorithm and the suitability of remote sensing data for PM10 mapping. We used a DustTrak Aerosol Monitor 8520 to collect in situ data. The PM10 data were collected simultaneously during the Landsat satellite overpass of the study area. An algorithm has been developed based on the optical characteristics of aerosols in the atmosphere to estimate PM10 over Penang Island, Malaysia. The digital numbers corresponding to the ground-truth locations were determined for each band and then converted to radiance and reflectance values. The atmospheric reflectance values were extracted from the satellite-observed reflectance values by subtracting the amount given by the surface reflectance. The surface reflectance values were retrieved using ATCOR2 in the PCI Geomatica 9.1 image processing software. The atmospheric reflectance values were later used for PM10 mapping using the calibrated algorithm. Results from this research have indicated that PM10 data have a positive correlation with atmospheric reflectance in two visible bands (Red and Blue band).
Finally, a map of pollution concentration was generated from the satellite image using the proposed algorithm to illustrate the spatial distribution pattern of air pollution for the study area. The proposed algorithm produced a high correlation coefficient (R) and low root-mean-square (RMS) values. The concentrations of PM10 are high in the industrial zones and urban areas of Penang, Malaysia.",2007,0, 3463,Predictive Early Object Shedding in Media Processing Workflows,"Media-rich ubiquitous distributed media processing workflow systems continuously sense users' needs, status, and the context, filter and fuse a multitude of real-time media data, and react by adapting the environment to the user. One challenge facing these systems is that they need to process real-time data arriving continuously from the sensors, and data rates and qualities may vary dramatically. Thus, reducing the amount of unqualified data objects that need to be processed within the underlying media processing workflows can enhance the performance significantly. In this paper, we first focus on the prediction of the output qualities at the actuator, especially in the presence of fusion operators in the workflow. We then present quality-aware early object elimination schemes to enable informed resource savings in continuous real-time media processing workflow middleware.",2007,0, 3464,VLSI Oriented Fast Multiple Reference Frame Motion Estimation Algorithm for H.264/AVC,"In the H.264/AVC standard, motion estimation can be processed on multiple reference frames (MRF) to improve the video coding performance. For a VLSI real-time encoder, the heavy computation of fractional motion estimation (FME) means that integer motion estimation (IME) and FME must be scheduled in two macroblock (MB) pipeline stages, which makes many fast MRF algorithms inefficient for computation reduction. In this paper, two algorithms are provided to reduce the computation of FME and IME. First, by analyzing the block's Hadamard transform coefficients, the all-zero case after quantization can be accurately detected. The FME processing in the remaining frames for a block detected as all-zero can be eliminated. Second, because a fast-moving object blurs its edges in the image, the effect of MRF on aliasing is weakened. The first reference frame is enough for fast-motion MBs, and MRF is processed only on slow-motion MBs with a small search range. The computation of IME is also highly reduced with this algorithm. Experimental results show that 61.4%-76.7% of the computation can be saved with similar coding quality to the reference software. Moreover, the provided fast algorithms can be combined with fast block matching algorithms to further improve the performance.",2007,0, 3465,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not supported in most SOA-based applications yet. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on its some aspect. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect the services that fail to satisfy performance requirements.
This paper also gives a reference service model and a reference architecture of a fault-tolerance control center of the Enterprise Services Bus for SOA-based applications.",2007,0, 3466,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not supported in most SOA-based applications yet. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on some aspects of it. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect the services that fail to satisfy performance requirements. This paper also gives a reference service model and a reference architecture of a fault-tolerance control center of the Enterprise Services Bus for SOA-based applications.",2007,0,3465 3467,Deployment of Accountability Monitoring Agents in Service-Oriented Architectures,"Service-oriented architecture (SOA) provides a flexible paradigm to compose dynamic service processes using individual services. However, service processes can be vastly complex, involving many service partners, thereby giving rise to difficulties in terms of pinpointing the service(s) responsible for problematic outcomes. In this research, we study efficient and effective mechanisms to deploy agents to monitor and detect undesirable services in a service process. We model the agent deployment problem as the classic weighted set covering (WSC) problem and present agent selection solutions at different stages of service process deployment. We propose the MASS (merit based agent and service selection) algorithm that considers agent cost during QoS-based service composition by using a meritagent_cost heuristic metric. We also propose the IGA (incremental greedy algorithm) to achieve fast agent selection when a service process is reconfigured after service failures. The performance study shows that our proposed algorithms are effective in saving agent cost and efficient in execution time.",2007,0, 3468,Challenges and Opportunities in Information Quality,"Summary form only given. Each year companies are spending hundreds of thousands of dollars on data cleansing and other activities to improve the quality of information they use to conduct business. The hidden cost of bad data - lost opportunities, low productivity, waste, and myriad other consequences - is believed to be much higher than these direct costs. One study estimates this combined cost due to bad data to be over US$30 billion in the year 2006 alone. As business operations rely more and more on computerized systems, this cost is bound to increase at an alarming rate. Information quality (or data quality) has been an integral part of various enterprise systems such as master data management, customer data integration, and ETL (extraction, transform, and load). We are witnessing trends of renewed awareness and efforts, both in research and practice, to address information quality collectively as an independent value in enterprise computing. International organizations such as EPC Global and the International Standardization Organization (ISO) have launched working groups to study and possibly introduce standards that can be used to define, assess, and enhance information quality throughout the supply chain. Issues in information quality range over multiple disciplines including software engineering, databases, statistics, organizational operations, and accounting.
The scope and goal of information quality management would depend on the organization's objectives and business models. Assessing the impact of data quality is a complex task involving key business performance indexes such as sales, profitability, and customer satisfaction. Methods of assuring data quality must address operational processes as well as supporting technologies. This panel, with input from experts from both academia and industry, explores the challenges and opportunities in information quality in the dynamic environment of today's enterprise computing.",2007,0, 3469,The Weight-Watcher Service and its Lightweight Implementation,"This paper presents the weight-watcher service. This service aims at providing resource consumption measurements and estimations for software executing on resource-constrained devices. By using the weight-watcher, software components can continuously adapt and optimize their quality of service with respect to resource availability. The interface of the service is composed of a profiler and a predictor. We present an implementation that is lightweight in terms of CPU and memory. We also performed various experiments that convey (a) the tradeoff between the memory consumption of the service and the accuracy of the prediction, as well as (b) a maximum overhead of 10% on the execution speed of the VM for the profiler to provide accurate measurements.",2007,0, 3470,Research of Real-time Forecast Technology in Fault of Missile Guidance System Based on Grey Model and Data Fusion,"To address fault forecasting in missile guidance systems, a new fault forecast method is presented, in which grey system theory and multiple-sensor data fusion are used. Grey Model (GM) forecasting is invalid when the data sequence is a zero-mean random process; to overcome this drawback, an improved GM method is presented. The simulation results show that the fault forecast method has better performance in missile guidance systems.",2007,0, 3471,Voice Conversion Adopting SOLAFS,"An improved method of voice conversion is proposed to make the speech of a source speaker sound as if uttered by a target speaker. Speaker individuality transformation is achieved by altering the spectral envelope and prosodic information. The main advantage of this method is that it first applies synchronized overlap-add fixed synthesis (SOLAFS) to modify the source speaker's speaking rate to match that of the target speaker, which enhances the performance of the whole conversion system compared with conventional systems without such a procedure. In addition, a precise estimation of the target excitation is proposed, using only the information of the matched source's excitation and the average pitch period of the target speaker. The proposed scheme is evaluated using both subjective and objective measures. The experimental results show that the system is capable of effectively transforming speaker identity whilst the converted speech maintains high quality.",2007,0, 3472,Recognizing Humans Based on Gait Moment Image,"This paper utilizes the periodicity of swing distances to estimate the gait period. It shows good adaptability to low-quality silhouette images. The gait moment image (GMI) is implemented based on the estimated gait period. GMI is the gait probability image at each key moment in the gait period. It reduces the noise of the silhouettes extracted from low-quality videos by using the gait probability distribution at each key moment. The moment deviation image (MDI) is generated by using silhouette images and GMIs.
As a good complement to the gait energy image (GEI), MDI provides more motion features than the basic GEI. MDI is utilized together with GEI to represent a subject. The nearest neighbor classifier is adopted to recognize subjects. The proposed algorithm is evaluated on the USF gait database, and the performance is compared with the baseline algorithm and two other algorithms. Experimental results show that this algorithm achieves a higher total recognition rate than the other algorithms.",2007,0, 3473,A New Inner Congestion Control Mechanism in Terabit Routers,"The direct interconnection network (DIN) has gained more and more attention in the design of switching fabrics for terabit routers. Various congestion control mechanisms have been proposed for the DIN when it is employed in parallel computer systems. However, they all have their own shortcomings and are not suitable for fabrics in terabit routers. Based on the requirements of terabit-class routers, we propose a new congestion control mechanism in this paper. It uses a new detection criterion, i.e. the length of the source buffer, which is more accurate. In addition, the presence of real-time traffic is considered in particular. No throttling is imposed on such traffic, in order to provide QoS. The input traffic goes into different source queues according to its traffic class. The input queue for real-time traffic will no longer be stopped, in order to provide QoS. Finally, simulations are carried out with OPNET software. The results show that our congestion control mechanism eliminates performance degradation for loads beyond saturation, keeping adequate levels of throughput at high loads.",2007,0, 3474,A Resource Allocation Model with Cost-Performance Ratio in Data Grid,"Resource allocation for data transfer is a fundamental issue for achieving high performance in data grid environments. In this paper, we survey the existing research on allocation problems in grid environments and propose a resource allocation model based on the cost-performance ratio for commerce environments. This ratio takes both the price and the quality of a data resource into consideration at the same time. According to the optimization objective, we define two types of ratios: the price-aware ratio and the quality-aware ratio. They are suitable for environments where allocations put more emphasis on the quality or the price of resources, respectively. Based on the maximal cost-efficiency, we formulate the allocation optimization problem and present two theorems and a deduction about its solutions. The corresponding proofs are also given in the paper. Finally, we present two algorithms to allocate and re-allocate data resources. The analysis shows that the algorithms can satisfy our requirements.",2007,0, 3475,A Traceability Link Model for the Unified Process,"Traceability links are widely accepted as an efficient means to support evolutionary software development. However, their usage in analysis and design is effort-consuming and error-prone due to a lack of methods and tools for their creation, update and verification. In this paper we analyse and classify Unified Process artefacts to establish a traceability link model for this process. This model defines all required links between the artefacts. Furthermore, it provides a basis for the (semi-)automatic establishment and the verification of links in Unified Process development projects. We also define a first set of rules as a step towards efficient management of the links.
In the ongoing project, the rule set is being extended to establish a whole framework of methods and rules.",2007,0, 3476,Clustering Ensemble based on the Fuzzy KNN Algorithm,"Compared with a single clustering algorithm, clustering ensembles are deemed to be more robust and accurate, combining multiple partitions of the given data into a single clustering solution of better quality. In this paper, we propose a new clustering ensemble algorithm based on Fuzzy K Nearest Neighbor (FKNNCE), which generates the similarity matrix of the data to summarize the ensemble and then uses a hierarchical clustering algorithm to obtain the final partition, without specifying the number of clusters in advance. After discussing some related topics, the paper uses real data and constructs an intrusion detection model to evaluate the performance of the clustering ensemble algorithm and to compare it with other algorithms. Experimental results demonstrate the effectiveness of the proposed algorithm.",2007,0, 3477,On the Customization of Components: A Rule-Based Approach,"Realizing the quality-of-service (QoS) requirements for a software system continues to be an important and challenging issue in software engineering. A software system may need to be updated or reconfigured to provide modified QoS capabilities. These changes can occur at development time or at runtime. In component-based software engineering, software systems are built by composing components. When the QoS requirements change, there is a need to reconfigure the components. Unfortunately, many components are not designed to be reconfigurable, especially in terms of QoS capabilities. It is often labor-intensive and error-prone work to reconfigure the components, as developers need to manually check and modify the source code. Furthermore, the work requires experienced senior developers, which makes it costly. These limitations motivate the development of a new rule-based semiautomated component parameterization technique that performs code analysis to identify and adapt parameters and changes components into reconfigurable ones. Compared with a number of alternative QoS adaptation approaches, the proposed rule-based technique has advantages in terms of flexibility, extensibility, and efficiency. The adapted components support the reconfiguration of potential QoS trade-offs among time, space, quality, and so forth. The proposed rule-based technique has been successfully applied to two substantial libraries of components. The F-measure or balanced F-score results for the validation are excellent, that is, 94 percent.",2007,0, 3478,"Comments on ""Data Mining Static Code Attributes to Learn Defect Predictors""","In this correspondence, we point out a discrepancy in a recent paper, ""Data Mining Static Code Attributes to Learn Defect Predictors,"" that was published in this journal. Because of the small percentage of defective modules, using probability of detection (pd) and probability of false alarm (pf) as accuracy measures may lead to impractical prediction models.",2007,0, 3479,"Problems with Precision: A Response to ""Comments on 'Data Mining Static Code Attributes to Learn Defect Predictors'""","Zhang and Zhang argue that predictors are useless unless they have high precision and recall. We have a different view, for two reasons. First, for SE data sets with large neg/pos ratios, it is often required to lower precision to achieve higher recall.
Second, there are many domains where low precision detectors are useful.",2007,0, 3480,2007 IEEE International Conference on Communications,"The following topics are dealt with: communications QoS, reliability and performance modelling; wireless mesh, ad hoc and cellular networks; network routing; network congestion control; QoS control in mobile networks; wireless local area networks; network recovery and protection; network resource management; network traffic engineering; QoS techniques for video and voice; LDPC codes; cooperative networks; MIMO communications; OFDM modulation; turbo coding; iterative decoding; ad hoc and peer-to-peer network security; authentication and biometric security; distributed security and denial-of-service attacks; firewalls; intrusion detection and avoidance; secure communication protocols for ad hoc networks; Web security; multimedia communications; home services; peer-to-peer streaming; overlay networks; wireless multimedia; error control and recovery; voice over IP; Internet architecture; service-aware networks; context and revenue management; passive optical networks; optical switching; WDM networks; signal processing for wireless systems; synchronization; channel estimation and equalization; cross-layer design; medium access Control; CDMA; space-time coding; satellite communications; transport protocols; software defined radio; cognitive radio and smart antenna technologies.",2007,0, 3481,A Measurement Based Dynamic Policy for Switched Processing Systems,"Switched processing systems (SPS) represent a canonical model for many areas of applications of communication, computer and manufacturing systems. They are characterized by flexible, interdependent service capabilities and multiple classes of job traffic flows. Recently, increased attention has been paid to the issue of improving quality of service (QoS) performance in terms of delays and backlogs of the associated scheduling policies, rather than simply maximizing the system's throughput. In this study, we investigate a measurement based dynamic service allocation policy that significantly improves performance with respect to delay metrics. The proposed policy solves a linear program at selected points in time that are in turn determined by a monitoring strategy that detects 'significant' changes in the intensities of the input processes. The proposed strategy is illustrated on a small SPS subject to different types of input traffic.",2007,0, 3482,Flow Management for SIP Application Servers,"In this paper, we study how to build a front-end flow management system for SIP application servers. This is challenging because of some special characteristics of SIP and SIP applications. (1) SIP flows are well organized into sessions. The session structure should be respected when managing SIP flows. (2) SIP has been adopted by telecom industry, whose applications have more critical QoS requirements than WEB ones. (3) SIP message retransmissions exacerbate the overload situation in case of load bursts; moreover, they may trigger persistent retransmission phenomenon, which retains large response times even after the original burst disappears. To address the combination of these challenges, we propose a novel front-end SIP flow management system FEFM. FEFM integrates concurrency limiting, message scheduling and admission control to achieve overload protection and performance management. 
It also devises some techniques such as response time prediction, twin-queue scheduling, and retransmission removal to accomplish SLA-oriented improvement, reduce the call rejection rate and eliminate the persistent retransmission phenomenon. Intensive experiments show that FEFM achieves overload protection in burst periods, improves performance significantly, and has the ability to balance different tradeoffs between throughput and SLA satisfaction.",2007,0, 3483,The Study of Noise-rejection by Using Pulse Time-delay Identification Method and the Analysis of PD Data Obtained in the Field,"Based on the mechanism of partial discharges occurring in the stator winding insulations of generators, an on-line PD measurement system with double sensors is developed in this paper. After comparing the different types of measurement, the noise suppression method focuses on the technique of ""pulse time-delay identification"", which is based on installing double sensors for every phase. The principle of the method is introduced in this paper. The software program, designed in LabVIEW, provides the power frequency graph, N-Q and N-Φ two-dimensional diagrams, a three-dimensional diagram, and trend diagrams of the maximum PD quantity and NQN for further analysis of the PD data. In addition, the data obtained in the field is analyzed here, including an insulation fault successfully detected in a plant.",2007,0, 3484,Defining and Detecting Bad Smells of Aspect-Oriented Software,"Bad smells are software patterns that are generally associated with bad design and bad programming. They can be removed by using the refactoring technique, which improves the quality of software. Aspect-oriented (AO) software development, which involves new notions and different ways of thinking about developing software and solving the crosscutting problem, may introduce different kinds of design flaws. Defining bad smells hidden in AO software in order to point out bad design and bad programming is therefore necessary. This paper proposes the definition of new AO bad smells. Moreover, appropriate existing AO refactoring methods for eliminating each bad smell are presented. The proposed bad smells are validated. The results show that after removing the bad smells by using appropriate refactoring methods, the software quality is increased.",2007,0, 3485,Analysis of Conflicts among Non-Functional Requirements Using Integrated Analysis of Functional and Non-Functional Requirements,"Conflicts among non-functional requirements are often identified subjectively and there is a lack of conflict analysis in practice. Current approaches fail to capture the nature of conflicts among non-functional requirements, which makes the task of conflict resolution difficult. In this paper, a framework has been provided for the analysis of conflicts among non-functional requirements using the integrated analysis of functional and non-functional requirements. The framework identifies and analyzes conflicts based on relationships among quality attributes, functionalities and constraints. Since poorly structured requirement statements usually result in confusing specifications, we also developed canonical forms for representing non-functional requirements. The output of our framework is a conflict hierarchy that refines conflicts among non-functional requirements level by level.
Finally, a case study is provided in which the proposed framework was applied to analyze and detect conflicts among the non-functional requirements of a search engine.",2007,0, 3486,Measuring and Assessing Software Reliability Growth through Simulation-Based Approaches,"In the past decade, several rate-based simulation approaches were proposed to predict the software failure process. However, most of them did not take the number of available debuggers into consideration, and this may not be reasonable. In practice, the number of debuggers is always limited and controlled. If all debuggers or developers are busy, newly detected faults have to wait (possibly for a long time) to be corrected and removed. In addition, practical experience also shows that the fault removal time is non-negligible and the number of removed faults generally lags behind the total number of detected faults. Based on these facts, in this paper, we apply queueing theory to describe and explain the possible debugging behavior during software development. Two simulation procedures are developed based on G/G/∞ and G/G/m queueing models. The proposed methods are illustrated with real software failure data. Experimental results are analyzed and discussed in detail. The results we obtained will greatly help in understanding the influence of the size of debugging teams on software failure correction activities and other related reliability assessments.",2007,0, 3487,Test Case Prioritization for Black Box Testing,"Test case prioritization is an effective and practical technique that helps to increase the rate of regression fault detection when software evolves. Numerous techniques have been reported in the literature on prioritizing test cases for regression testing. However, existing prioritization techniques implicitly assume that source or binary code is available when regression testing is performed, and therefore cannot be implemented when there is no program source or binary code to be analyzed. In this paper, we present a new technique for black-box regression testing, and we performed an experiment to evaluate it. Our results show that the new technique helps to improve the effectiveness of fault detection when performing regression testing in a black-box environment.",2007,0, 3488,Bivariate Software Fault-Detection Models,"In this paper, we develop bivariate software fault-detection models with two time measures, calendar time (days) and test-execution time (CPU time), and incorporate both of them to assess quantitative software reliability with higher accuracy. The resulting stochastic models are characterized by a simple binomial process and the bivariate order statistics of software fault-detection times with different time scales.",2007,0, 3489,A Control-Theoretic Approach to QoS Adaptation in Data Stream Management Systems Design,"In data stream management systems (DSMSs), request handling has to meet various QoS requirements. A traditional open-loop DSMS ignores system status information and run-time history in decision making and is unable to achieve stable QoS performance over run-time as the workload or environment changes. This paper discusses the design of a closed-loop DSMS that applies adaptive control from control theory to adapt itself to the changing environment and workloads in order to optimize the QoS performance over run-time.
The proposed adaptive DSMS is built on system identification and successive approximation knowledge from control theory and uses a least-squares approach for parameter estimation. Experimental data show that the proposed adaptive DSMS outperforms the traditional open-loop DSMS in overall QoS across various network conditions and workload settings.",2007,0, 3490,"Quality Metrics for Internet Applications: Developing ""New"" from ""Old""","This discussion concerns 'metrics'. More specifically, we discuss quantitative metrics for evaluating Internet applications: what should we quantify, monitor and analyse in order to characterise, evaluate and develop Internet applications based on reusing existing Internet applications, which are widely available on the Internet. Due to the distinctive evolutionary nature of Internet applications, assessing software quality will provide ease and higher accuracy for Web developers. However, there is a great gap between the rapid development of Internet applications and the slow speed of developing corresponding metric measures. To tackle this issue, we look into measuring the quality of Internet applications and enabling Web developers to enhance the quality of their programs and identify reusable components from Internet-based resources.",2007,0, 3491,Redefining Performance Evaluation Tools for Real-Time QRS Complex Classification Systems,"In a heartbeat classification procedure, the detection of QRS complex waveforms is necessary. In many studies, this heartbeat extraction function is not considered: the inputs of the classifier are assumed to be correctly identified. This communication aims to redefine classical performance evaluation tools in entire QRS complex classification systems and to evaluate the effects induced by QRS detection errors on the performance of heartbeat classification processing (normal versus abnormal). Performance statistics are given and discussed considering the MIT/BIH database records that are replayed on a real-time classification system composed of the classical detector proposed by Hamilton and Tompkins, followed by a neural-network classifier. This study shows that a classification accuracy of 96.72% falls to 94.90% when a 1.78% error rate is introduced in the detector. This corresponds to an increase of about 50% in misclassifications.",2007,0, 3492,Failure and Coverage Factors Based Markoff Models: A New Approach for Improving the Dependability Estimation in Complex Fault Tolerant Systems Exposed to SEUs,"Dependability estimation of a fault tolerant computer system (FTCS) perturbed by single event upsets (SEUs) requires first obtaining the probability distribution functions for the time to recovery (TTR) and the time to failure (TTF) random variables. The application cross section (σAP) approach does not directly give all the required information. This problem can be solved by means of the construction of suitable Markoff models. In this paper, a new method for constructing such models based on the system's failure and coverage factors is presented. Analytical dependability estimation is consistent with fault injection experiments performed in a fault tolerant operating system developed for a complex, real-time data processing system.",2007,0, 3493,Automatic Synthesis of Fault Detection Modules for Mobile Robots,"In this paper, we present a new approach for the automatic synthesis of fault detection modules for autonomous mobile robots.
The method relies on the fact that hardware faults typically change the flow of sensory perceptions received by the robot and the subsequent behavior of the control program. We collect data from three experiments with real robots. In each experiment, we record all sensory inputs from the robots while they are operating normally and after software-simulated faults have been injected. We use back-propagation neural networks to synthesize task-dependent fault detection modules. The performance of the modules is evaluated in terms of false positives and latency.",2007,0, 3494,How Business Goals Drive Architectural Design,"This paper illustrates how business goals can significantly impact a software management system's architecture without necessarily affecting its functionality. These goals include 1) supporting hardware devices from different manufacturers, 2) considering language, culture, and regulations of different markets, 3) assessing tradeoffs and risks to determine how the product should support these goals, 4) refining goals such as scaling back on intended markets, depending on the company's comfort level with the tradeoffs and risks. More importantly, these business goals correspond to quality attributes the end system must exhibit. The system must be modifiable to support a multitude of hardware devices and consider different languages and cultures. Supporting different regulations in different geographic markets requires the system to respond to life-threatening events in a timely manner, a performance requirement.",2007,0, 3495,Scrum and CMMI Level 5: The Magic Potion for Code Warriors,Projects combining agile methods with CMMI are more successful in producing higher quality software that more effectively meets customer needs at a faster pace. Systematic Software Engineering works at CMMI level 5 and uses Lean Software Development as a driver for optimizing software processes. Early pilot projects at Systematic showed productivity on Scrum teams almost twice that of traditional teams. Other projects demonstrated that a story-based test-driven approach to software development reduced defects found during final test by 40%. We assert that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them.,2007,0, 3496,Detecting and Reducing Partition Nodes in Limited-routing-hop Overlay Networks,"Many Internet applications, such as resource sharing, collaborative computing, and so on, use overlay networks as their basic facilities. Considering the communication cost, most overlay networks set limited hops for routing messages, so as to restrain routing within a certain scope. In this paper we describe partition nodes in such limited-routing-hop overlay networks, whose failure may cause the overlay topology to be partitioned and thus seriously affect its performance. We propose a proactive, distributed method to detect partition nodes and then reduce them by changing them into normal nodes. The results of simulations on both real-trace and generated topologies, scaling from 500 to 10000 nodes, show that our method can effectively detect and reduce partition nodes and improve the connectivity and fault tolerance of overlay networks.",2007,0, 3497,Quality of Service of Grid Computing: Resource Sharing,"Rapid advancement of communication technology has changed the landscape of computing.
New models of computing, such as business-on-demand, Web services, peer-to-peer networks, and grid computing have emerged to harness distributed computing and network resources to provide powerful services. The non-deterministic characteristic of the resource availability in these new computing platforms raises an outstanding challenge: how to support quality of service (QoS) to meet a user's demand? In this study, we conduct a thorough study of QoS of distributed computing, especially on grid computing where the requirement of distributed sharing and coordination goes to the extreme. We start at QoS policies, and then focus on technical issues of the enforcement of the policies and performance optimization under each policy. This study provides a classification of existing software system based on their underlying policies, a systematic understanding of QoS, and a framework for QoS of grid computing.",2007,0, 3498,Train Management Platform for advanced maintenance of Passenger Information Systems,"A Passenger Information System provides information about the train's running to the passengers on-board. Many nodes are present in this system, located all along the train. To guarantee that the information provided is correct and to perform predictive maintenance on this system in order to reduce costs, an intelligent distributed software application is presented in this paper. This proposed application eases the tasks of fault detection and correlation and can lead to an improvement in performance by aiding in the task of preventive maintenance.",2007,0, 3499,Building a Novel GP-Based Software Quality Classifier Using Multiple Validation Datasets,"One problem associated with software quality classification (SQC) modeling is that the historical metric dataset obtained from a single software project are often not adequate to build robust and accurate models. To address this issue, multiple datasets obtained from different software projects are used for SQC modeling in recent research works. Our previous study has demonstrated that using multiple datasets for validation can achieve robust genetic programming (GP)-based SQC models. This paper further investigates the effectiveness of using multiple validation datasets. Moreover, a novel GP-based classifier consisting of training, multiple-dataset validation, and voting phases, is proposed. The experiments are carried out on seven NASA software projects. The results are compared with the results achieved by seventeen other data mining techniques. The comparisons demonstrate that the performance of our approach is significantly better by using multiple datasets from different software projects with similar reliability goals.",2007,0, 3500,An Empirical Study of the Classification Performance of Learners on Imbalanced and Noisy Software Quality Data,"In the domain of software quality classification, data mining techniques are used to construct models (learners) for identifying software modules that are most likely to be fault-prone. The performance of these models, however, can be negatively affected by class imbalance and noise. Data sampling techniques have been proposed to alleviate the problem of class imbalance, but the impact of data quality on these techniques has not been adequately addressed. We examine the combined effects of noise and imbalance on classification performance when seven commonly-used sampling techniques are applied to software quality measurement data. 
Our results show that some sampling techniques are more robust in the presence of noise than others. Further, sampling techniques are affected by noise differently given different levels of imbalance.",2007,0, 3501,A practical method for the software fault-prediction,"In this paper, a novel machine learning method, SimBoost, is proposed to handle the software fault-prediction problem when highly skewed datasets are used. Although the method, as shown by empirical results, can make the datasets much more balanced, the accuracy of the prediction is still not satisfactory. Therefore, a fuzzy-based representation of the software module fault state is presented instead of the original faulty/non-faulty one. Several experiments were conducted using datasets from the NASA Metrics Data Program. A discussion of the experimental results is provided.",2007,0, 3502,Software Defects Prediction using Operating Characteristic Curves,We present a software defect prediction model using operating characteristic curves. The main idea behind our proposed technique is to use geometric insight to help construct an efficient and fast prediction method to accurately predict the cumulative number of failures at any given stage during the software development process. Our predictive approach uses the number of detected faults instead of the software failure-occurrence time in the testing phase. Experimental results illustrate the effectiveness and the much improved performance of the proposed method in comparison with the Bayesian prediction approaches.,2007,0, 3503,An Improved Approach to Passive Testing of FSM-based Systems,"Fault detection is a fundamental part of passive testing which determines whether a system under test (SUT) is faulty by observing the input/output behavior of the SUT without interfering with its normal operations. In this paper, we propose a new approach to finite state machine (FSM)-based passive fault detection which improves the performance of the approach in [4] and gathers more information during testing. The results of theoretical and experimental evaluations are reported.",2007,0, 3504,Deriving Queuing Network Model for UML for Software Performance Prediction,"It is an important issue for software architects to estimate the performance of software in the early stages of the development process due to the need to verify QoS. A queueing network model is a very useful tool for analyzing the performance of a system from an abstract model. In this paper, we propose a transformation technique from UML into a queueing network model. This approach avoids the need for a prototype implementation since we can determine the overall form of the performance equation from the architectural design description. We demonstrate the accuracy of the derived queueing network model, which is approximately 85 percent, through a ubiquitous commerce system that extends the mobile commerce system developed in our prior work.",2007,0, 3505,Anomaly-based Fault Detection System in Distributed System,"One of the important design criteria for distributed systems and their applications is their reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, dependency and the asynchronous interactions between the components that include hardware resources (computers, servers, network devices), and software (application services, middleware, web services, etc.) makes fault detection and tolerance a challenging research problem.
In this paper, we present an innovative approach based on statistical and data mining techniques to detect faults (hardware or software) and also identify the source of the fault. In our approach, we monitor and analyze in realtime all the interactions between all the components of a distributed system. We used data mining and supervised learning techniques to obtain the rules that can accurately model the normal interactions among these components. Our anomaly analysis engine will immediately produce an alert whenever one or more of the interaction rules that capture normal operations is violated due to a software or hardware failure. We evaluate the effectiveness of our approach and its performance to detect software faults that we inject asynchronously, and compare the results for different noise level.",2007,0, 3506,Exploring Genetic Programming and Boosting Techniques to Model Software Reliability,"Software reliability models are used to estimate the probability that a software fails at a given time. They are fundamental to plan test activities, and to ensure the quality of the software being developed. Each project has a different reliability growth behavior, and although several different models have been proposed to estimate the reliability growth, none has proven to perform well considering different project characteristics. Because of this, some authors have introduced the use of Machine Learning techniques, such as neural networks, to obtain software reliability models. Neural network-based models, however, are not easily interpreted, and other techniques could be explored. In this paper, we explore an approach based on genetic programming, and also propose the use of boosting techniques to improve performance. We conduct experiments with reliability models based on time, and on test coverage. The obtained results show some advantages of the introduced approach. The models adapt better to the reliability curve, and can be used in projects with different characteristics.",2007,0, 3507,Detecting Primary Transmitters via Cooperation and Memory in Cognitive Radio,"Effective detection of the activity of primary transmitters is known to be one of the major challenges to the implementation of cognitive radio systems. In this paper, we investigate the use of cooperation and memory (using tools from change detection) as means to enhance primary detection. We focus on the simple case of two secondary users and one primary source. Numerical results show the relevant performance benefits of both cooperation and memory.",2007,0, 3508,On the Evaluation of Header Compression for VoIP Traffic over DVB-RCS,"The transition towards the 'All IP' environment was long ago foreseen, but the actual shift towards this goal did not happen until recently. This inevitable convergence is naturally coupled with a number of significant advantages, but also introduces some problems that were not present before. One of these, which is the focus of this paper, is the efficient transport of voice traffic over IP (VoIP). Since voice streams consist of numerous but small packets, the overhead that is caused by the RTP/UDP/IP headers is comparable - if not higher than the capacity required for the actual payload. This handicap becomes even more severe in the case of radio communications, where the scarcity of bandwidth demands an as efficient as possible utilization. The above facts make the introduction of header compression algorithms necessary, in order to mitigate the problem of overwhelming overhead. 
This paper describes the design and implementation of a software platform that aims to evaluate the performance of a well-known header compression scheme within the context of DVB-RCS. More specifically, the focus of this work is to assess quantitatively the gains in capacity and the degradation of quality of service when the Compressed RTP header compression scheme is employed in this satellite environment. The presented testbed models the impairments of the satellite channel and applies the header compression mechanism to real VoIP traffic.",2007,0, 3509,Identification Of Software Performance Bottleneck Components In Reuse based Software Products With The Application Of Acquaintanceship Graphs,Component-based software engineering provides an opportunity for better quality and increased productivity in software development by using reusable software components [9]. Performance is also a make-or-break quality for software. The systematic application of software performance engineering techniques throughout the development process can help to identify design alternatives that preserve desirable qualities such as extensibility and reusability while meeting performance objectives [1]. Implementing the effective performance-based management of software-intensive projects has proven to be a challenging task nowadays. This paper aims at identifying the major reasons for software performance failures in terms of the component communication path. My work focused on one of the applications of discrete mathematics: graph theory. This study makes an attempt to rank components from the most used to the least used with the help of acquaintanceship graphs and also to find the shortest communication path between any two components with the help of the adjacency matrix. Experiments are conducted with four components and the results show a promising approach towards component utilization and bottleneck determination that identifies the major areas of component communication to concentrate on for success in the cost-effective development of high-performance software.,2007,0, 3510,Redundant Coupling Detection Using Dynamic Dependence Analysis,"Most software engineers realize the importance of avoiding coupling between program modules in order to achieve the advantages of module reusability. However, most of them introduce module coupling in many cases without necessity. Many techniques have been proposed to detect module coupling in computer programs. However, developers desire to automatically identify unnecessary module coupling that can be eliminated with minimum effort, which we refer to as redundant coupling. Redundant coupling is module coupling that does not contribute to the output of the program. In this paper we introduce an automated approach that uses dynamic dependence analysis to detect redundant coupling between program modules. This technique guides developers to avoid redundant coupling when testing their programs.",2007,0, 3511,Biosensor for the Detection of Specific DNA Sequences by Impedance Spectroscopy,"In the agro-food industry there is a great need for rapid, sensitive and label-free biosensors for food quality control, for example the detection of pathogenic agents such as listeria or salmonella in dairy food samples. We are currently working on the development of a real-time monitoring system based on an interdigitated electrode DNA biochip for the detection of specific DNA target sequences.
The biosensor contains four active areas on which DNA probes of various kinds can be immobilized, and is integrated into a microfluidic system. Impedimetric measurements of matching and non-matching synthetic target DNA sequences have been performed.",2007,0, 3512,Quality-of-Service Management in ASEMA System for Unicast MPEG4 Video Transmissions,"The Active Service Environment Management system (ASEMA) provides the best possible multimedia service experience to the end user. One of the main aspects of the ASEMA is to provide a variable live streaming service to its end users. The ASEMA implements this through dynamic changes in the properties of the video stream delivered to the end user's device. The live video stream used in the ASEMA is delivered to the end users in MPEG-4 video format. The video itself is streamed on top of the Real-time Transport Protocol (RTP), and the parameters are negotiated with the Real Time Streaming Protocol (RTSP) before the streaming commences. The research problem is to provide an easy solution for QoS measurements in networks that are closed in nature. The research is based on the constructive method of the related publications, and the results are deduced from the constructed quality of service management of the ASEMA system. The management is founded on measuring the receiving and sending bit rate values at both ends, the sending end and the receiving end. If a fluctuation in the values is detected, the video stream's properties are changed dynamically.",2007,0, 3513,Early Software Product Improvement with Sequential Inspection Sessions: An Empirical Investigation of Inspector Capability and Learning Effects,"Software inspection facilitates product improvement in early phases of software development by detecting defects in various types of documents, e.g., requirements and design specifications. Empirical study reports show that usage-based reading (UBR) techniques can focus inspectors on the most important use cases. However, the impact of inspector qualification and learning effects in the context of inspecting a set of documents in several sessions is still not well understood. This paper contributes a model for investigating the impact of inspector capability and learning effects on inspection effectiveness and efficiency in a large-scale empirical study in an academic context. The main findings of the study are that (a) the inspection technique UBR better supported the performance of inspectors with lower experience in sequential inspection cycles (a learning effect) and (b) when inspecting objects of similar complexity, significant improvements in defect detection performance could be measured.",2007,0, 3514,A Two-Step Model for Defect Density Estimation,"Identifying and locating defects in software projects is a difficult task. Further, estimating the density of defects is more difficult. Measuring software in a continuous and disciplined manner brings many advantages such as accurate estimation of project costs and schedules, and improving product and process qualities. Detailed analysis of software metric data gives significant clues about the locations and magnitude of possible defects in a program. The aim of this research is to establish an improved method for predicting software quality via identifying the defect density of fault-prone modules using machine-learning techniques. We constructed a two-step model that predicts defect density by taking module metric data into consideration.
Our proposed model utilizes classification and regression type learning methods consecutively. The results of the experiments on public data sets show that the two-step model enhances the overall performance measures as compared to applying only regression methods.",2007,0, 3515,Distance Estimation of Switched Capacitor Banks in Utility Distribution Feeders,"This paper proposes an efficient and accurate technique for estimating the location of an energized capacitor bank downline from a monitoring equipment. The proposed technique is based on fundamental circuit theory and works with existing capacitor switching transient data of radial power distribution systems. Once the direction of a switched capacitor bank is found to be downstream from a power quality monitoring point, the feeder line reactance is estimated using the initial voltage change. The efficacy of the proposed technique is demonstrated with system data from IEEE test feeders modeled in a commercial time-domain modeling software package.",2007,0, 3516,Enlarging Instruction Streams,"Web applications are widely adopted and their correct functioning is mission critical for many businesses. At the same time, Web applications tend to be error prone and implementation vulnerabilities are readily and commonly exploited by attackers. The design of countermeasures that detect or prevent such vulnerabilities or protect against their exploitation is an important research challenge for the fields of software engineering and security engineering. In this paper, we focus on one specific type of implementation vulnerability, namely, broken dependencies on session data. This vulnerability can lead to a variety of erroneous behavior at runtime and can easily be triggered by a malicious user by applying attack techniques such as forceful browsing. This paper shows how to guarantee the absence of runtime errors due to broken dependencies on session data in Web applications. The proposed solution combines development-time program annotation, static verification, and runtime checking to provably protect against broken data dependencies. We have developed a prototype implementation of our approach, building on the JML annotation language and the existing static verification tool ESC/Java2, and we successfully applied our approach to a representative J2EE-based e-commerce application. We show that the annotation overhead is very small, that the performance of the fully automatic static verification is acceptable, and that the performance overhead of the runtime checking is limited.",2007,0, 3517,Failure Detection in Large-Scale Internet Services by Principal Subspace Mapping,"Fast and accurate failure detection is becoming essential in managing large-scale Internet services. This paper proposes a novel detection approach based on the subspace mapping between the system inputs and internal measurements. By exploring these contextual dependencies, our detector can initiate repair actions accurately, increasing the availability of the system. Although a classical statistical method, the canonical correlation analysis (CCA), is presented in the paper to achieve subspace mapping, we also propose a more advanced technique, the principal canonical correlation analysis (PCCA), to improve the performance of the CCA-based detector. PCCA extracts a principal subspace from internal measurements that is not only highly correlated with the inputs but also a significant representative of the original measurements. 
Experimental results on a Java 2 Platform, Enterprise Edition (J2EE)-based Web application demonstrate that this property of PCCA is especially beneficial to failure detection tasks.",2007,0, 3518,Synchronization Techniques for Power Quality Instruments,"The measurement of voltage characteristics in power systems requires the accurate estimation of the power supply frequency and signal synchronization, even in the presence of disturbances. The authors developed and tested two innovative techniques for instrument synchronization. The first is based on signal spectral analysis techniques performed by means of the chirp-transform analysis. The second is a software phase-locked loop (PLL) based on a time-domain coordinate transformation and an innovative phase-detection technique. To evaluate how synchronization techniques are adversely affected by the application of a disturbing influence, experimental tests were carried out, taking into account the requirements of the standards. The proposed techniques were compared with a standard hardware solution. In this paper, the proposed techniques are described, the experimental results are presented, and the accuracy specifications are discussed.",2007,0, 3519,Fault Tolerance of Multiprocessor-Structured Control System by Hardware and Software Reconfiguration,"Since the traditional redundancy for fault tolerance of a control system is complex in structure and expensive, a novel method for fault tolerance of a multiprocessor-structured control system by hardware and software reconfiguration is presented. Based on the fact that the control system is composed of several processors, this method performs fault detection by self-diagnosis implemented in each processor and by validation of the information exchanged between the processors, and tolerates faults by hardware and software reconfiguration carried out by a monitoring and configuring device. The security strategy and operation mode are presented. The principle of the monitoring and configuring device is discussed in detail. The method was validated on a DC motor control system with satisfactory results.",2007,0, 3520,Node-Replacement Policies to Maintain Threshold-Coverage in Wireless Sensor Networks,"With the rapid deployment of wireless sensor networks, there are several new sensing applications with specific requirements. Specifically, target tracking applications are fundamentally concerned with the area of coverage across a sensing site in order to accurately track the target. We consider the problem of maintaining a minimum threshold-coverage in a wireless sensor network, while maximizing network lifetime and minimizing additional resources. We assume that the network has failed when the sensing coverage falls below the minimum threshold-coverage. We develop three node-replacement policies to maintain threshold-coverage in wireless sensor networks. These policies assess the candidacy of each failed sensor node for replacement. Based on different performance criteria, every time a sensor node fails in the network, our replacement policies either replace it with a new sensor or ignore the failure event. The node-replacement policies replace a failed node according to a node weight. The node weight is assigned based on one of the following parameters: cumulative reduction of sensing coverage, amount of energy increase per node, and local reduction of sensing coverage. We also implement a first-fail-first-replace policy and a no-replacement policy to compare the performance results.
We evaluate the different node-replacement policies through extensive simulations. Our results show that given a fixed number of replacement sensor nodes, the node-replacement policies significantly increase the network lifetime and the quality of coverage, while keeping the sensing coverage above a pre-set threshold.",2007,0, 3521,Improving the Diagnosis of Ischemic CVA's through CT Scan with Neural Networks,"Technological and computing evolution has promoted new opportunities to improve the quality of life, in particular the quality of diagnostic evaluations. Computerized tomography is one of the diagnostic imaging modalities that has benefited most from technological improvements. Because of that, and due to the quality of the diagnosis produced, it is one of the most widely used in clinical applications. The ischaemic cerebral vascular accident (ICVA) is the pathology that justifies the frequent use of computerized tomography. The interest in this pathology, and in general in encephalon image analysis as a preventive diagnosis, is mainly due to the frequent occurrence of ICVAs in developed countries and their socio-economic impact. In this sense, we propose to evaluate the ability of artificial neural networks (ANNs) to automatically identify ICVAs by means of tissue density images obtained by computerised tomography. This work employed cranioencephalic computerised tomography exams and their respective medical reports to train ANN classifiers. Features extracted from the images were used as inputs to the classifiers. Once the ANNs were trained, the neural classifiers were tested with data never seen by the network. At this stage we may conclude that ANNs may significantly contribute as a computerised tomography diagnostic aid for ICVAs, since among the test cases the automatic identification of ischaemic lesions has been performed with no false negatives and very few false positives.",2007,0, 3522,An Applied Study of Destructive Measurement System Analysis,"Measurement system analysis (MSA) is used to assess the ability of a measurement system to detect meaningful differences in process variables. For a destructive measurement system, there are two methods to design and analyze it based on the homogeneous batch size: nested MSA and crossed MSA. At the same time, the P/T ratio (""tolerance method"") is modified to measure the suitability of a destructive measurement system to make pass/fail decisions against a specification. Finally, the rip-off force testing system of chargers is assessed by crossed MSA and the modified P/T ratio. Some suggestions for destructive MSA are also presented.",2007,0, 3523,Application of Multi-function Measure Control Analyzing Device in Electric Distribution,"With the rapid development of computer technology, many kinds of monitoring and analysis devices, such as power quality monitors, fault recorders and fault detectors, can be integrated into one computer-based device for electric distribution. This is very convenient and can complete all of the required work in electric distribution. In this paper, a method to design such a multifunctional device, called the multi-function measure control analyzing device, for electric distribution is introduced. A new algorithm using the fractal number of phase voltages and phase currents is presented for power grid monitoring, fault detection and fault line selection. The apparatus we have developed operates very well in electric distribution.
The application results show that this system is very practical and can be used effectively in dynamic monitoring, fault diagnosis and trouble removal.",2007,0, 3524,Countermeasures Against Branch Target Buffer Attacks,"Branch Prediction Analysis has been recently proposed as an attack method to extract the key from software implementations of the RSA public key cryptographic algorithm. In this paper, we describe several solutions to protect against such an attack and analyze their impact on the execution time of the cryptographic algorithm. We show that the code transformations required for protection against branch target buffer attacks can be automated and impose only a negligible performance penalty.",2007,0, 3525,A Practical Model for Measuring Maintainability,"The amount of effort needed to maintain a software system is related to the technical quality of the source code of that system. The ISO 9126 model for software product quality recognizes maintainability as one of the 6 main characteristics of software product quality, with adaptability, changeability, stability, and testability as subcharacteristics of maintainability. Remarkably, ISO 9126 does not provide a consensual set of measures for estimating maintainability on the basis of a system's source code. On the other hand, the maintainability index has been proposed to calculate a single number that expresses the maintainability of a system. In this paper, we discuss several problems with the MI, and we identify a number of requirements to be fulfilled by a maintainability model to be usable in practice. We sketch a new maintainability model that alleviates most of these problems, and we discuss our experiences with using such a system for IT management consultancy activities.",2007,0, 3526,Paceline: Improving Single-Thread Performance in Nanoscale CMPs through Core Overclocking,"Under current worst-case design practices, manufacturers specify conservative values for processor frequencies in order to guarantee correctness. To recover some of the lost performance and improve single-thread performance, this paper presents the Paceline leader-checker microarchitecture. In Paceline, a leader core runs the thread at higher-than-rated frequency, while passing execution hints and prefetches to a safely-clocked checker core in the same chip multiprocessor. The checker redundantly executes the thread faster than without the leader, while checking the results to guarantee correctness. Leader and checker cores periodically swap functionality. The result is that the thread improves performance substantially without significantly increasing the power density or the hardware design complexity of the chip. By overclocking the leader by 30%, we estimate that Paceline improves SPECint and SPECfp performance by a geometric mean of 21% and 9%, respectively. Moreover, Paceline also provides tolerance to transient faults such as soft errors.",2007,0, 3527,CacheScouts: Fine-Grain Monitoring of Shared Caches in CMP Platforms,"As multi-core architectures flourish in the marketplace, multi-application workload scenarios (such as server consolidation) are growing rapidly. When running multiple applications simultaneously on a platform, it has been shown that contention for shared platform resources such as last-level cache can severely degrade performance and quality of service (QoS).
But today's platforms do not have the capability to monitor shared cache usage accurately and disambiguate its effects on the performance behavior of each individual application. In this paper, we investigate low-overhead mechanisms for fine-grain monitoring of the use of shared cache resources along three vectors: (a) occupancy - how much space is being used and by whom, (b) interference - how much contention is present and who is being affected and (c) sharing - how are threads cooperating. We propose the CacheScouts monitoring architecture consisting of novel tagging (software-guided monitoring IDs), and sampling mechanisms (set sampling) to achieve shared cache monitoring on a per-application basis at low overhead (<0.1%) and with very little loss of accuracy (<5%). We also present case studies to show how CacheScouts can be used by operating systems (OS) and virtual machine monitors (VMMs) for (a) characterizing execution profiles, (b) optimizing scheduling for performance management, (c) providing QoS and (d) metering for chargeback.",2007,0, 3528,JudoSTM: A Dynamic Binary-Rewriting Approach to Software Transactional Memory,"With the advent of chip-multiprocessors, we are faced with the challenge of parallelizing performance-critical software. Transactional memory (TM) has emerged as a promising programming model allowing programmers to focus on parallelism rather than maintaining correctness and avoiding deadlock. Many implementations of hardware, software, and hybrid support for TM have been proposed; of these, software-only implementations (STMs) are especially compelling since they can be used with current commodity hardware. However, in addition to higher overheads, many existing STM systems are limited to either managed languages or intrusive APIs. Furthermore, transactions in STMs cannot normally contain calls to unobservable code such as shared libraries or system calls. In this paper we present JudoSTM, a novel dynamic binary-rewriting approach to implementing STM that supports C and C++ code. Furthermore, by using value-based conflict detection, JudoSTM additionally supports the transactional execution of both (i) irreversible system calls and (ii) library functions that may contain locks. We significantly lower overhead through several novel optimizations that improve the quality of rewritten code and reduce the cost of conflict detection and buffering. We show that our approach performs comparably to Rochester's RSTM library-based implementation, demonstrating that a dynamic binary-rewriting approach to implementing STM is an interesting alternative.",2007,0, 3529,A Study of a Transactional Parallel Routing Algorithm,"Transactional memory proposes an alternative synchronization primitive to traditional locks. Its promise is to simplify the software development of multi-threaded applications while at the same time delivering the performance of parallel applications using (complex and error prone) fine grain locking. This study reports our experience implementing a realistic application using transactional memory (TM). The application is Lee's routing algorithm and was selected for its abundance of parallelism but difficulty of expressing it with locks. Each route between a source and a destination point in a grid can be considered a unit of parallelism. Starting from this simple approach, we evaluate the exploitable parallelism of a transactional parallel implementation and explore how it can be adapted to deliver better performance.
The adaptations do not introduce locks nor alter the essence of the implemented algorithm, but deliver up to 20 times more parallelism. The adaptations are derived from understanding the application itself and TM. The evaluation simulates an abstracted TM system and, thus, the results are independent of specific software or hardware TM implemented, and describe properties of the application.",2007,0, 3530,Subjective Evaluation of Techniques for Proper Name Pronunciation,"Automatic pronunciation of unknown words of English is a hard problem of great importance in speech technology. Proper names constitute an especially difficult class of words to pronounce because of their variable origin and uncertain degree of assimilation of foreign names to the conventions of the local speech community. In this paper, we compare four different methods of proper name pronunciation for English text-to-speech (TTS) synthesis. The first (intended to be used as the primary strategy in a practical TTS system) uses a set of manually supplied pronunciations, referred to as the ""dictionary"" pronunciations. The remainder are pronunciations obtained from three different data-driven approaches (intended as candidates for the back-up strategy in a real system) which use the dictionary of ""known"" proper names to infer pronunciations for unknown names. These are: pronunciation by analogy (PbA), a decision tree method (CART), and a table look-up method (TLU). To assess the acceptability of the pronunciations to potential users of a TTS system, subjective evaluation was carried out, in which 24 listeners rated 1200 synthesized pronunciations of 600 names by the four methods using a five-point (opinion score) scale. From over 50 000 proper names and their pronunciations, 150 so-called one-of-a-kind pronunciations were selected for each of the four methods (600 in total). A one-of-a-kind pronunciation is one for which one of the four methods disagrees with the other three methods, which agree among themselves. Listener opinions on one-of-a-kind pronunciations are argued to be a good measure of the overall quality of a particular method. For each one-of-a-kind pronunciation, there is a corresponding so-called rest pronunciation (another 600 in total), on which the remaining three competitor methods agree, for which listener opinions are taken to be indicative of the general quality of the competition. Nonparametric tests of significance of mean opinion scores show that the dictionary pronunciations are rated superior to the automatically inferred pronunciations with little difference between the data-driven methods for the one-of-a-kind pronunciations, but for the rest pronunciations there is suggestive evidence that PbA is superior to both CART and TLU, which perform at approximately the same level.",2007,0, 3531,Software-Based Failure Detection and Recovery in Programmable Network Interfaces,"Emerging network technologies have complex network interfaces that have renewed concerns about network reliability. In this paper, we present an effective low-overhead fault tolerance technique to recover from network interface failures. Failure detection is based on a software watchdog timer that detects network processor hangs and a self-testing scheme that detects interface failures other than processor hangs.
The proposed self-testing scheme achieves failure detection by periodically directing the control flow to go through only active software modules in order to detect errors that affect instructions in the local memory of the network interface. Our failure recovery is achieved by restoring the state of the network interface using a small backup copy containing just the right amount of information required for complete recovery. The paper shows how this technique can be made to minimize the performance impact to the host system and be completely transparent to the user.",2007,0, 3532,Optimization of Variability in Software Product Lines,"The widespread use of the product line approach allows companies to realize significant improvements in time-to-market, cost, productivity, and quality. However, a fundamental problem in software product line engineering is that a product line of industrial size can easily incorporate several thousand variable features. The complexity caused by this amount of variability makes variability management and product derivation tasks extremely difficult. To address this problem, we present a new method to optimize the variability provided in a software product line. Our method constructs a visualization that provides a classification of the usage of variable features in real products derived from the product line. We show how this classification can be used to derive restructuring strategies for simplifying the variability. The effectiveness of our work is demonstrated by presenting a case study of optimizing the variability in a large industrial software product line.",2007,0, 3533,Prediction of Self-Similar Traffic and its Application in Network Bandwidth Allocation,"In this paper, traffic prediction models based on chaos theory are studied and compared with FARIMA (fractional autoregressive integrated moving average) predictors by means of the adopted measurements of predictability. The traffic prediction results are applied in the bandwidth allocation of a mesh network, and the OPNET simulation platform is developed in order to compare their effects. The adopted predictability measurements are inadequate because although the chaotic predictor based on the Lyapunov exponent with worse values of the measurements can timely predict the burstiness of self-similar traffic, the FARIMA predictor forecasts the burstiness with a time-delay. The DAMA (dynamic assignment multiaccess) bandwidth allocation strategy combined with the chaotic predictor can provide better QoS performance.",2007,0, 3534,Adaptive Dual-Cross-Diamond-Hexagon Search Algorithm for Fast Block Motion Estimation,"Motion estimation (ME) is the cardinal technique of video coding standard, since it could significantly affect the output quality of a coding system. Based on the analyses of the limitation on the SAD matching and the allocation for encoding bits, this paper proposes the adaptive dual-cross-diamond-hexagon algorithm according to three main factors affecting the efficiency of ME, namely, the prediction of initial search center, matching criteria and searching strategy. Experimental results show that the proposed algorithm performs faster than MVFAST and PMVFAST, whereas similar prediction quality is still maintained.",2007,0, 3535,Usability Evaluation of B2C Web Site,"Web site usability is a critical metric for assessing the quality of the B2C Web site.
A measure of usability must not only provide a rating for a specific Web site, but should also illuminate the specific strengths and weaknesses of the site design. In this paper, the usability and usability evaluation of B2C Web sites are described. A comprehensive set of usability guidelines developed by Microsoft (MUG) is revised and utilized. The indexes and sub-indexes comprising these guidelines are presented first. The weights of the indexes and sub-indexes are determined by AHP (Analytical Hierarchy Process). Based on the investigation data, a mathematical algorithm is proposed to calculate the grade of each B2C Web site. The illustrated example shows that the evaluation approach of this paper is very effective.",2007,0, 3536,Functional Test-Case Generation by a Control Transaction Graph for TLM Verification,"Transaction level modeling allows exploring several SoC design architectures leading to better performance and easier verification of the final product. Test cases play an important role in determining the quality of a design. Inadequate test-cases may cause bugs to remain after verification. Although TLM expedites the verification of a hardware design, the problem of having high coverage test cases remains unsettled at this level of abstraction. In this paper, first, in order to generate test-cases for a TL model we present a Control-Transaction Graph (CTG) describing the behavior of a TL Model. A Control Graph is a control flow graph of a module in the design and Transactions represent the interactions such as synchronization between the modules. Second, we define dependent paths (DePaths) on the CTG as test-cases for a transaction level model. The generated DePaths can find some communication errors in simulation and detect unreachable statements concerning interactions. We also give coverage metrics for a TL model to measure the quality of the generated test-cases. Finally, we apply our method on the SystemC model of AMBA-AHB bus as a case study and generate test-cases based on the CTG of this model.",2007,0, 3537,Functional Verification of RTL Designs driven by Mutation Testing metrics,"The level of confidence in a VHDL description directly depends on the quality of its verification. This quality can be evaluated by mutation-based testing, but the improvement of this quality requires tremendous efforts. In this paper, we propose a new approach that both qualifies and improves the functional verification process. First, we qualify test cases thanks to the mutation testing metrics: faults are injected in the design under verification (DUV) (making DUV's mutants) to check the capacity of test cases to detect these mutants. Then, a heuristic is used to automatically improve IPs' validation data. Experimental results obtained on RTL descriptions from the ITC'99 benchmark show how efficient our approach is.",2007,0, 3538,An Object Oriented Complexity Metric Based on Cognitive Weights,"Complexity in general is defined as ""the degree to which a system or component has a design or implementation that is difficult to understand and verify "". Complexity metrics are used to predict critical information about reliability and maintainability of software systems. Object oriented software development requires a different approach to software metrics. In this paper, an attempt has been made to propose a metric for object-oriented code, which calculates the complexity of a class at method level.
The proposed measure considers the internal architecture of the class, subclass, and member functions, while other proposed metrics for object oriented programming do not. An attempt has also been made to evaluate and validate the proposed measure in terms of Weyuker's properties and against the principles of measurement theory. It has been found that seven of nine Weyuker's properties have been satisfied by the proposed measure. It also satisfies most of the parameters required from the measurement theory perspective, hence it is established as a well-structured measure.",2007,0, 3539,First International Symposium on Empirical Software Engineering and Measurement-Title,The following topics are dealt with: empirical software engineering; software measurement; experience management; software testing; software validation; software effort estimation; software quality; software defect prediction; software metrics; software data mining; software architecture.,2007,0, 3540,A Critical Analysis of Empirical Research in Software Testing,"In the foreseeable future, software testing will remain one of the best tools we have at our disposal to ensure software dependability. Empirical studies are crucial to software testing research in order to compare and improve software testing techniques and practices. In fact, there is no other way to assess the cost-effectiveness of testing techniques, since all of them are, to various extents, based on heuristics and simplifying assumptions. However, when empirically studying the cost and fault-detection rates of a testing technique, a number of validity issues arise. Further, there are many ways in which empirical studies can be performed, ranging from simulations to controlled experiments with human subjects. What are the strengths and drawbacks of the various approaches? What is the best option under which circumstances? This paper presents a critical analysis of empirical research in software testing and will attempt to highlight and clarify the issues above in a structured and practical manner.",2007,0, 3541,"Assessing, Comparing, and Combining Statechart-based testing and Structural testing: An Experiment","Although models have been proven to be helpful in a number of software engineering activities there is still significant resistance to model-driven development. This paper investigates one specific aspect of this larger problem. It addresses the impact of using statecharts for testing class clusters that exhibit a state-dependent behavior. More precisely, it reports on a controlled experiment that investigates their impact on testing fault-detection effectiveness. Code-based, structural testing is compared to statechart-based testing and their combination is investigated to determine whether they are complementary. Results show that there is no significant difference between the fault detection effectiveness of the two test strategies but that they are significantly more effective when combined. This implies that a cost-effective strategy would specify statechart-based test cases early on, execute them once the source code is available, and then complete them with test cases based on code coverage analysis.",2007,0, 3542,Test Inspected Unit or Inspect Unit Tested Code?,"Code inspection and unit testing are two popular fault-detecting techniques at unit level. Organizations where inspections are done generally supplement them with unit testing, as both are complementary.
A natural question is the order in which the two techniques should be exercised as this may impact the overall effectiveness and efficiency of the verification process. In this paper, we present a controlled experiment comparing the two execution orders, namely, code inspection followed by unit testing (CI-UT) and unit testing followed by code inspection (UT-CI), performed by a group of fresh software engineers in a company. The subjects inspected program units by traversing a set of usage scenarios and applied unit testing by writing JUnit tests for the same. Our results showed that unit testing can be more effective, as well as more efficient, if applied after code inspection, whereas the latter is unaffected by the execution order. Overall results suggest that the sequence CI-UT performs better than UT-CI in time-constrained situations.",2007,0, 3543,Defect Detection Efficiency: Test Case Based vs. Exploratory Testing,"This paper presents a controlled experiment comparing the defect detection efficiency of exploratory testing (ET) and test case based testing (TCT). While traditional testing literature emphasizes test cases, ET stresses the individual tester's skills during test execution and does not rely upon predesigned test cases. In the experiment, 79 advanced software engineering students performed manual functional testing on an open-source application with actual and seeded defects. Each student participated in two 90-minute controlled sessions, using ET in one and TCT in the other. We found no significant differences in defect detection efficiency between TCT and ET. The distributions of detected defects did not differ significantly regarding technical type, detection difficulty, or severity. However, TCT produced significantly more false defect reports than ET. Surprisingly, our results show no benefit of using predesigned test cases in terms of defect detection efficiency, emphasizing the need for further studies of manual testing.",2007,0, 3544,Comparing Model Generated with Expert Generated IV&V Activity Plans,"An IV&V activity plan describes what assurance activities to perform, where to do them, when, and to what extent. Meaningful justification for an IV&V budget and evidence that activities performed actually provide high assurance has been difficult to provide from plans created (generally ad hoc) by experts. JAXA now uses the ""strategic IV&V planning and cost model"" to address these issues and complement expert planning activities. This research presents a grounded empirical study that compares plans generated by the strategic model to those created by experts on several past IV&V projects. Through this research, we found that the model generated plan typically is a superset of the experts' plan. We found that experts tended to follow the most cost-effective route but had a bias in their particular activity selections. Ultimately we found increased confidence in both expert and model based planning and now have new tools for assessing and improving them.",2007,0, 3545,"Filtering, Robust Filtering, Polishing: Techniques for Addressing Quality in Software Data","Data quality is an important aspect of empirical analysis. This paper compares three noise handling methods to assess the benefit of identifying and either filtering or editing problematic instances. We compare a 'do nothing' strategy with (i) filtering, (ii) robust filtering and (iii) filtering followed by polishing.
A problem is that it is not possible to determine whether an instance contains noise unless it has implausible values. Since we cannot determine the true overall noise level we use implausible values as a proxy measure. In addition to the ability to identify implausible values, we use another proxy measure, the ability to fit a classification tree to the data. The interpretation is that low misclassification rates imply low noise levels. We found that all three of our data quality techniques improve upon the 'do nothing' strategy, and also that filtering followed by polishing was the most effective technique for dealing with noise since we eliminated the fewest data and had the lowest misclassification rates. Unfortunately the polishing process introduces new implausible values. We believe consideration of data quality is an important aspect of empirical software engineering. We have shown that for one large and complex real world data set automated techniques can help isolate noisy instances and potentially polish the values to produce better quality data for the analyst. However this work is at a preliminary stage and it assumes that the proxy measures of quality are appropriate.",2007,0, 3546,An Estimation Model for Test Execution Effort,"Testing is an important activity to ensure software quality. Big organizations can have several development teams with their products being tested by overloaded test teams. In such situations, test team managers must be able to properly plan their schedules and resources. Also, estimates for the required test execution effort can be an additional criterion for test selection, since effort may be restrictive in practice. Nevertheless, this information is usually not available for test cases never executed before. This paper proposes an estimation model for test execution effort based on the test specifications. For that, we define and validate a measure of size and execution complexity of test cases. This measure is obtained from test specifications written in a controlled natural language. We evaluated the model through an empirical study on the mobile application domain, whose results suggested an accuracy improvement when compared with estimations based only on historical test productivity.",2007,0, 3547,Usability Evaluation Based on Web Design Perspectives,"Given the growth in the number and size of Web Applications worldwide, Web quality assurance, and more specifically Web usability, have become key success factors. Therefore, this work proposes a usability evaluation technique based on the combination of Web design perspectives adapted from existing literature, and heuristics. This new technique is assessed using a controlled experiment aimed at measuring the efficiency and effectiveness of our technique, in comparison to Nielsen's heuristic evaluation. Results indicated that our technique was significantly more effective than and as efficient as Nielsen's heuristic evaluation.",2007,0, 3548,Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools,"The longer a fault remains in the code from the time it was injected, the more time it will take to fix the fault. Increasingly, automated fault detection (AFD) tools are providing developers with prompt feedback on recently-introduced faults to reduce fault fix time. If, however, the frequency and content of this feedback do not match the developer's goals and/or workflow, the developer may ignore the information.
We conducted a controlled study with 18 developers to explore what factors are used by developers to decide whether or not to address a fault when notified of the error. The findings of our study lead to several conjectures about the design of AFD tools to effectively notify developers of faults in the coding phase. The AFD tools should present fault information that is relevant to the primary programming task with accurate and precise descriptions. The fault severity and the specific timing of fault notification should be customizable. Finally, the AFD tool must be accurate and reliable to build trust with the developer.",2007,0, 3549,Evaluating the Impact of Adaptive Maintenance Process on Open Source Software Quality,"The paper focuses on measuring and assessing the relation between the adaptive maintenance process and the quality of open source software (OSS). A framework for assessing the adaptive maintenance process is proposed and applied. The framework consists of six sub-processes. Five OSSs with a considerable number of releases have been studied empirically. Their main evolutionary and quality characteristics have been measured. The main results of the study are the following: 1) Software maintainability is affected mostly by the activities of the 'analysis' maintenance sub-process. 2) Software testability is affected by the activities of all maintenance sub-processes. 3) Software reliability is affected mostly by the activities of the 'design' and 'delivery' maintenance sub-processes. 4) Software complexity is affected mostly by the activities of the 'problem identification', 'design', 'implementation' and 'test' sub-processes. 5) Software flexibility is affected mostly by the activities of the 'delivery' sub-process.",2007,0, 3550,The Effects of Over and Under Sampling on Fault-prone Module Detection,"The goal of this paper is to improve the prediction performance of fault-prone module prediction models (fault-proneness models) by employing over/under sampling methods, which are preprocessing procedures for a fit dataset. The sampling methods are expected to improve prediction performance when the fit dataset is unbalanced, i.e. there exists a large difference between the number of fault-prone modules and not-fault-prone modules. So far, there has been no research reporting the effects of applying sampling methods to fault-proneness models. In this paper, we experimentally evaluated the effects of four sampling methods (random over sampling, synthetic minority over sampling, random under sampling and one-sided selection) applied to four fault-proneness models (linear discriminant analysis, logistic regression analysis, neural network and classification tree) by using two module sets of industry legacy software. All four sampling methods improved the prediction performance of the linear and logistic models, while neural network and classification tree models did not benefit from the sampling methods. The improvements of F1-values in linear and logistic models were 0.078 at minimum, 0.224 at maximum and 0.121 at the mean.",2007,0, 3551,Generalizing fault contents from a few classes,"The challenges in fault prediction today are to get a prediction as early as possible, at as low a cost as possible, needing as little data as possible and preferably in such a language that your average developer can understand where it came from.
This paper presents a fault sampling method where a summary of a few, easily available metrics is used together with the results of a few sampled classes to generalize the fault content to an entire system. The method is tested on a large software system written in Java that currently consists of around 2000 classes and 300,000 lines of code. The evaluation shows that the fault generalization method is good at predicting fault-prone clusters and that it is possible to generalize the values of a few representative classes.",2007,0, 3552,Fine-Grained Software Metrics in Practice,"Modularity is one of the key features of the Object-Oriented (OO) paradigm. Low coupling and high cohesion help to achieve good modularity. Inheritance is one of the core concepts of the OO paradigm which facilitates modularity. Previous research has shown that the use of the friend construct as a coupling mechanism in C++ software is extensive. However, measures of the friend construct are scarce in comparison with measures of inheritance. In addition, these existing measures are coarse-grained, in spite of the widespread use of the friend mechanism. In this paper, a set of software metrics are proposed that measure the actual use of the friend construct, inheritance and other forms of coupling. These metrics are based on the interactions for which each coupling mechanism is necessary and sufficient. Previous work only considered the declaration of a relationship between classes. The software metrics introduced are empirically assessed using the LEDA software system. Our results indicate that the friend mechanism is used to a very limited extent to access hidden methods in classes. However, access to hidden attributes is more common.",2007,0, 3553,Evaluating Software Project Control Centers in Industrial Environments,"Many software development organizations still lack support for detecting and reacting to critical project states in order to achieve planned goals. One means to institutionalize project control, systematic quality assurance, and management support on the basis of measurement and explicit models is the establishment of so-called software project control centers. However, there is only little experience reported in the literature with respect to setting up and applying such control centers in industrial environments. One possible reason is the lack of appropriate evaluation instruments (such as validated questionnaires and appropriate analysis procedures). Therefore, we developed an initial measurement instrument to systematically collect experience with respect to the deployment and use of control centers. Our main research goal was to develop and evaluate the measurement instrument. The instrument is based on the technology acceptance model (TAM) and customized to project controlling. This article illustrates the application and evaluation of this measurement instrument in the context of industrial case studies and provides lessons learned for further improvement. In addition, related work and conclusions for future work are given.",2007,0, 3554,Fault-Prone Filtering: Detection of Fault-Prone Modules Using Spam Filtering Technique,"The detection of fault-prone modules in source code is of importance for assurance of software quality. Most previous conventional fault-prone detection approaches have been based on using software metrics. Such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on the metrics.
In order to mitigate such difficulties, we propose a novel approach for detecting fault-prone modules using a spam filtering technique. Because of the increasing need for spam e-mail detection, the spam filtering technique has progressed into a convenient and effective technique for text mining. In our approach, fault-prone modules are detected by treating source code modules as text files and applying them to the spam filter directly. In order to show the usefulness of our approach, we conducted an experiment using the source code repository of a Java-based open source development project. The result of the experiment shows that our approach can classify more than 70% of software modules correctly.",2007,0, 3555,Characterizing Software Architecture Changes: An Initial Study,"With today's ever increasing demands on software, developers must produce software that can be changed without the risk of degrading the software architecture. Degraded software architecture is problematic because it makes the system more prone to defects and increases the cost of making future changes. The effects of making changes to software can be difficult to measure. One way to address software changes is to characterize their causes and effects. This paper introduces an initial architecture change characterization scheme created to assist developers in measuring the impact of a change on the architecture of the system. It also presents an initial study conducted to gain insight into the validity of the scheme. The results of this study indicated a favorable view of the viability of the scheme by the subjects, and the scheme increased the ability of novice developers to assess and adequately estimate change effort.",2007,0, 3556,An Approach to Global Sensitivity Analysis: FAST on COCOMO,"There are various models in software engineering that are used to predict quality-related aspects of the process or artefacts. The use of these models involves elaborate data collection in order to estimate the input parameters. Hence, an interesting question is which of these input factors are most important. More specifically, which factors need to be estimated best and which might be removed from the model? This paper describes an approach based on global sensitivity analysis to answer these questions and shows its applicability in a case study on the COCOMO application at NASA.",2007,0, 3557,An Approach to Outlier Detection of Software Measurement Data using the K-means Clustering Method,"The quality of software measurement data affects the accuracy of project managers' decision making using estimation or prediction models and the understanding of real project status. During software measurement implementation, outliers which reduce the data quality are collected; however, their detection is not easy. To cope with this problem, we propose an approach to outlier detection of software measurement data using the k-means clustering method in this work.",2007,0, 3558,A cost effectiveness indicator for software development,"Product quality, development productivity, and staffing needs are main cost drivers in software development. The paper proposes a cost-effectiveness indicator that combines these drivers using an economic criterion.",2007,0, 3559,Estimating the Quality of Widely Used Software Products Using Software Reliability Growth Modeling: Case Study of an IBM Federated Database Project,"Software producers can better manage the quality of their deployed software products using estimates of quality.
Current best practices for making estimates are to use software reliability growth modeling (SRGM), which assumes that testing environments approximate deployment environments. This important assumption does not hold for widely used software products, which are operated in a wide variety of configurations under many different usage scenarios. However, the literature contains little empirical data on the impact of this violation of assumptions on the accuracy and the usefulness of predictions. In this paper, we report results and experiences using SRGM on an IBM federated database project. We examine defect data from 3 releases spanning approximately 9 years. We find SRGM to be of limited use to the project: absolute relative errors are at least 34%, and predictions are, at times, implausible. We discuss alternative approaches for estimating the quality of widely used software products.",2007,0, 3560,Investigating Test Teams' Defect Detection in Function test,"In a case study, the defect detection for functional test teams is investigated. In the study it is shown that the test teams not only discover defects in the features under test that they are responsible for, but also defects in interacting components belonging to other test teams' features. The paper presents the metrics collected and the results as such from the study, which gives insights into a complex development environment and highlights the need for coordination between test teams in function test.",2007,0, 3561,Comparison of Outlier Detection Methods in Fault-proneness Models,"In this paper, we experimentally evaluated the effect of outlier detection methods to improve the prediction performance of fault-proneness models. Detected outliers were removed from a fit dataset before building a model. In the experiment, we compared three outlier detection methods (Mahalanobis outlier analysis (MOA), local outlier factor method (LOFM) and rule based modeling (RBM)) each applied to three well-known fault-proneness models (linear discriminant analysis (LDA), logistic regression analysis (LRA) and classification tree (CT)). As a result, MOA and RBM improved F1-values of all models (0.04 at minimum, 0.17 at maximum and 0.10 at mean) while improvements by LOFM were relatively small (-0.01 at minimum, 0.04 at maximum and 0.01 at mean).",2007,0, 3562,Assessing the Quality Impact of Design Inspections,"Inspections are widely used and studies have found them to be effective in uncovering defects. However, there is less data available regarding the impact of inspections on different defect types and almost no data quantifying the link between inspections and desired end product qualities. This paper addresses this issue by investigating whether design inspection checklists can be tailored so as to effectively target certain defect types without impairing the overall defect detection rate. The results show that the design inspection approach used here does uncover useful design quality issues and that the checklists can be effectively tailored for some types of defects.",2007,0, 3563,Fault-Driven Re-Scheduling For Improving System-level Fault Resilience,"The productivity of HPC systems is determined not only by their performance, but also by their reliability. The conventional method to limit the impact of failures is checkpointing. However, existing research shows that such a reactive fault tolerance approach can only improve system productivity marginally.
Leveraging the recent progress made in the field of failure prediction, we propose fault-driven rescheduling (FARS) to improve system resilience to failures, and investigate the feasibility and effectiveness of utilizing failure prediction to dynamically adjust the placement of active jobs (e.g. running jobs) in response to failure prediction. In particular, a rescheduling algorithm is designed to enable effective job adjustment by evaluating the performance impact of potential failures and rescheduling on user jobs. The proposed FARS complements existing research on fault-aware scheduling by allowing user jobs to avoid imminent failures at runtime. We evaluate FARS by using actual workloads and failure events collected from production HPC systems. Our preliminary results show the potential of FARS for improving system resilience to failures.",2007,0, 3564,Testing conformance on Stochastic Stream X-Machines,"Stream X-machines have been used to specify real systems that require the representation of complex data structures. One of the advantages of using stream X-machines to specify a system is that it is possible to produce a test set that, under certain conditions, detects all the faults of an implementation. In this paper we present a formal framework to test temporal behaviors in systems where temporal aspects are critical. Temporal requirements are expressed by means of random variables and affect the duration of actions. Implementation relations are presented as well as a method to determine the conformance of an implementation with respect to a specification by applying a test set.",2007,0, 3565,Hardness for Explicit State Software Model Checking Benchmarks,"Directed model checking algorithms focus computation resources in the error-prone areas of concurrent systems. The algorithms depend on some empirical analysis to report their performance gains. Recent work characterizes the hardness of models used in the analysis as an estimated number of paths in the model that contain an error. This hardness metric is computed using a stateless random walk. We show that this is not a good hardness metric because models labeled hard with a stateless random walk metric have easily discoverable errors with a stateful randomized search. We present an analysis which shows that a hardness metric based on a stateful randomized search is a tighter bound for hardness in models used to benchmark explicit state directed model checking techniques. Furthermore, we convert easy models into hard models as measured by our new metric by pushing the errors deeper in the system and manipulating the number of threads that actually manifest an error.",2007,0, 3566,State-based Testing is Functional Testing,"Empirical studies report unsatisfactory fault detection of state-based methods in class testing and advocate the use of functional methods to complement state-based testing. In this paper, we take the view that the modest fault detection of state-based class testing reported in the literature is actually due to the inappropriate state diagram used. We show that functional testing of a class can be reduced to state-based testing, provided that the right state model is produced. We present a strategy for constructing a state diagram for a class method, based on a domain partition derived through functional techniques. We also describe a method for deriving test sequences from the resulting state diagrams, essentially a variant of the W-method.
The paper also reports results from an experimental evaluation of the proposed approach, based on mutants generated by MuJava.",2007,0, 3567,On the Accuracy of Spectrum-based Fault Localization,"Spectrum-based fault localization shortens the test-diagnose-repair cycle by reducing the debugging effort. As a light-weight automated diagnosis technique it can easily be integrated with existing testing schemes. However, as no model of the system is taken into account, its diagnostic accuracy is inherently limited. Using the Siemens Set benchmark, we investigate this diagnostic accuracy as a function of several parameters (such as quality and quantity of the program spectra collected during the execution of the system), some of which directly relate to test design. Our results indicate that the superior performance of a particular similarity coefficient, used to analyze the program spectra, is largely independent of test design. Furthermore, near-optimal diagnostic accuracy (exonerating about 80% of the blocks of code on average) is already obtained for low-quality error observations and limited numbers of test cases. The influence of the number of test cases is of primary importance for continuous (embedded) processing applications, where only limited observation horizons can be maintained.",2007,0, 3568,Software Fault Prediction using Language Processing,"Accurate prediction of faulty modules reduces the cost of software development and evolution. Two case studies with a language-processing based fault prediction measure are presented. The measure, referred to as the QALP score, makes use of techniques from information retrieval to judge software quality. The QALP score has been shown to correlate with human judgements of software quality. The two case studies consider the measure's application to fault prediction using two programs (one open source, one proprietary). Linear mixed-effects regression models are used to identify relationships between defects and QALP score. Results, while complex, show that little correlation exists in the first case study, while statistically significant correlations exist in the second. In this second study the QALP score is helpful in predicting faults in modules (files) with its usefulness growing as module size increases.",2007,0, 3569,Design of Multi-Function Monitoring Integrated Device in Power System,"With the rapid development of electronics and computer technology, a new real-time multi-function monitoring integrated device, based on an industrial control computer automatic monitoring system with multiple sensors, is designed in this paper. This system provides a scheme for power quality monitoring, electric parameter and data recording, and fault line diagnosis and selection, with attention to real-time operation and the application of complex algorithms. The hardware and software designs are introduced in detail. In this design we incorporate information blending, a new technology spanning several disciplines. Further, a new algorithm that uses the calculation of the fractal number to identify the faulty line is presented for power grid monitoring and fault line selection. Through debugging and practical operation, the design is proven to be accurate and practical.
Compared with former designs, this design has better real-time performance, smaller volume and lower cost.",2007,0, 3570,A Study on Performance Measurement of a Plastic Packaging Organization's Manufacturing System by AHP Modeling,"As an effect of globalization, products, services, capital, technology, and people began to circulate more freely in the world. As a consequence, in order to achieve and gain an advantage over competitors, manufacturing firms had to adapt themselves to changing conditions and evaluate their critical performance criteria. In this study, the aim is to determine general performance criteria and their characteristics and classifications from previous studies and to evaluate performance criteria for a plastic packaging organization by utilizing analytic hierarchy process (AHP) modeling. A specific manufacturing organization, operating in the Turkish plastic packaging sector, has been selected and the manufacturing performance criteria have been determined for that specific organization. Finally, the selected criteria have been assessed according to their relative importance by utilizing the AHP approach and the Expert Choice (EC) software program. As a result of this study, operating managers chose cost, quality, customer satisfaction and time factors as criteria for this organization. As the findings of the study indicate, the manufacturing organization operating in the plastic packaging sector overviews its operations and measures its manufacturing performance basically on those four criteria and their sub-criteria. Finally, the relative importance of those main measures and their sub-criteria is determined with consideration of the plastic packaging sector.",2007,0, 3571,Study on Software of VXIbus Boundary Scan Test Generation,"The goal of this paper is to develop a software suite for boundary-scan test (BST) generation using test generation algorithms and test data. In order to obtain the test data quickly and effectively, an innovative method of establishing a test project description (TPD) file is presented. During the testing of two different boundary-scan circuit boards, all faults could be detected, indicating that the expected design objective is achieved.",2007,0, 3572,Predict Malfunction-Prone Modules for Embedded System Using Software Metrics,"High software dependability is significant for many software systems, especially in embedded systems. Dependability is usually measured from the user's viewpoint in terms of time between failures, according to an operational profile. A software malfunction is defined as a defect in an executable software product that may cause a failure. Thus, malfunctions are attributed to the software modules that cause failures. Developers tend to focus on malfunctions, because they are closely related to the amount of rework necessary to prevent future failures. This paper defines a software module as malfunction-prone, based on class cohesion metrics, when there is a high risk that malfunctions will be discovered during operation. It also proposes a novel cohesion measurement method for derived classes in embedded systems.",2007,0, 3573,Numerical Simulation of the Temperature Distribution in SiC Sublimation Growth System,"Although serious attempts have been made to develop silicon carbide bulk crystal growth technology into an industrial process during the last years, the quality of the crystals remains deficient. One of the major problems is that the thermal field of SiC growth systems is not fully understood.
Numerical simulation is considered an important tool for the investigation of the thermal field distribution inside the growth crucible system used for SiC bulk growth. We employ the finite-element software package ANSYS to provide additional information on the thermal field distribution. A two-dimensional model has been developed to simulate the axisymmetric growth system consisting of a cylindrical susceptor (graphite crucible), graphite felt insulation, and a copper inductive coil. The modeled field couples electromagnetic heating and heat transfer. The induced magnetic field is used to predict heat generation due to magnetic induction. Conduction, convection and radiation in the various components of the system are accounted for as the heat transfer mechanisms. The thermal field in the SiC sublimation growth system is presented.",2007,0, 3574,Design and Implementation of Testing Network for Power Line Fault Detection Based on nRF905,"The design scheme of an automatic power line fault detection system based on a wireless sensor network is proposed. The hardware architecture and software design based on nRF905 are also described in detail. The experimental results show that the system has the advantages of simple installation, high stability and accuracy.",2007,0, 3575,Research and Application of ATML for Aircraft Electric Power Diagnosis and Prognostic System,"The technology of integrated health management (IHVM) is an important approach to improve the security, reliability and maintainability of airplanes and spacecraft, and to reduce their cost significantly. Taking the aircraft electric power system into consideration, it is one of the important pieces of equipment that provides electric energy for all electrical equipment in an airplane. Its working state is crucial to ensuring normal and secure flight, so research on condition detection, fault diagnosis and prognosis is a great concern in the field of IHVM systems. Aiming at the interoperability and complexity of data exchange for an aircraft electric power diagnosis and prognostic system, a method using ATML (automatic test markup language), which is based on XML (extensible markup language), is proposed in order to describe the device and condition information for fault diagnosis and prognosis in the system. First, based on the reference frame of an integrated health management system, a distributed multi-level fault diagnosis and prognosis system structure that exchanges data through ATML is designed. Secondly, since condition monitoring is the foundation of fault diagnosis and prognosis, a schema of test information for aircraft electric power parameters is designed using ATML after the data of crucial faults are analyzed. The result shows that the method can implement seamless interaction among test, diagnosis and prognosis. At the same time, the sharing and portability of test information and diagnostic knowledge are improved and the process of fault diagnosis is accelerated.",2007,0, 3576,Performance Analysis of CORBA Replication Models,"Active and passive replication models constitute an effective way to achieve the availability objectives of distributed real-time (DRE) systems. These two models have different impacts on the application performance. Although these models have been commonly used in practical systems, a systematic quantitative evaluation of their influence on the application performance has not been conducted. In this paper we describe a methodology to analyze the application performance in the presence of active and passive replication models.
For each one of these models, we obtain an analytical expression for the application response time in terms of the model parameters. Based on these analytical expressions, we derive the conditions under which one replication model has better performance than the other. Our results indicate that the superiority of one replication model over the other is governed not only by the model parameters but also by the application characteristics. We illustrate the value of the analytical expressions to assess the influence of the parameters of each model on the application response time and for a comparative analysis of the two models.",2007,0, 3577,A Web based System to Calculate Quality Metrics for Digital Passport Photographs,"Quality measurement of digital passport photographs is a complex problem. The first problem to solve is to establish the quality metrics, quality scales and the level of acceptance for a certain quality. The second problem to solve is to offer an integrated system which covers the principles of software engineering plus a user-friendly interface. This paper introduces the solution for both problems. It presents a method to calculate different and novel quality metrics based on a conventional estimation, as well as a quality metric based on nonconformance with quality requirements for digital passport photographs. Conformance clauses of the quality requirements were obtained from the machine-readable travel documents (ICAO/MRTD) specifications of the International Civil Aviation Organization (ICAO) and from the corresponding document of the International Organization for Standardization (ISO). A series of algorithms was implemented to measure the quality of passport photographs in two ways: as an image and as a biometric sample.",2007,0, 3578,A Modeling of Software Architecture Reliability,"This paper introduces software architecture reliability estimation and some typical architecture-based software reliability models. We modify the software reliability estimation model so as to improve the precision of estimating software architecture reliability. At the same time, this paper proposes a simplified method of calculating software architecture reliability based on the state transition matrix. The improved model is validated by an application system in the paper, and the results show that the precision is effectively increased.",2007,0, 3579,Silicon Debug for Timing Errors,"Due to various sources of noise and process variations, assuring a circuit to operate correctly at its desired operational frequency has become a major challenge. In this paper, we propose a timing-reasoning-based algorithm and an adaptive test-generation algorithm for diagnosing timing errors in the silicon-debug phase. We first derive three metrics that are strongly correlated to the probability of a candidate's being an actual error source. We analyze the problem of circuit timing uncertainties caused by delay variations and test sampling. Then, we propose a candidate-ranking heuristic, which is robust with respect to such sources of timing uncertainty. Based on the initial ranking result and the timing information, we further propose an adaptive path-selection and test-generation algorithm to generate additional diagnostic patterns for further improvement of the first-hit-rate.
The experimental results demonstrate that combining the ranking heuristic and the adaptive test-generation method would result in a very high resolution for timing diagnosis.",2007,0, 3580,Exploration of Quantitative Scoring Metrics to Compare Systems Biology Modeling Approaches,"In this paper, we report a focused case study to assess whether quantitative metrics are useful for evaluating molecular-level systems biology models of cellular metabolism. Ideally, the bio-modeling community should be able to assess systems biology models based on objective and quantitative metrics. This is because metric-based model design not only can accelerate the validation process, but also can improve the efficacy of model design. In addition, the metric will enable researchers to select models with any desired quality standards to study biological pathways. In this case study, we compare popular systems biology modeling approaches such as Michaelis-Menten kinetics, generalized mass action, and flux balance analysis to examine the difficulties in developing quantitative metrics for bio-model assessment. We created a set of guidelines for evaluating the efficacy of various bio-modeling approaches and system analysis in several ""bio-systems of interest"". We found that quantitative scoring metrics are essential aids for (i) model adopters and users to determine fundamental distinctions among bio-models, and (ii) model developers to improve key areas in bio-modeling. Eventually, we want to extend this evaluation practice to broader systems biology modeling.",2007,0, 3581,Determination of simple thresholds for accelerometry-based parameters for fall detection,"The increasing population of elderly people is mainly living in a home-dwelling environment and needs applications to support their independence and safety. Falls are one of the major health risks that affect the quality of life among older adults. Body-attached accelerometers have been used to detect falls. The placement of the accelerometric sensor as well as the fall detection algorithms are still under investigation. The aim of the present pilot study was to determine acceleration thresholds for fall detection, using triaxial accelerometric measurements at the waist, wrist, and head. Intentional falls (forward, backward, and lateral) and activities of daily living (ADL) were performed by two voluntary subjects. The results showed that measurements from the waist and head have potential to distinguish between falls and ADL. Especially, when the simple threshold-based detection was combined with posture detection after the fall, the sensitivity and specificity of fall detection were up to 100 %. In contrast, the wrist did not appear to be an optimal site for fall detection.",2007,0, 3582,Experience at Italian National Institute of Health in the quality control in telemedicine: tools for gathering data information and quality assessing,"The authors proposed a set of tools and procedures to perform a Telemedicine Quality Control process (TM-QC) to be submitted to the telemedicine (TM) manufacturers. The proposed tools were: the Informative Questionnaire (InQu), the Classification Form (ClFo), the Technical File (TF), the Quality Assessment Checklist (QACL). The InQu served to acquire the information about the examined TM product/service; the ClFo allowed a TM product/service to be classified as belonging to one application area of TM.
The TF was intended as a technical dossier of the product and required the TM supplier to furnish only the requested documentation of its product, so as to avoid redundant information. The QACL was a checklist of requirements, covering all the essential aspects of telemedical applications, that each TM product/service must meet. The final assessment of the TM product/service was carried out via the QACL, by computing the number of requirements met: on the basis of this computation, a Quality Level (QL) was assigned to the telemedical application. Seven levels were considered, ranging from the Basic Quality Level (QL1-B) to the Excellent Quality Level (QL7-E). The TM-QC process proved to be a powerful tool for performing quality control of telemedical applications and should serve as guidance for all TM practitioners, from the manufacturers to the expert evaluators. The proposed quality control procedures could thus be adopted in the future as routine procedures and could be useful in assessing the delivery of TM within the National Health Service versus traditional face-to-face healthcare services.",2007,0, 3583,High-available grid services through the use of virtualized clustering,"Grid applications comprise several components and web-services that make them highly prone to the occurrence of transient software failures and aging problems. This type of failure often results in undesired performance levels and unexpected partial crashes. In this paper we present a technique that offers high availability for Grid services based on concepts like virtualization, clustering and software rejuvenation. To show the effectiveness of our approach, we have conducted some experiments with OGSA-DAI middleware. One of the implementations of OGSA-DAI makes use of Apache Axis V1.2.1, a SOAP implementation that suffers from severe memory leaks. Without changing any bit of the middleware layer, we have been able to anticipate most of the problems caused by those leaks and to increase the overall availability of the OGSA-DAI Application Server. Although these results are tightly related to this middleware, it should be noted that our technique is neutral and can be applied to any other Grid service that is supposed to be highly available.",2007,0, 3584,On-Line Periodic Self-Testing of High-Speed Floating-Point Units in Microprocessors,"On-line periodic testing of microprocessors is a viable low-cost alternative for a wide variety of embedded systems which cannot afford hardware or software redundancy techniques but necessitate the detection of intermittent or permanent faults. Low-cost, on-line periodic testing has been previously applied to the integer datapaths of microprocessors but not to their high-performance real number processing counterparts consisting of sophisticated high-speed floating-point (FP) units. In this paper, we present an effective on-line periodic self-testing methodology for high-speed FP units and demonstrate it on high-speed FP adders/subtracters of both single and double precision. The proposed self-test code development methodology leads to compact self-test routines that exploit the integer part of the processor's instruction set architecture to apply test sets to the FP subsystem periodically. The periodic self-test routines exhibit very low memory storage requirements along with a very small number of memory references, which are both fundamental requirements for on-line periodic testing.
A comprehensive set of experiments on both single and double precision FP units including pipelined versions, and on a RISC processor with a complete FP unit demonstrate the efficacy of the methodology in terms of very high fault coverage and low memory footprint thus rendering the proposed methodology highly appropriate for on-line periodic testing.",2007,0, 3585,On The Detection of Test Smells: A Metrics-Based Approach for General Fixture and Eager Test,"As a fine-grained defect detection technique, unit testing introduces a strong dependency on the structure of the code. Accordingly, test coevolution forms an additional burden on the software developer which can be tempered by writing tests in a manner that makes them easier to change. Fortunately, we are able to concretely express what a good test is by exploiting the specific principles underlying unit testing. Analogous to the concept of code smells, violations of these principles are termed test smells. In this paper, we clarify the structural deficiencies encapsulated in test smells by formalizing core test concepts and their characteristics. To support the detection of two such test smells, General Fixture and Eager Test, we propose a set of metrics defined in terms of unit test concepts. We compare their detection effectiveness using manual inspection and through a comparison with human reviewing. Although the latter is the traditional means for test quality assurance, our results indicate it is not a reliable means for test smell detection. This work thus stresses the need for a more reliable detection mechanism and provides an initial contribution through the validation of test smell metrics.",2007,0, 3586,An Observation-Based Approach to Performance Characterization of Distributed n-Tier Applications,"The characterization of distributed n-tier application performance is an important and challenging problem due to their complex structure and the significant variations in their workload. Theoretical models have difficulties with such wide range of environmental and workload settings. Experimental approaches using manual scripts are error-prone, time consuming, and expensive. We use code generation techniques and tools to create and run the scripts for large-scale experimental observation of n-tier benchmarking application performance measurements over a wide range of parameter settings and software/hardware combinations. Our experiments show the feasibility of experimental observations as a sound basis for performance characterization, by studying in detail the performance achieved by (up to 3) database servers and (up to 12) application servers in the RUBiS benchmark with a workload of up to 2700 concurrent users.",2007,0, 3587,Historical Risk Mitigation in Commercial Aircraft Avionics as an Indicator for Intelligent Vehicle Systems,"How safety is perceived in conjunction with consumer products has much to do with its presentation to the buying public and the company reputation for performance and safety. As the automobile industry implements integrated vehicle safety and driver aid systems, the question of public perception of the true safety benefits would seem to parallel the highly automated systems of commercial aircraft, a market in which perceived benefits of flying certainly outweigh concerns of safety. It is suggested that the history of critical aircraft systems provides a model for the wide-based implementation of automated systems in automobiles. 
The requirement for safety in aircraft systems as an engineering design parameter takes on several forms such as wear-out, probability of catastrophic failure and mean time between replacement or repair (MTBR). For automobile systems as in aircraft, it is a multidimensional topic encompassing a variety of hardware and software functions, fail-safe or fail-operational capability and operator and control interaction. As with critical flight systems, the adherence to specific federal safety requirements is also a cost item to which all manufacturers must adhere, but that also provides a common baseline to which all companies must design. Long a requirement for the design of systems for military and commercial aircraft control, specific safety standards have produced methodologies for analysis and system mechanization that would suggest the operational safety design methods needed for automobiles. Ultimately, tradeoffs must be completed to attain an acceptable level of safety when compared to the cost for developing and selling the system. As seen with commercial aircraft, acceptance of product safety by the public is not based on understanding strict technical requirements but is primarily the result of witnessing many hours of fault-free operation, and seeking opinions of those they feel are knowledgeable. This brief study will use data from preliminary concept studies for the Automated Highway System and developments by human factors analysts and sociologists concerning perceptions of risk to present an evaluation of the technological methods historically used to mitigate risk in critical aircraft systems and how they might apply to automation in automobiles.",2007,0, 3588,Estimating Uncertainty of a Measurement Process,"Estimating a measurement of software quality is a challenge where uncertainty and variation have the greatest impact, especially at times when there is not enough information. Here, EUMP (estimating uncertainty of a measurement process) is introduced. EUMP is a recursive process which uses both multiple regression and Monte Carlo simulation. The distribution functions of both dependent and independent variables can be systematically obtained through EUMP. Moreover, the dependent variable can be estimated. Finally, a predictable system will be described, where the error of an estimated dependent variable will be proven to go to zero as time (t) goes to infinity.",2007,0, 3589,Image Quality Assessment: an Overview and some Metrological Considerations,"This paper presents the state of the art in the field of Image Quality Assessment (IQA), providing a classification of some of the most important objective and subjective IQA methods. Furthermore, some aspects of the field are analysed from a metrological point of view, also through comparison with the software quality measurement area. In particular, a statistical approach to the evaluation of the uncertainty for IQA objective methods is presented and an example is provided. The topic of measurement modelling for subjective IQA methods is also analysed. Finally, a case study of images corrupted by impulse noise is discussed.",2007,0, 3590,Mining the Lexicon Used by Programmers during Software Evolution,"Identifiers represent an important source of information for programmers understanding and maintaining a system. Self-documenting identifiers reduce the time and effort necessary to obtain the level of understanding appropriate for the task at hand.
While the role of the lexicon in program comprehension has long been recognized, only a few works have studied the quality and enhancement of identifiers, and no works have studied the evolution of the lexicon. In this paper, we characterize the evolution of program identifiers in terms of stability metrics and occurrences of renaming. We assess whether an evolution process similar to the one occurring for the program structure exists for identifiers. We report data and results about the evolution of three large systems, for which several releases are available. We have found evidence that the evolution of the lexicon is more limited and constrained than the evolution of the structure. We argue that the different evolution results from several factors including the lack of advanced tool support for lexicon construction, documentation, and evolution.",2007,0, 3591,Re-computing Coverage Information to Assist Regression Testing,"This paper presents a technique that leverages an existing regression test-selection algorithm to compute accurate, updated coverage data on a version of the software, Pi+1, without rerunning any test cases that do not execute the changes from the previous version of the software, Pi, to Pi+1. Users of our technique can avoid the expense of rerunning the entire test suite on Pi+1 or the inaccuracy produced by previous approaches that estimate coverage data for Pi+1 or reuse outdated coverage data from Pi. This paper also presents a tool, RECOVER, that implements our technique, along with a set of empirical studies. The studies show the inaccuracies that can exist when an application (regression-test selection) uses estimated and outdated coverage data. The studies also show that the overhead incurred by our technique is negligible.",2007,0, 3592,Combinatorial Interaction Regression Testing: A Study of Test Case Generation and Prioritization,"Regression testing is an expensive part of the software maintenance process. Effective regression testing techniques select and order (or prioritize) test cases between successive releases of a program. However, selection and prioritization are dependent on the quality of the initial test suite. An effective and cost efficient test generation technique is combinatorial interaction testing, CIT, which systematically samples all t-way combinations of input parameters. Research on CIT, to date, has focused on single version software systems. There has been little work that empirically assesses the use of CIT test generation as the basis for selection or prioritization. In this paper we examine the effectiveness of CIT across multiple versions of two software subjects. Our results show that CIT performs well in finding seeded faults when compared with an exhaustive test set. We examine several CIT prioritization techniques and compare them with a re-generation/prioritization technique. We find that prioritized and re-generated/prioritized CIT test suites may find faults earlier than unordered CIT test suites, although the re-generated/prioritized test suites sometimes exhibit decreased fault detection.",2007,0, 3593,Fault Detection Probability Analysis for Coverage-Based Test Suite Reduction,"Test suite reduction seeks to reduce the number of test cases in a test suite while retaining a high percentage of the original suite's fault detection effectiveness. Most approaches to this problem are based on eliminating test cases that are redundant relative to some coverage criterion.
The effectiveness of applying various coverage criteria in test suite reduction is traditionally based on empirical comparison of two metrics derived from the full and reduced test suites and information about a set of known faults: (1) percentage size reduction and (2) percentage fault detection reduction, neither of which quantitatively takes test coverage data into account. Consequently, no existing measure expresses the likelihood of various coverage criteria to force coverage-based reduction to retain test cases that expose specific faults. In this paper, we develop and empirically evaluate, using a number of different coverage criteria, a new metric based on the ""average expected probability of finding a fault"" in a reduced test suite. Our results indicate that the average probability of detecting each fault shows promise for identifying coverage criteria that work well for test suite reduction.",2007,0, 3594,Maintaining Multi-Tier Web Applications,"Large-scale multi-tier web applications are inherently dynamic, complex, heterogeneous and constantly evolving. Maintaining such applications is important yet inevitably expensive. First, the size of the test suite of an evolving system will be continuously growing. Second, to ensure that the changes will not affect the quality of the systems, regression testing is frequently performed. To effectively and efficiently maintain web applications after each change, obsolete test cases must be removed and regression testing should selectively re-test. To this end there is a need for an inter-tier change impact analysis, which requires a coherent model rendering inter-tier dependence information. We present a technique that makes use of an integrated inter-connection dependence model to analyze cross-tier change impacts. These are then used to select affected test cases for regression testing and to identify repairable test cases for reuse, or to discard obsolete non-repairable test cases. Our empirical study shows that with this technique, the maintenance cost of the target system can be significantly reduced.",2007,0, 3595,The Economics of Open Source Software: An Empirical Analysis of Maintenance Costs,"A quality degradation effect of proprietary code has been observed as a consequence of maintenance. This quality degradation effect, called entropy, is a cause for higher maintenance costs. In the Open Source context, the quality of code is a fundamental tenet of open software developers. As a consequence, the quality degradation principle measured by entropy cannot be assumed to be valid. The goal of the paper is to analyze the entropy of Open Source applications by measuring the evolution of maintenance costs over time. Analyses are based on cost data collected from a sample of 1251 Open Source application versions, compared with the costs estimated with a traditional model for proprietary software. Findings indicate that Open Source applications are less subject to entropy, have lower maintenance costs and also a lower need for maintenance interventions aimed at restoring quality. Finally, results show that a lower entropy is favored by greater functional simplicity.",2007,0, 3596,Improving Predictive Models of Software Quality Using an Evolutionary Computational Approach,"Predictive models can be used to identify components as potentially problematic for future maintenance. 
Source code metrics can be used as input features to classifiers; however, there exists a large number of structural measures that capture different aspects of coupling, cohesion, inheritance, complexity and size. Feature selection is the process of identifying a subset of attributes that improves a classifier's performance. The focus of this study is to explore the efficacy of a genetic algorithm as a method of improving a classifier's ability to identify problematic components.",2007,0, 3597,SUDS: An Infrastructure for Creating Bug Detection Tools,SUDS is a powerful infrastructure for creating dynamic bug detection tools. It contains phases for both static analysis and dynamic instrumentation allowing users to create tools that take advantage of both paradigms. The results of static analysis phases can be used to improve the quality of dynamic bug detection tools created with SUDS and could be expanded to find defects statically. The instrumentation engine is designed in a manner that allows users to create their own correctness models quickly but is flexible enough to support construction of a wide range of different tools. The effectiveness of SUDS is demonstrated by showing that it is capable of finding bugs and that performance is improved when static analysis is used to eliminate unnecessary instrumentation.,2007,0, 3598,Stateful Detection in High Throughput Distributed Systems,"With the increasing speed of computers and the complexity of applications, many of today's distributed systems exchange data at a high rate. Significant work has been done in error detection achieved through external fault tolerance systems. However, the high data rate coupled with complex detection can cause the capacity of the fault tolerance system to be exhausted resulting in low detection accuracy. We present a new stateful detection mechanism which observes the exchanged application messages, deduces the application state, and matches against anomaly-based rules. We extend our previous framework (the monitor) to incorporate a sampling approach which adjusts the rate of verified messages. The sampling approach avoids the previously reported breakdown in the monitor capacity at high application message rates, reduces the overall detection cost and allows the monitor to provide accurate detection. We apply the approach to a reliable multicast protocol (TRAM) and demonstrate its performance by comparing it with our previous framework.",2007,0, 3599,Impact of TSPi on Software Projects,"This paper describes the effect of TSPi (introduction to the team software process) on key performance dimensions in software projects, including the ability to estimate, the quality of the software produced and the productivity achieved. The study examines the impact of the TSPi on the performance of 31 software teams. Finally, an analysis comparing the results through two iterations is discussed.",2007,0, 3600,An Unsupervised Intrusion Detection Method Combined Clustering with Chaos Simulated Annealing,"Keeping networks secure has never been such an imperative task as it is today. Threats come from hardware failures, software flaws, tentative probing and malicious attacks. In this paper, a new detection method, Intrusion Detection based on Unsupervised Clustering and Chaos Simulated Annealing algorithm (IDCCSA), is proposed. As a novel optimization technique, chaos has gained much attention and some applications during the past decade.
For a given energy or cost function, by following chaotic ergodic orbits, a chaotic dynamic system may eventually reach the global optimum or its good approximation with high probability. To enhance the performance of simulated annealing, whose aim here is to find a near-optimal partitioning clustering, a simulated annealing algorithm incorporating chaos is proposed. Experiments with KDD Cup 1999 show that simulated annealing combined with chaos can effectively enhance the searching efficiency and greatly improve the detection quality.",2007,0, 3601,A Fault Detection Mechanism for Fault-Tolerant SOA-Based Applications,"Fault tolerance is an important capability for SOA-based applications, since it ensures the dynamic composition of services and improves the dependability of SOA-based applications. Fault detection is the first step of fault tolerance, so this paper focuses on fault detection, and puts forward a fault detection mechanism, which is based on the theories of artificial neural networks and probability change point analysis rather than static service description, to detect the services that fail to satisfy performance requirements at runtime. This paper also gives a reference model of the fault-tolerance control center of the enterprise service bus.",2007,0, 3602,Comparison of Artificial Neural Network and Regression Models in Software Effort Estimation,"Good practices in software project management are basic requirements for companies to stay in the market, because effective project management leads to improvements in product quality and cost reduction. Fundamental measurements are the prediction of size, effort, resources, cost and time spent in the software development process. In this paper, predictive Artificial Neural Network (ANN) and Regression based models are investigated, aiming at establishing simple alternative estimation methods. The results presented in this paper compare the performance of both methods and show that artificial neural networks are effective in effort estimation.",2007,0, 3603,A Constructive RBF Neural Network for Estimating the Probability of Defects in Software Modules,"Much of the current research in software defect prediction focuses on building classifiers to predict only whether a software module is fault-prone or not. Using these techniques, the effort to test the software is directed at modules that are labelled as fault-prone by the classifier. This paper introduces a novel algorithm based on constructive RBF neural networks aimed at predicting the probability of errors in fault-prone modules; it is called RBF-DDA with Probabilistic Outputs and is an extension of RBF-DDA neural networks. The advantage of our method is that we can inform the test team of the probability of a defect in a module, instead of indicating only if the module is fault-prone or not. Experiments carried out with static code measures from well-known software defect datasets from NASA show the effectiveness of the proposed method. We also compared the performance of the proposed method in software defect prediction with kNN and two of its variants, the S-POC-NN and R-POC-NN. The experimental results showed that the proposed method outperforms both S-POC-NN and R-POC-NN and that it is equivalent to kNN in terms of performance with the advantage of producing less complex classifiers.",2007,0, 3604,An Evaluation of Unstructured Text Mining Software,"Five text mining software tools were evaluated by four undergraduate students inexperienced in the text mining field.
The software was run on the Microsoft Windows XP operating system, and employed a variety of techniques to mine unstructured text for information. The considerations used to evaluate the software included cost, ease of learning, functionality, ease of use and effectiveness. Hands-on mining of text files also led us to more informative conclusions about the software. Through our evaluation we found that two software products (SAS and SPSS) had qualities that made them more desirable than the others.",2007,0, 3605,Vector signal analyzer implemented as a synthetic instrument,"Synthetic Instruments use the substantial signal processing assets of a field programmable gate array (FPGA) to perform the multiple tasks of targeted digital signal processing (DSP) based instruments. The signal conditioning common to many instruments includes analog spectral translation, filtering, and gain control to align the bandwidth and dynamic range of the input signal to the bandwidth and dynamic range capabilities of the A-to-D converter (ADC), which moves the signal from the analog domain to the sampled data domain. Once in the sampled data domain, the signal processing performed by the FPGA includes digital spectral translation, filtering, and gain control to perform its chartered DSP tasks. A common DSP task is spectral analysis, from which frequency-dependent (i.e., spectral) amplitude and phase is extracted from an input time signal. Another high-interest DSP task is vector signal analysis, from which time-dependent (i.e., temporal) amplitude and phase is extracted from the input time signal. With access to the time-varying amplitude-phase profiles of the input signal, the vector signal analyzer can present many of the quality measures of a modulation process. These include estimates of undesired attributes such as modulator distortion, phase noise, clock-jitter, I-Q imbalance, inter-symbol interference, and others. Here, the boundary between synthetic instruments (SI) and software defined radios (SDR) becomes very thin indeed. Essentially this is where the SI is asked to become a smart SDR, performing all the tasks of a DSP radio receiver and reporting small variations between the observed modulated signal parameters and those of an ideal modulated signal. Various quality measures (e.g., the size of errors) have value in qualifying and probing performance boundaries of communication systems.",2007,0,4258 3606,Building a Database to Support Intelligent Computational Quality Assurance of Resistance Spot Welding Joints,"A database system for storing information on resistance spot welding processes is outlined. Data stored in the database can be used for computationally estimating the quality of spot welding joints and for adaptively setting up new welding processes in order to ensure consistent high quality. This is achieved by storing current and voltage signals in the database, extracting features out of those signals and using the features as training input for classifier algorithms. Together the database and the associated data mining modules form an adaptive system that improves its performance over time. An entity-relationship model of the application domain is presented and then converted into a concrete database design. Software interfaces for accessing the database are described and the utility of the database and the access interfaces as components of a welding quality assurance system is evaluated.
A relational database with tables for storing test sets, welds, signals, features and metadata is found suitable for the purpose. The constructed database has served well as a repository for research data and is ready to be transferred to production use at a manufacturing site.",2007,0, 3607,Are Two Heads Better than One? On the Effectiveness of Pair Programming,"Pair programming is a collaborative approach that makes working in pairs rather than individually the primary work style for code development. Because PP is a radically different approach than many developers are used to, it can be hard to predict the effects when a team switches to PP. Because projects focus on different things, this article concentrates on understanding general aspects related to effectiveness, specifically project duration, effort, and quality. Not unexpectedly, our meta-analysis showed that the question of whether two heads are better than one isn't precise enough to be meaningful. Given the evidence, the best answer is ""it depends"" - on both the programmer's expertise and the complexity of the system and tasks to be solved. Two heads are better than one for achieving correctness on highly complex programming tasks. They might also have a time gain on simpler tasks. Additional studies would be useful. For example, further investigation is clearly needed into the interaction of complexity and programmer experience and how they affect the appropriateness of a PP approach; our current understanding of this phenomenon rests chiefly on a single (although large) study. Only by understanding what makes pairs work and what makes them less efficient can we take steps to provide beneficial work conditions, to avoid detrimental conditions, and to avoid pairing altogether when conditions are detrimental. With the right cooks and the right combination of ingredients, the broth has the potential to be very good indeed.",2007,0, 3608,Estimation of ADSL and ADSL2+ Service Quality,"In this paper, estimation of ADSL and ADSL2+ service performance is presented. Since the transmission medium quality is one of the key requirements for the performance capabilities of these services, an analytical method for modelling the primary per-unit-length parameters of a twisted pair is described. This model is implemented in the existing software, extended to allow for estimation of data rates on telephone subscriber loops for xDSL service. For the example of a particular twisted-pair telephone cable, an experimental measurement of its parameters is performed, followed by their modelling using the described software. At the end, a comparison of experimental and modelled capacity results for the specific cable is presented.",2007,0, 3609,Mechatronic Software Testing,"The paper describes mechatronic software testing techniques. Such testing is different from common testing and must take into account the special features of mechatronic systems. It may be put into effect with the aim of improving quality, assessing reliability, and checking and confirming correctness. Various adapted techniques may be employed for the purpose, such as the white-box or black-box technique for correctness testing, endurance and stress testing for reliability testing, or the use of ready-made programs for performance testing.",2007,0, 3610,Automatic Document Logo Detection,"Automatic logo detection and recognition continues to be of great interest to the document retrieval community as it enables effective identification of the source of a document.
In this paper, we propose a new approach to logo detection and extraction in document images that robustly classifies and precisely localizes logos using a boosting strategy across multiple image scales. At a coarse scale, a trained Fisher classifier performs initial classification using features from document context and connected components. Each logo candidate region is further classified at successively finer scales by a cascade of simple classifiers, which allows false alarms to be discarded and the detected region to be refined. Our approach is segmentation-free and layout-independent. We define a meaningful evaluation metric to measure the quality of logo detection using labeled groundtruth. We demonstrate the effectiveness of our approach using a large collection of real-world documents.",2007,0, 3611,OCR Accuracy Improvement through a PDE-Based Approach,"This paper focuses on improving the optical character recognition (OCR) system's accuracy by restoring damaged characters through a PDE (Partial Differential Equation)-based approach. This approach, proposed by D. Tschumperle, is an anisotropic diffusion approach driven by local tensor fields. Such an approach has many useful properties that are relevant for use in character restoration. For instance, this approach is very appropriate for the processing of oriented patterns, which are major characteristics of textual documents. It incorporates both edge-enhancing diffusion that tends to preserve local structures during smoothing and coherence-enhancing diffusion that processes oriented structures by smoothing along the flow direction. Furthermore, this tensor diffusion-based approach, compared to the existing state of the art, requires neither segmentation nor training steps. Some experiments, done on degraded document images, illustrate the performance of this PDE-based approach in improving both the visual quality and the OCR accuracy rates for degraded document images.",2007,0, 3612,A Best Practice Guide to Resource Forecasting for Computing Systems,"Recently, measurement-based studies of software systems have proliferated, reflecting an increasingly empirical focus on system availability, reliability, aging, and fault tolerance. However, it is a nontrivial, error-prone, arduous, and time-consuming task even for experienced system administrators and statistical analysts to know what a reasonable set of steps should include to model and successfully predict performance variables or system failures of a complex software system. Reported results are fragmented and focus on applying statistical regression techniques to monitored numerical system data. In this paper, we propose a best practice guide for building empirical models based on our experience with forecasting Apache web server performance variables and forecasting call availability of a real-world telecommunication system. To substantiate the presented guide and to demonstrate our approach in a step-by-step manner, we model and predict the response time and the amount of free physical memory of an Apache web server system, as well as the call availability of an industrial telecommunication system. Additionally, we present concrete results for a) variable selection, where we cross benchmark three procedures, b) empirical model building, where we cross benchmark four techniques, and c) sensitivity analysis.
This best practice guide intends to assist in configuring modeling approaches systematically for best estimation and prediction results.",2007,0, 3613,Harbor: Software-based Memory Protection For Sensor Nodes,"Many sensor nodes contain resource-constrained microcontrollers where user-level applications, operating system components, and device drivers share a single address space with no form of hardware memory protection. Programming errors in one application can easily corrupt the state of the operating system or other applications. In this paper, we propose Harbor, a memory protection system that prevents many forms of memory corruption. We use software-based fault isolation (""sandboxing"") to restrict application memory accesses and control flow to protection domains within the address space. A flexible and efficient memory map data structure records ownership and layout information for memory regions; writes are validated using the memory map. Control flow integrity is preserved by maintaining a safe stack that stores return addresses in a protected memory region. Run-time checks validate computed control flow instructions. Cross-domain calls perform low-overhead control transfers between domains. Checks are introduced by rewriting an application's compiled binary. The sandboxed result is verified on the sensor node before it is admitted for execution. Harbor's fault isolation properties depend only on the correctness of this verifier and the Harbor runtime. We have implemented and tested Harbor on the SOS operating system. Harbor detected and prevented memory corruption caused by programming errors in application modules that had been in use for several months. Harbor's overhead, though high, is less than that of application-specific virtual machines, and reasonable for typical sensor workloads.",2007,0, 3614,Edge Weighted Spatio-Temporal Search for Error Concealment,"In temporal error concealment (EC), the sum of absolute difference (SAD) is commonly used to identify the best replacement macroblock. Even though the use of SAD ensures spatial continuity and produces visually good results, it is insufficient to ensure edge alignment. Other distortion criteria based solely on structural alignment may also perform poorly in the absence of strong edges. In this paper, we propose a spatio-temporal EC search algorithm using an edge-weighted SAD distortion criterion. This distortion criterion ensures both edge alignment and spatial continuity. We assume the loss of motion information and use the zero motion vector as the starting search point. We show that the proposed algorithm outperforms the use of unweighted SAD in general.
The motor protection achieved in the study can be faster than the classical techniques and applied to larger motors easily after making small modifications on both software and hardware.",2007,0, 3616,A WSAD-Based Fact Extractor for J2EE Web Projects,"This paper describes our implementation of a fact extractor for J2EE Web applications. Fact extractors are part of each reverse engineering toolset; their output is used by reverse engineering analyzers and visualizers. Our fact extractor has been implemented on top of IBM's Websphere Application Developer (WSAD). The extractor's schema has been defined with the Eclipse Modeling Framework (EMF) using a graphical modeling approach. The extractor extensively reuses functionality provided by WSAD, EMF, and Eclipse, and is an example of component-based development. In this paper, we show how we used this development approach to accomplish the construction of our fact extractor, which, as a result, could be realized with significantly less code and in shorter time compared to a homegrown extractor implemented from scratch. We have assessed our extractor and the produced facts with a table- based and a graph-based visualizer. Both visualizers are integrated with Eclipse.",2007,0, 3617,Improving Usability of Web Pages for Blinds,"Warranting the access to Web contents to any citizen, even to people with physical disabilities, is a major concern of many government organizations. Although guidelines for Web developers have been proposed by international organisations (such as the W3C) to make Web site contents accessible, the wider part of today's Web sites are not completely usable by peoples with sight disabilities. In this paper, two different approaches for dynamically transforming Web pages into aural Web pages, i.e. pages that are optimised for blind peoples, will be presented. The approaches exploit heuristic techniques for summarising Web pages contents and providing them to blind users in order to improve the usability of Web sites. The techniques have been validated in an experiment where usability metrics have been used to assess the effectiveness of the Web page transformation techniques.",2007,0, 3618,Digital Thermal Microscope for Biomedical Application,"In order to analyze the biomedical, we proposed a novel digital thermal microscope based on the uncooled focal plane detector, aiming to achieve the long-wave infrared microscope image, especially for biomedical analysis. Both the mathematical mode of noise equivalent temperature difference (NETD) and the noise equivalent eradiation difference (NEED) were established for micro thermal imaging system. Based on the mathematical model, some measures were taken to increase the system temperature resolution. Furthermore the uncooled focal plane arrays has inherent non-uniformities, so we proposed an adaptive algorithm that can complete NUC by only one frame. Results of our thermal microscope have proved that NUC can weaken striping noise greatly and plateau histogram equalization can further enhance the image quality. The software for the thermal microscope is provided based on Visual C++ and the methods mentioned above. Results of real thermal image experiments have shown that the digital thermal microscope is designed successfully and achieves good performance. With the thermal microscope, minute sized thermal analysis can be achieved. 
Thus it will become an effective means for diagnosis and the detection of cancer, and it can also accelerate the development of methods for biomedical engineering. The system is very meaningful for academic analysis and is promising for practical applications.",2007,0, 3619,A Self-Consistent Substrate Thermal Profile Estimation Technique for Nanoscale ICs—Part II: Implementation and Implications for Power Estimation and Thermal Management,"As transistors continue to evolve along Moore's Law and silicon devices take advantage of this evolution to offer increasing performance, there is a critical need to accurately estimate the silicon-substrate (junction or die) thermal gradients and temperature profile for the development and thermal management of future generations of all high-performance integrated circuits (ICs) including microprocessors. This paper presents an accurate chip-level leakage-aware method that self-consistently incorporates various electrothermal couplings between chip power, junction temperature, operating frequency, and supply voltage for substrate thermal profile estimation and also employs a realistic package thermal model that comprehends different packaging layers and the noncubic structure of the package, which are not accounted for in traditional analyses. The evaluation using the proposed methodology is efficient and shows excellent agreement with industrial-quality computational-fluid-dynamics (CFD) based commercial software. Furthermore, the methodology is shown to become increasingly effective with increasing leakage as technology scales. It is shown that considering electrothermal couplings and a realistic package thermal model not only improves the accuracy of estimating the heat distribution across the chip but also has significant implications for precise power estimation and thermal management in nanometer-scale CMOS technologies.",2007,0, 3620,Size and Frequency of Class Change from a Refactoring Perspective,"A previous study by Bieman et al. investigated whether large, object-oriented classes were more susceptible to change than smaller classes. The measure of change used in the study was the frequency with which the features of a class had been changed over a specific period of time. From a refactoring perspective, the frequency of class change is of value. But even for a relatively simple refactoring such as 'rename method', multiple classes may undergo minor modification without any net increase in class (and system) size. In this paper, we suggest that the combination of 'versions of a class and number of added lines of code' in the bad code 'smell' detection process may give a better impression of which classes are the most suitable candidates for refactoring; as such, effort in detecting bad code smells should apply to classes with a high growth rate as well as a high change frequency. To support our investigation, data relating to changes from 161 Java classes was collected. Results concluded that it is not necessarily the case that large classes are more change-prone than relatively smaller classes. Moreover, the bad code smell detection process is informed by using the combination of change frequency and class size as a heuristic.",2007,0, 3621,A Requirement Level Modification Analysis Support Framework,"Modification analysis is an essential phase of most software maintenance processes, requiring decision makers to assess and predict potential change impacts, feasibility and costs associated with a potential modification request.
The majority of existing techniques and tools supporting modification analysis focus on source-code-level analysis and require an understanding of the system and its implementation. In this research, we present a novel approach to support the identification of potential modification and re-testing efforts associated with a modification request, without the need for analyzing or understanding the system source code. We combine Use Case Maps with Formal Concept Analysis to provide a unique modification analysis framework that can assist decision makers during modification analysis at the requirements level. We demonstrate the applicability of our approach on a telephony system case study.",2007,0, 3622,A Scalable Parallel Deduplication Algorithm,"The identification of replicas in a database is fundamental to improving the quality of the information. Deduplication is the task of identifying replicas in a database that refer to the same real-world entity. This process is not always trivial, because data may be corrupted during their gathering, storing or even manipulation. Problems such as misspelled names, data truncation, data input in a wrong format, lack of conventions (like how to abbreviate a name), missing data or even fraud may lead to the insertion of replicas in a database. The deduplication process may be very hard, if not impossible, to be performed manually, since actual databases may have hundreds of millions of records. In this paper, we present our parallel deduplication algorithm, called FERAPARDA. By using probabilistic record linkage, we were able to successfully detect replicas in synthetic datasets with more than 1 million records in about 7 minutes using a 20-computer cluster, achieving an almost linear speedup. We believe that our results have no equivalent in the literature when it comes to the size of the data set and the processing time.",2007,0, 3623,Assessing the Object-level behavioral complexity in Object Relational Databases,"Object Relational Database Management Systems model sets of interrelated objects using references and collection attributes. The static metrics capture the internal quality of the database schema at the class level during design time. Complex databases like ORDB exhibit dynamism during runtime and hence require performance-level monitoring. This is achieved by measuring the access and invocations of the objects during runtime, thus assessing the behavior of the objects. Runtime coupling and cohesion metrics are deemed attributes for measuring the Object-level behavioral complexity. In this work, we evaluate the runtime coupling and cohesion metrics and assess their influence in measuring the behavioral complexity of the objects in ORDB. Further, these internal measures of object behavior are externalized in measuring the performance of the database in its entirety. Experiments on sample ORDB schemas are conducted using statistical analysis and correlation clustering techniques to assess the behavior of the objects in real time. The results indicate the significance of the object behavior in influencing the database performance.
The scope of this work and future work in extending this research form the concluding note.",2007,0, 3624,Technique Integration for Requirements Assessment,"In determining whether to permit a safety-critical software system to be certified and in performing independent verification and validation (IV&V) of safety- or mission-critical systems, the requirements traceability matrix (RTM) delivered by the developer must be assessed for accuracy. The current state of the practice is to perform this work manually, or with the help of general-purpose tools such as word processors and spreadsheets. Such work is error-prone and person-power intensive. In this paper, we extend our prior work in application of Information Retrieval (IR) methods for candidate link generation to the problem of RTM accuracy assessment. We build voting committees from five IR methods, and use a variety of voting schemes to accept or reject links from given candidate RTMs. We report on the results of two experiments. In the first experiment, we used 25 candidate RTMs built by human analysts for a small tracing task involving a portion of a NASA scientific instrument specification. In the second experiment, we randomly seeded faults in the RTM for the entire specification. Results of the experiments are presented.",2007,0, 3625,Statistical Assessment of Global and Local Cylinder Wear,"Assessment of cylindricity has been traditionally performed on the basis of cylindrical crowns containing a set of points that are supposed to belong to a controlled cylinder. As such, all sampled points must lie within a crown. In contrast, the present paper analyzes cylindricity for wear applications, in which a statistical trend is assessed, rather than assuring that all points fall within a given tolerance. Principal component analysis is used to identify the central axis of the sampled cylinder, allowing the actual (expected value of the) radius and axis of the cylinder to be found. Application of k-cluster and transitive closure algorithms allows particular areas of the cylinder which are especially deformed to be identified. For both the local areas and the global cylinder, a quantile analysis allows the degree of deformation of the cylinder to be graded numerically. The algorithms implemented are part of the CYLWEAR© system and are used to assess local and global wear of cylinders.",2007,0, 3626,Developing Intentional Systems with the PRACTIONIST Framework,"Agent-based systems have become a very attractive approach for dealing with the complexity of modern software applications and have proved to be useful and successful in some industrial domains. However, engineering such systems is still a challenge due to the lack of effective tools and actual implementations of very interesting and fascinating theories and models. In this area the so-called intentional stance of systems can be very helpful to efficiently predict, explain, and define the behaviour of complex systems, without having to understand how they actually work, but explaining them in terms of some mental qualities or attitudes, rather than their physical or design stance. In this paper we present the PRACTIONIST framework, which supports the development of PRACTIcal reasONIng sySTems according to the BDI model of agency, which uses some mental attitudes such as beliefs, desires, and intentions to describe and specify the behaviour of system components.
We adopt a goal-oriented approach and a clear separation between the deliberation phase and the means-ends reasoning, and consequently between the states of affairs to pursue and the way to do it. Moreover, PRACTIONIST allows developers to implement agents that are able to reason about their beliefs and the other agents' beliefs, expressed by modal logic formulas.",2007,0, 3627,Discovering Web Services Using Semantic Keywords,"With the increasing growth in popularity of Web services, the discovery of relevant services becomes a significant challenge. In order to enhance service discovery, it is necessary that both the Web service description and the request for discovering a service explicitly declare their semantics. Some languages and frameworks have been developed to support rich semantic service descriptions and discovery using ontology concepts. However, the manual creation of such concepts is tedious and error-prone, and many users accustomed to automatic tools might not want to invest their time in obtaining this knowledge. In this paper we propose a system that assists both service producers and service consumers in the discovery of semantic keywords which can be used to describe and discover Web services, respectively. First, our system semantically enhances the list of keywords extracted from the elements that comprise the description of a Web service and the user keywords used to discover a service. Second, an ontology matching process is used to discover matchings between the ontological terms of a service description and a request for service selection. Third, a subsumption reasoning algorithm tries to find service description(s) which match the user request.",2007,0, 3628,The Design of a Multimedia Protocol Analysis Software Environment,"We have developed a variant of Estelle, called Time-Estelle, which is able to express multimedia quality of service (QoS) parameters, synchronisation scenarios, and time-dependent and probabilistic behaviours of multimedia protocols. We have developed an approach to verifying a multimedia protocol specified in Time-Estelle. To predict the performance of a multimedia system, we have also developed a method for the performance analysis of a multimedia protocol specified in Time-Estelle. However, without the support of a software environment to automate the processes, verification and performance analysis methods would be very time-consuming. This paper describes the design of such a software environment.",2007,0, 3629,A Fault Detection Mechanism for Service-Oriented Architecture Based on Queueing Theory,"SOA is an ideal solution to application building, since it reuses existing services as much as possible. Fault tolerance is an important capability to ensure that SOA-based applications are highly reliable and available. However, fault tolerance is such a complex issue for most SOA providers that they hardly provide this capability in their products. This paper provides a queuing-theory-based algorithm for fault detection, which can be used to detect the services whose performance becomes unsatisfactory at runtime according to the QoS descriptor.
It schedules test cases in an order of precedence that increases their ability to meet performance goals such as code coverage and rate of fault detection. In previous work, test case prioritization techniques and metrics usually assumed that testing requirement priorities and test case costs are uniform. In this paper, based on varying testing requirement priorities and test case costs, we present a new, general test case prioritization technique and an associated metric. The case study illustrates that the rate of ""units-of-testing-requirement-priority-satisfied-per-unit-test-case-cost"" can be increased, and thus the testing quality and customer satisfaction can be improved.",2007,0, 3631,Towards Automatic Measurement of Probabilistic Processes,In this paper we propose a metric for finite processes in a probabilistic extension of CSP. The kernel of the metric corresponds to trace equivalence and most of the operators in the process algebra are shown to satisfy the non-expansiveness property with respect to this metric. We also provide an algorithm to calculate the distance between two processes to a prescribed discount factor in polynomial time. The algorithm has been implemented in a tool that helps us to measure processes automatically.,2007,0, 3632,A Reinforcement-Learning Approach to Failure-Detection Scheduling,"A failure-detection scheduler for an online production system must strike a tradeoff between performance and reliability. If failure-detection processes are run too frequently, valuable system resources are spent checking and rechecking for failures. However, if failure-detection processes are run too rarely, a failure can remain undetected for a long time. In both cases, system performability suffers. We present a model-based learning approach that estimates the failure rate and then performs an optimization to find the tradeoff that maximizes system performability. We show that our approach is not only theoretically sound but practically effective, and we demonstrate its use in an implemented automated deadlock-detection system for Java.",2007,0, 3633,Failure Analysis of Open Source J2EE Application Servers,"Open source J2EE Application Servers (J2EE ASs) have attracted industrial attention in recent years, but reliability is still an obstacle to their wide acceptance. The details of software failures in J2EE ASs were seldom discussed in the past, although such information is valuable for evaluating and improving their reliability. In this paper, we present a measurement-based failure classification and analysis. The presented results indicate that open source J2EE AS reliability needs improvement. Some notable results include: (1) the implementation of a widely used fault-tolerant mechanism, clustering, needs improvement; (2) only 15% of reported failures are removed within a week, and about 10% remained open by the time we finished the study; (3) different J2EE ASs have different unreliable services, so reliability improvement activities should be specific to both the AS and the applications.",2007,0, 3634,A Multivariate Analysis of Static Code Attributes for Defect Prediction,"Defect prediction is important in order to reduce test times by allocating valuable test resources effectively. In this work, we propose a model using multivariate approaches in conjunction with Bayesian methods for defect prediction. The motivation behind using a multivariate approach is to overcome the independence assumption of univariate approaches about software attributes.
Using Bayesian methods gives practitioners an idea about the defectiveness of software modules in a probabilistic framework rather than hard classification methods such as decision trees. Furthermore, the software attributes used in this work are chosen among the static code attributes that can easily be extracted from source code, which prevents human errors or subjectivity. These attributes are preprocessed with feature selection techniques to select the most relevant attributes for prediction. Finally, we compared our proposed model with the best results reported so far on public datasets and we conclude that using multivariate approaches can perform better.",2007,0, 3635,Distribution Metric Driven Adaptive Random Testing,"Adaptive random testing (ART) was developed to enhance the failure detection capability of random testing. The basic principle of ART is to enforce an even spread of random test cases inside the input domain. Various distribution metrics have been used to measure different aspects of the evenness of test case distribution. As expected, it has been observed that the failure detection capability of an ART algorithm is related to how evenly test cases are distributed. Motivated by such an observation, we propose a new family of ART algorithms, namely distribution metric driven ART, in which distribution metrics are key drivers for evenly spreading test cases inside ART. Our study uncovers several interesting results and shows that the new algorithms can spread test cases more evenly, and also have better failure detection capabilities.",2007,0, 3636,Cohesion Metrics for Predicting Maintainability of Service-Oriented Software,"Although service-oriented computing (SOC) is a promising paradigm for developing enterprise software systems, existing research mostly assumes the existence of black box services with little attention given to the structural characteristics of the implementing software, potentially resulting in poor system maintainability. Whilst there has been some preliminary work examining coupling in a service-oriented context, there has to date been no such work on the structural property of cohesion. Consequently, this paper extends existing notions of cohesion in OO and procedural design in order to account for the unique characteristics of SOC, allowing the derivation of assumptions linking cohesion to the maintainability of service-oriented software. From these assumptions, a set of metrics is derived to quantify the degree of cohesion of service oriented design constructs. Such design level metrics are valuable because they allow the prediction of maintainability early in the SDLC.",2007,0, 3637,Abstraction in Assertion-Based Test Oracles,"Assertions can be used as test oracles. However, writing effective assertions at the right abstraction level is difficult because, on the one hand, detailed assertions are preferred for thorough testing (i.e., to detect as many errors as possible), but, on the other hand, abstract assertions are preferred for readability, maintainability, and reusability. As assertions become a practical tool for testing and debugging programs, this is an important and practical problem to solve for the effective use of assertions. We advocate the use of model variables (specification-only variables whose abstract values are given as mappings from concrete program states) to write abstract assertions for test oracles.
We performed a mutation testing experiment to evaluate the effectiveness of the use of model variables in assertion-based test oracles. According to our experiment, assertions written in terms of model variables are as effective as assertions written without using model variables in detecting (injected) faults, and the execution time overhead of model variables is negligible. Our findings are applicable to other uses of runtime-checkable assertions.",2007,0, 3638,A Phase-Locked Loop for the Synchronization of Power Quality Instruments in the Presence of Stationary and Transient Disturbances,"Power quality instrumentation requires accurate fundamental frequency estimation and signal synchronization, even in the presence of both stationary and transient disturbances. In this paper, the authors present a synchronization technique for power quality instruments based on a single-phase software phase-locked loop (PLL), which is able to perform the synchronization, even in the presence of such disturbances. Moreover, the PLL is able to detect the occurrence of a transient disturbance. To evaluate if and how the synchronization technique is adversely affected by the application of stationary and transient disturbing influences, appropriate testing conditions have been developed, taking into account the requirements of the in-force standards and the presence of the voltage transducer.",2007,0, 3639,Rough Set Theory Measures to Knowledge Generation,"The accelerated growth of information volumes on processes, phenomena and reports brings about an increasing interest in the possibility of discovering knowledge from data sets. This is a challenging task because in many cases it deals with extremely large, inherently unstructured and fuzzy data, plus the presence of uncertainty. Therefore it is required to know a priori the quality of future procedures without using any additional information. In this paper we propose new measures to evaluate the quality of training sets used by algorithms for learning supervised classifiers. Our training set assessment relied on measures furnished by rough set theory. Our experimental results involved three classifiers (k-NN, C4.5 and MLP) on data sets from international databases. New training sets are built taking into account the results of the measures and the accuracy obtained by the classifiers, with the aim of inferring the accuracy that the classifiers would obtain using a new training set. This is possible using a rule generator (C4.5) and a function estimation algorithm (k-NN).",2007,0, 3640,On Metrics-Driven Software Process,"Metrics can drive the software process, because they form the basis of its quantitative management. There are two fundamental requirements in engineering: formal modeling and quantitative modeling. We must emphasize that metrics for software engineering are currently insufficient in quantification. In order to improve software and the software process, the factors that affect the schedule, cost and quality of software development should be measured. Metrics produce adjustable and iterative motions in the software process. Based on Jaynes' maximum entropy principle, this paper establishes a model to quantify the factors and introduces distance to compare the metric indicators. The authors propose that the metric estimation tree can be used and the nodes that stand for the software attributes in the tree can be marked with their corresponding evaluation values.
Dynamic feedback in the software process will be combined with AHP (analytic hierarchy process), and the development project and process will be studied and analyzed comprehensively, intensively and dynamically.",2007,0, 3641,Analysis of Anomalies in IBRL Data from a Wireless Sensor Network Deployment,"Detecting interesting events and anomalous behaviors in wireless sensor networks is an important challenge for tasks such as monitoring applications, fault diagnosis and intrusion detection. A key problem is to define and detect those anomalies with few false alarms while preserving the limited energy in the sensor network. In this paper, using concepts from statistics, we perform an analysis of a subset of the data gathered from a real sensor network deployment at the Intel Berkeley Research Laboratory (IBRL) in the USA, and provide a formal definition for anomalies in the IBRL data. By providing a formal definition for anomalies in this publicly available data set, we aim to provide a benchmark for evaluating anomaly detection techniques. We also discuss some open problems in detecting anomalies in energy-constrained wireless sensor networks.",2007,0, 3642,Quality Assessment Based on Attribute Series of Software Evolution,"Defect density and defect prediction are essential for efficient resource allocation in software evolution. In an empirical study we applied data mining techniques to value series based on evolution attributes such as number of authors, commit messages, lines of code, bug fix count, etc. Daily data points of these evolution attributes were captured over a period of two months to predict the defects in the subsequent two months in a project. For that, we developed models utilizing genetic programming and linear regression to accurately predict software defects. In our study, we investigated the data of three independent projects, two open source and one commercial software system. The results show that by utilizing series of these attributes we obtain models with high correlation coefficients (between 0.716 and 0.946). Further, we argue that prediction models based on series of a single variable are sometimes superior to the model including all attributes: in contrast to other studies that resulted in size or complexity measures as predictors, we have identified the number of authors and the number of commit messages to versioning systems as excellent predictors of defect densities.",2007,0, 3643,An Experimental Platform for Root Cause Diagnosis Research,"To obtain a healthy integrated production system that achieves defined quality goals in service oriented architecture (SOA), such as availability and performance, the timely detection and resolution of failures is needed. The goal of this thesis is to identify the primary or root causes (faults) of a set of observed symptoms in an integrated production system that indicate degradation and failure in system components leading to abnormal system performance. Our hypothesis is that if we combine more diagnosis methods (such as a larger symptom repository, dynamic analysis capabilities, different artificial intelligence (AI) reasoning capabilities or other available technologies and tools) into an existing tool such as the open source AspectJ-based diagnostic tool Glassbox, we can find a larger number of root causes and more precise, ""actual"" root causes.
We are building an experimental platform to validate this hypothesis.",2007,0, 3644,Enhancing the Assessment Environment within a Learning Management Systems,There are many open source or proprietary assessment tools on the market. These tools are embedded in learning management systems (LMS) and have the evaluation of learners as their goal. A problem that currently appears is that assessment tools are not always fair or accurate in classifying students according to their accumulated knowledge. We propose a software module called quality module (QM) that may run along with an assessment tool within an LMS. The QM performs offline analysis and obtains knowledge regarding the classification capability of the assessment tool. The QM also obtains knowledge for course managers regarding the course difficulty such that course materials and quizzes may be altered in order to obtain a more accurate assessment tool and thus better classification among learners.,2007,0, 3645,Mining Software Data,"Data mining techniques and machine learning methods are commonly used in several disciplines. It is possible that they could also provide a basis for quality assessment of software development processes and the final software product. The number of researchers who employ such techniques and methods for software cost and effort estimation is increasing. This article provides a software quality perspective on data mining and machine learning applications, and explains the current research challenges that are related to incorporating these tools based on several metrics derived from the software development processes and the software itself.",2007,0, 3646,Service-Oriented Business Process Modeling and Performance Evaluation based on AHP and Simulation,"With the evolution of grid technologies and the application of service-oriented architecture (SOA), more and more enterprises are integrated and collaborate with each other in a loosely coupled environment. A business process in that environment, i.e., the service-oriented business process (SOBP), shows high flexibility owing to its free selection and composition of different services. The performance of the business process usually has to be evaluated and predicted before it is implemented. It has special features since it includes both business-level and IT-level attributes. However, the existing modeling and performance evaluation methods for business processes are mainly concentrated on business-level performance, and the research on service selection and composition is usually limited to IT-level metrics. An extended activity-network-based SOBP model, its three-level performance metrics, and the corresponding calculation algorithm are proposed to fulfill these requirements. The advantages of our method in SOBP modeling and performance evaluation are also highlighted.",2007,0, 3647,The 18th IEEE International Symposium on Software Reliability - Title page,The following topics are dealt with: software system reliability; dependable software systems; validation; security testing and analysis; test automation; software security; metrics and measurements and software quality prediction.,2007,0, 3648,Software Reliability Modeling with Test Coverage: Experimentation and Measurement with A Fault-Tolerant Software Project,"As the key factor in software quality, software reliability quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation.
Although testing time is an important factor in reliability, it is likely that the prediction accuracy of such models can be further improved by adding other parameters which affect the final software quality. Meanwhile, in software testing, test coverage has been regarded as an indicator for testing completeness and effectiveness in the literature. In this paper, we propose a novel method to integrate time and test coverage measurements together to predict the reliability. The key idea is that failure detection is not only related to the time that the software experiences under testing, but also to what fraction of the code has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into one single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault-tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.",2007,0, 3649,Requirement Error Abstraction and Classification: A Control Group Replicated Study,"This paper is the second in a series of empirical studies about requirement error abstraction and classification as a quality improvement approach. The requirement error abstraction and classification method supports the developers' effort in efficiently identifying the root cause of requirements faults. By uncovering the source of faults, the developers can locate and remove additional related faults that may have been overlooked, thereby improving the quality and reliability of the resulting system. This study is a replication of an earlier study that adds a control group to address a major validity threat. The approach studied includes a process for abstracting errors from faults and provides a requirement error taxonomy for organizing those errors. A unique aspect of this work is the use of research from human cognition to improve the process. The results of the replication are presented and compared with the results from the original study. Overall, the results from this study indicate that the error abstraction and classification approach improves the effectiveness and efficiency of inspectors. The requirement error taxonomy is viewed favorably and provides useful insights into the source of faults. In addition, human cognition research is shown to be an important factor that affects the performance of the inspectors. This study also provides additional evidence to motivate further research.",2007,0, 3650,Prioritization of Regression Tests using Singular Value Decomposition with Empirical Change Records,"During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change in the code base can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases.
We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50% of test runs and that the highest-priority suggested test found an additional fault 60% of the time.",2007,0, 3651,Correlations between Internal Software Metrics and Software Dependability in a Large Population of Small C/C++ Programs,"Software metrics are often supposed to give valuable information for the development of software. In this paper we focus on several common internal metrics: Lines of Code, number of comments, Halstead Volume and McCabe's Cyclomatic Complexity. We try to find relations between these internal software metrics and metrics of software dependability: Probability of Failure on Demand and number of defects. The research is done using 59 specifications from a programming competition (The Online Judge) on the Internet. Each specification provides us between 111 and 11,495 programs for our analysis; the total number of programs used is 71,917. We excluded those programs that consist of a look-up table. The results for the Online Judge programs are: (1) there is a very strong correlation between Lines of Code and Halstead Volume; (2) there is an even stronger correlation between Lines of Code and McCabe's Cyclomatic Complexity; (3) none of the internal software metrics makes it possible to discern correct programs from incorrect ones; (4) given a specification, there is no correlation between any of the internal software metrics and the software dependability metrics.",2007,0, 3652,Using In-Process Testing Metrics to Estimate Post-Release Field Quality,"In industrial practice, information on the software field quality of a product is available too late in the software lifecycle to guide affordable corrective action. An important step towards remediation of this problem lies in the ability to provide an early estimation of post-release field quality. This paper evaluates the Software Testing and Reliability Early Warning for Java (STREW-J) metric suite leveraging the software testing effort to predict post-release field quality early in the software development phases. The metric suite is applicable for software products implemented in Java for which an extensive suite of automated unit test cases are incrementally created as development proceeds. We validated the prediction model using the STREW-J metrics via a two-phase case study approach which involved 27 medium-sized open source projects and five industrial projects. The error in estimation and the sensitivity of the predictions indicate that the STREW-J metric suite can be used effectively to predict post-release software field quality.",2007,0, 3653,Data Mining Techniques for Building Fault-proneness Models in Telecom Java Software,"This paper describes a study performed in an industrial setting that attempts to build predictive models to identify parts of a Java system with a high fault probability. The system under consideration is constantly evolving as several releases a year are shipped to customers. Developers usually have limited resources for their testing and inspections and would like to be able to devote extra resources to faulty system parts.
The main research focus of this paper is two-fold: (1) use and compare many data mining and machine learning techniques to build fault-proneness models based mostly on source code measures and change/fault history data, and (2) demonstrate that the usual classification evaluation criteria based on confusion matrices may not be fully appropriate to compare and evaluate models.",2007,0, 3654,Predicting Subsystem Failures using Dependency Graph Complexities,"In any software project, developers need to be aware of existing dependencies and how they affect their system. We investigated the architecture and dependencies of Windows Server 2003 to show how to use the complexity of a subsystem's dependency graph to predict the number of failures at statistically significant levels. Such estimations can help to allocate software quality resources to the parts of a product that need it most, and as early as possible.",2007,0, 3655,Development of an Online Real Time Web Accessible Low-Voltage Switchgear Arcing Fault Early Warning System,"This paper discusses the design and development aspects associated with the hardware and software of a Web accessible online real time low-voltage switchgear arcing fault early warning system based on NI C Series I/O modules and the NI cRIO-9004 controller. Based on the higher-order harmonic differential approach, the switchgear arcing fault early warning system was implemented. The NI cRIO-9004 controller with IP addressable and remote access capabilities provides options for comprehensive fault recording and diagnostic capabilities. A dedicated test bench was established in the laboratory environment to validate the algorithm and test the performance of this newly developed system.",2007,0, 3656,A Structural Complexity Metric for Software Components,"At present, the number of components is increasing rapidly and component-based software development (CBSD) is becoming an effective new software development paradigm; how to measure component reliability, maintainability and complexity is attracting more and more attention. This paper presents a metric to assess the structural complexity of components. Moreover, it proves that the metric satisfies several desirable properties.",2007,0, 3657,An AOP-based Performance Evaluation Framework for UML Models,"Performance is a key aspect of non-functional software requirements. While performance cross-cuts much of the software functionality, it is frequently difficult to express in traditional software development representations. In this paper we propose a framework for evaluating software performance using aspect-oriented programming (AOP) and examine its strengths and limitations. The framework provides a mechanism for supporting software performance evaluation prior to final software release. AOP is a promising software engineering technique for expressing cross-cutting characteristics of software systems. We consider software performance as a cross-cutting concern since it is not confined to only a few modules, but often spread over multiple functional and non-functional elements. A key strength of our framework is the use of the code instrumentation mechanism of AOP: AspectJ code for performance analysis is separated from Java code for functional requirements. Java code is executable regardless of AspectJ code and can be woven together with AspectJ code when performance is evaluated. Our performance evaluation modules, written in AspectJ, are semi-automatically or automatically generated from the UML [1] models with annotated performance profiles.
The AspectJ code generator facilitates performance evaluation by allowing performance requirements that have been specified in UML models to be analyzed. The UML diagrams can then be improved by reflecting the feedback from the results of the performance analysis.",2007,0, 3658,One in a baker's dozen: debugging debugging,"In the work of Voas (1993), 13 major software engineering issues needing further research were outlined: (1) what is software quality?, (2) what are the economic benefits behind existing software engineering techniques?, (3) does process improvement matter?, (4) can you trust software metrics and measurement?, (5) why are software engineering standards confusing and hard to comply with?, (6) are standards interoperable?, (7) how do we decommission software?, (8) where are reasonable testing and debugging stoppage criteria?, (9) why are COTS components so difficult to compose?, (10) why are reliability measurement and operational profile elicitation viewed suspiciously?, (11) can we design in the ""ilities"" both technically and economically?, (12) how do we handle the liability issues surrounding certification?, and (13) is intelligence and autonomic computing feasible? This paper focuses on a simple and easy-to-understand metric that addresses the eighth issue, a testing and debugging stoppage criterion based on expected probability of failure graphs.",2007,0, 3659,"Scalable, Adaptive, Time-Bounded Node Failure Detection","This paper presents a scalable, adaptive and time-bounded general approach to assure reliable, real-time node-failure detection (NFD) for large-scale, high-load networks comprised of commercial off-the-shelf (COTS) hardware and software. Nodes in the network are independent processors which may unpredictably fail either temporarily or permanently. We present a generalizable, multilayer, dynamically adaptive monitoring approach to NFD in which information about node failures is communicated to a small, designated subset of the nodes. This subset of nodes is notified of node failures in the network within an interval of time after the failures. Except under conditions of massive system failure, the NFD system has a zero false negative rate (failures are always detected within a finite amount of time after failure) by design. The NFD system continually adjusts to decrease the false alarm rate as false alarms are detected. The NFD design utilizes nodes that transmit, within a given locality, ""heartbeat"" messages to indicate that the node is still alive. We intend for the NFD system to be deployed on nodes using commodity (i.e., not hard-real-time) operating systems that do not provide strict guarantees on the scheduling of the NFD processes. We show through experimental deployments of the design that variations in the scheduling of heartbeat messages can cause large variations in the false-positive notification behavior of the NFD subsystem. We present a per-node adaptive enhancement of the NFD subsystem that dynamically adapts to provide run-time assurance of low false-alarm rates with respect to past observations of heartbeat scheduling variations while providing finite node-failure detection delays.
We show through experimentation that this NFD subsystem is highly scalable and has low resource overhead.",2007,0, 3660,Behavioral Fault Modeling for Model-based Safety Analysis,"Recent work in the area of model-based safety analysis has demonstrated key advantages of this methodology over traditional approaches, for example, the capability of automatic generation of safety artifacts. Since safety analysis requires knowledge of the component faults and failure modes, one also needs to formalize and incorporate the system fault behavior into the nominal system model. Fault behaviors typically tend to be quite varied and complex, and incorporating them directly into the nominal system model can clutter it severely. This manual process is error-prone and also makes model evolution difficult. These issues can be resolved by separating the fault behavior from the nominal system model in the form of a ""fault model"", and providing a mechanism for automatically combining the two for analysis. Towards implementing this approach we identify key requirements for a flexible behavioral fault modeling notation. We formalize it as a domain-specific language based on Lustre, a textual synchronous dataflow language. The fault modeling extensions are designed to be amenable to automatic composition into the nominal system model.",2007,0, 3661,Combining Software Quality Analysis with Dynamic Event/Fault Trees for High Assurance Systems Engineering,"We present a novel approach for probabilistic risk assessment (PRA) of systems which require high assurance that they will function as intended. Our approach uses a new model, i.e., a dynamic event/fault tree (DEFT), as a graphical and logical method to reason about and identify dependencies between system components, software components, failure events and system outcome modes. The method also explicitly includes software in the analysis and quantifies the contribution of the software components to overall system risk/reliability. The latter is performed via software quality analysis (SQA), where we use a Bayesian network (BN) model that includes diverse sources of evidence about fault introduction into software; specifically, information from the software development process and product metrics. We illustrate our approach by applying it to the propulsion system of the miniature autonomous extravehicular robotic camera (mini-AERCam). The software component considered for the analysis is the related guidance, navigation and control (GN&C) component. The results of SQA indicate a close correspondence between the BN model estimates and the developer estimates of software defect content. These results are then used in an existing theory of worst-case reliability to quantify the basic event probability of the software component in the DEFT.",2007,0, 3662,Analytic Model for Web Anomalies Classification,"In this paper, an analytic technique is proposed to improve dynamic Web application quality and reliability. The technique integrates orthogonal defect classification (ODC) and Markov chains to classify and analyze the collective view of Web errors. The error collective view will be built from access logs and defect data. This classification technique will enable viewing the Web errors in page, path, and application contexts. This technique will help in developing reliable Web applications that benefit from the understanding of Web anomalies and past issues.
The preliminary results of applying this approach to a case study from the telecommunications industry are included to demonstrate its viability.",2007,0, 3663,Parsimonious classifiers for software quality assessment,"Modeling to predict fault-proneness of software modules is an important area of research in software engineering. Most such models employ a large number of basic and derived metrics as predictors. This paper presents modeling results based on only two metrics, lines of code and cyclomatic complexity, using radial basis functions with Gaussian kernels as classifiers. Results from two NASA systems are presented and analyzed.",2007,0, 3664,Conquering Complexity,"In safety-critical systems, the potential impact of each separate failure is normally studied in detail and remedied by adding backups. Failure combinations, though, are rarely studied exhaustively; there are just too many of them, and most have a low probability of occurrence. Defect detection in software development is usually understood to be a best effort at rigorous testing just before deployment. But defects can be introduced in all phases of software design, not just in the final coding phase. Defect detection therefore shouldn't be limited to the end of the process, but practiced from the very beginning. In a rigorous model-based engineering process, each phase is based on the construction of verifiable models that capture the main decisions.",2007,0, 3665,EEG Spike Detection Based on Qualitative Modeling of Visual Observation,"An EEG spike wave detection method based on qualitative modeling of visual observation is proposed in this paper. The method, based on a qualitative measurement of the sharpness degree of waves at spike wave frequencies, constructs a qualitative description model of visual observation to discriminate spike waves from non-spike waves. The application of this method is illustrated with an example, and the corresponding result shows that this method is effective and direct to a certain extent.",2007,0, 3666,The effect on tracking loop performance when using the efficient search method for fast fine frequency acquisition,"When implementing a real-time software-based GNSS receiver, an accurate and fast signal acquisition and tracking method is even more necessary than in a hardware-based receiver owing to the large amount of computation time. Generally, in software-based GNSS receivers, the parallel code phase signal acquisition method using the FFT has been used, but it has the defect of limited frequency resolution depending on the input data length. Considering that the carrier frequency resolution must be on the order of tens of Hz for the tracking loop to perform correctly, a fine frequency acquisition method that minimizes the computation time is necessary to overcome this limitation. An examination of whether the method enhances the tracking loop's performance and reduces the computation time is also necessary. To evaluate these requirements, in this paper an efficient search method for fine frequency acquisition is suggested and tested. Furthermore, the effect on the signal tracking loop when the signal acquisition result obtained by the suggested method is applied is analyzed.",2007,0, 3667,Robustness of a Spoken Dialogue Interface for a Personal Assistant,"Although speech recognition systems have become more reliable in recent years, they are still highly error-prone.
Other components of a spoken language dialogue system must then be robust enough to handle these errors effectively, to prevent recognition errors from adversely affecting the overall performance of the system. In this paper, we present the results of a study focusing on the robustness of our agent-based dialogue management approach. We found that while the speech recognition software produced serious errors, the dialogue manager was generally able to respond reasonably to users' utterances.",2007,0, 3668,"Negotiation Dynamics: Analysis, Concession Tactics, and Outcomes","Given that a negotiation outcome is determined to a large extent by the successive offers exchanged by negotiating agents, it is useful to analyze dynamic patterns of the bidding, what Raiffa calls the ""negotiation dance"". Patterns in such exchanges may provide additional insight into the strategies used by the agents. The current practice of evaluating a negotiation strategy, however, is to primarily focus on fairness and quality aspects of the agreement. There is a lack of tools and methods that facilitate a precise analysis of the negotiation dynamics. To fill this gap, this paper introduces a method for analysis based on a classification of negotiation steps. The method provides the basic tools to perform a detailed and quantified analysis of a negotiation between two agents in terms of dynamic properties of the negotiation trace. The method can be applied to well-designed tournaments, but can also be used to analyze single 1-on-1 negotiations. Example findings of applying the method to analyze the ABMP and Trade-Off strategies show that sensitivity to the preferences of the opponent is independent of, respectively dependent on, a correct model of that opponent. Furthermore, the results illustrate that having domain knowledge is not always enough to avoid making unintentional steps.",2007,0, 3669,"Software-Based Online Detection of Hardware Defects Mechanisms, Architectural Support, and Evaluation","As silicon process technology scales deeper into the nanometer regime, hardware defects are becoming more common. Such defects are bound to hinder the correct operation of future processor systems, unless new online techniques become available to detect and to tolerate them while preserving the integrity of software applications running on the system. This paper proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extension (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade off performance with reliability without requiring any change to the hardware. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22% of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5%.
Based on a detailed RTL-level implementation of our technique, we find its area overhead to be quite modest, with only a 5.8% increase in total chip area.",2007,0, 3670,Mutable Protection Domains: Towards a Component-Based System for Dependable and Predictable Computing,"The increasing complexity of software poses significant challenges for real-time and embedded systems beyond those based purely on timeliness. With embedded systems and applications running on everything from mobile phones and PDAs to automobiles, aircraft and beyond, an emerging challenge is to ensure both the functional and timing correctness of complex software. We argue that static analysis of software is insufficient to verify the safety of all possible control flow interactions. Likewise, a static system structure upon which software can be isolated in separate protection domains, thereby defining immutable boundaries between system and application-level code, is too inflexible for the challenges faced by real-time applications with explicit timing requirements. This paper, therefore, investigates a concept called ""mutable protection domains"" that supports the notion of hardware-adaptable isolation boundaries between software components. In this way, a system can be dynamically reconfigured to maximize software fault isolation, increasing dependability, while guaranteeing that various tasks are executed according to specific time constraints. Using a series of simulations on multidimensional, multiple-choice knapsack problems, we show how various heuristics compare in their ability to rapidly reorganize the fault isolation boundaries of a component-based system, to satisfy resource constraints while simultaneously maximizing isolation benefit. Our ssh oneshot algorithm offers a promising approach to address system dynamics, including changing component invocation patterns, changing execution times, and mispredictions in isolation costs due to factors such as caching.",2007,0, 3671,A Discrete Differential Operator for Direction-based Surface Morphometry,"This paper presents a novel directional morphometry method for surfaces using first-order derivatives. Non-directional surface morphometry has been previously used to detect regions of cortical atrophy using brain MRI data. However, evaluating directional changes on surfaces requires computing gradients to obtain a full metric tensor. Non-directionality reduces the sensitivity of deformation-based morphometry to area-preserving deformations. By proposing a method to compute directional derivatives, this paper enables analysis of directional deformations on surfaces. Moreover, the proposed method exhibits improved numerical accuracy when evaluating mean curvature, compared to the so-called cotangent formula. The directional deformation of folding patterns was measured in two groups of surfaces and the proposed methodology allowed the detection of morphological differences that were not detected using previous non-directional morphometry. The methodology uses a closed-form analytic formalism rather than numerical approximation and is readily generalizable to any application involving surface deformation.",2007,0, 3672,9D-6 Signal Analysis in Scanning Acoustic Microscopy for Non-Destructive Assessment of Connective Defects in Flip-Chip BGA Devices,"Failure analysis in industrial applications often requires non-destructive methods that allow a variety of tests on a single device.
Scanning acoustic microscopy in the frequency range above 100 MHz provides high axial and lateral resolution, a moderate penetration depth and the required non-destructivity. The goal of this work was the development of a method for detecting and evaluating connective defects in densely integrated flip-chip ball grid array (BGA) devices. A major concern was the ability to automatically detect and differentiate the ball-connections from the surrounding underfill and the derivation of a binary classification between void and intact connection. Flip chip ball grid arrays with a 750 µm silicon layer on top of the BGA were investigated using time-resolved scanning acoustic microscopy. The microscope used was an Evolution II (SAM TEC, Aalen, Germany) in combination with a 230 MHz transducer. Short acoustic pulses were emitted into the silicon through an 8 mm liquid layer. In receive mode, reflected signals were recorded, digitized and stored on the SAM's internal hard drive. The off-line signal analysis was performed using custom-made MATLAB (The Mathworks, Natick, USA) software. The sequentially working analysis characterized echo signals by pulse separation to determine the positions of BGA connectors. Time signals originating at the connector interface were then investigated by wavelet analysis (WVA) and pulse separation analysis (PSA). Additionally, the backscattered amplitude integral (BAI) was estimated. For verification purposes, defects were evaluated by X-ray microscopy and scanning electron microscopy (SEM). It was observed that ball connectors containing cracks seen in the SEM images show decreased values of wavelet coefficients (WVC). However, the relative distribution was broader compared to intact connectors. It was found that the separation of pulses originating at the entrance and exit of the ball array corresponded to the condition of the connector. The success rate of the acoustic method in detecting voids was 96.8%, as verified by SEM images. Defects revealed by the acoustic analysis and confirmed by SEM could be detected by X-ray microscopy only in 64% of the analysed cases. The combined analyses enabled a reliable and non-destructive detection of defective ball-grid array connectors. The performance of the automatically working acoustical method seemed superior to X-ray microscopy in detecting defective ball connectors.",2007,0, 3673,P3B-2 Clutter From Multiple Scattering and Aberration in a Nonlinear Medium,"Aberration, clutter, and reverberation degrade the quality of ultrasonic images. When an acoustic pulse propagates through tissue these effects occur simultaneously and it is difficult to obtain independent estimates for the precise source of the point spread function broadening. The purpose of this paper is to characterize the sources of clutter and reverberation with a simulation of ultrasonic propagation through the abdomen. A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). Three dimensional solutions of the equation are verified with water tank measurements of a commercial diagnostic ultrasound transducer and are shown to be in excellent agreement in terms of the fundamental and harmonic acoustic fields, and the power spectrum at the focus. The linear and nonlinear components of the algorithm are also verified independently.
In the linear non-attenuating regime, solutions match results from Field II, a well-established software package used in transducer modeling, to within 0.3 dB. In addition to thermoviscous attenuation, we present a numerical solution of the relaxation attenuation laws that allows modeling of arbitrary frequency-dependent attenuation, such as that observed in tissue. A perfectly matched layer (PML) is implemented at the boundaries with a novel numerical implementation that allows the PML to be used with high-order discretizations. A -78 dB reduction in the reflected amplitude is demonstrated. The numerical algorithm is used to simulate a focused ultrasonic pulse propagating through a histologically determined representation of the human abdomen. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. The resulting harmonic image exhibits characteristic improvement in lesion boundary definition and contrast when compared to the fundamental image. We demonstrate a mechanism of harmonic image quality improvement by showing that the harmonic point spread function is less sensitive to reverberation clutter.",2007,0, 3674,Evolving Conditional Value Sets of Cost Factors for Estimating Software Development Effort,"The software cost estimation process is one of the most critical managerial activities related to project planning, resource allocation and control. As software development is a highly dynamic procedure, the difficulty of providing accurate cost estimations tends to increase with development complexity. The inherent problems of the estimation process stem from its dependence on several complex variables, whose values are often imprecise, unknown, or incomplete, and their interrelationships are not easy to comprehend. Current software cost estimation models do not inspire enough confidence and accuracy with their predictions. This is mainly due to the models' sensitivity to project data values, and this problem is amplified because of the vast variances found in historical project attribute data. This paper aspires to provide a framework for evolving value ranges for cost attributes and attaining mean effort values using the AI-oriented problem-solving approach of genetic algorithms, with a twofold aim: firstly, to provide effort estimations by analogy to the projects classified in the evolved ranges and, secondly, to identify any present correlations between effort and cost attributes.",2007,0, 3675,Cooperative mixed strategy for service selection in service oriented architecture,"In service oriented architecture (SOA), service brokers could find many service providers which offer the same function with different quality of service (QoS). Under this condition, users may encounter difficulty in deciding how to choose from the candidates to obtain optimal service quality. This paper tackles the service selection problem (SSP) of time-sensitive services using the theory of games creatively. Pure strategies proposed by current studies are proved to be improper for this problem because the decision conflicts among the users result in poor performance. A novel cooperative mixed strategy (CMS) with good computability is developed in this paper to solve such an inconstant-sum non-cooperative n-person dynamic game.
Unlike related research, CMS offers users an optimized probability mass function instead of a deterministic decision to select a proper provider from the candidates. Therefore, it is able to eliminate the fluctuation of queue length and raise the overall performance of SOA significantly. Furthermore, the stability and equilibrium of CMS are proved by simulations.",2007,0, 3676,A real-time implementation of chaotic contour tracing and filling of video objects on reconfigurable hardware,"This paper proposes a real-time, robust, scalable and compact field programmable gate array (FPGA) based architecture and its implementation of contour tracing and filling of video objects. Achieving real-time performance on general purpose sequential processors is difficult due to the heavy computational complexity in contour tracing and filling, thus hardware acceleration is inevitable. Our review of the existing related work confirms that the proposed architecture is much more feasible and cost effective, and features important algorithmic-specific qualities, including deleting dead contour branches and removing noisy contours, which are required in many video processing applications. Moreover, performance analysis shows that our hardware approach achieves an order of magnitude performance improvement over the existing pure software-based implementations. Our implementation attained an optimum processing clock of 156 MHz while utilizing minimal hardware resources and power. The proposed FPGA design was successfully simulated, synthesized and verified for its functionality, accuracy and performance on an actual hardware platform which consists of a frame grabber with a user programmable Xilinx Virtex-4 SX35 FPGA.",2007,0, 3677,A Distributed Algorithm for Service Commitment in Allocating Services to Applications,"In this paper, we extend our previous work on committing service instances to the applications in a service-oriented environment, and we propose a distributed heuristic algorithm which is able to estimate the number of future service instances needed by each application instance at a future time. Also, this algorithm does not assume any specific type of distribution function for service execution times and application interarrival times. In this paper, after presenting the mathematical basis for the proposed distributed algorithm, we explain how this algorithm can be implemented in a distributed environment, and through the simulations and performance comparisons, we show that the proposed algorithm improves the performance of the system significantly, compared to a no-commitment policy system and a full-commitment policy system.",2007,0, 3678,Implementation and Performance Evaluation of an Adaptable Failure Detector for Distributed System,"Unreliable failure detectors have been an important abstraction to build dependable distributed applications over asynchronous distributed systems subject to faults. Their implementations are commonly based on timeouts to ensure algorithm termination. However, for systems built on the Internet, it is hard to estimate this time value due to traffic variations. In order to increase the performance, self-tuned failure detectors dynamically adapt their timeouts to the communication delay behavior plus a safety margin. In this paper, we propose a new implementation of a failure detector. This implementation is a variant of the heartbeat failure detector which is adaptable and can support scalable applications.
In this implementation we dissociate two aspects: a basic estimation of the expected arrival date to provide a short detection time, and an adaptation of the quality of service. The latter is based on two principles: an adaptation layer and a heuristic to adapt the sending period of ""I am alive"" messages.",2007,0, 3679,Research of Software Reliability Based on Synthetic Architecture,"In this paper we introduce some typical architecture-based software reliability models for software architecture reliability estimation, and modify the software reliability estimation model so as to improve the precision of estimating software architecture reliability. Our approach is based on Markov chain properties and software architecture. At the same time, we propose a method for using the modified model to make a reliability estimation in a synthetic architecture so as to expand the application domain of this model. In addition, we conduct an experiment to validate this method.",2007,0, 3680,UML-based safety analysis of distributed automation systems,"HAZOP (hazard and operability) studies are carried out to analyse complex automated systems, especially large and distributed automated systems. The aim is to systematically assess the automated system regarding possibly negative effects of deviations from standard operation on safety and performance. Today, HAZOP studies require significant manual effort and tedious work of several costly experts. The authors of this paper propose a knowledge-based approach to support the HAZOP analysis and to reduce the required manual effort. The main ideas are (1) to incorporate knowledge about typical problems in automation systems, in combination with their causes and their effects, in a rule base, and (2) to apply this rule base by means of a rule engine on the description of the automated system under consideration. This yields a list of possible dangers regarding safety risks and performance reductions. These results can be used by the automation experts to improve the system's design. Within this paper, the general approach is presented, and an example application is dealt with where the system design is given in the form of a UML class diagram, and the HAZOP study is focused on hazards caused by faulty communication within the distributed system.",2007,0, 3681,Neural network controlled voltage disturbance detector and output voltage regulator for Dynamic Voltage Restorer,"This paper describes a high power DVR (Dynamic Voltage Restorer) with a neural network controlled voltage disturbance detector and output voltage regulator. Two essential parts of DVR control are how to detect a voltage disturbance such as a voltage sag and how to compensate for it as fast as possible. The new voltage disturbance detector was implemented by using the delta rule of neural network control. Through the proposed method, we can instantaneously track the amplitude of each phase voltage under severe unbalanced voltage conditions. Compared to the conventional synchronous reference frame method, the proposed one shows the minimum time delay in determining the instant of the voltage disturbance event. Also, a modified d-q transformed voltage regulator for single-phase inverters was adopted to obtain fast dynamic response and robustness, where three independent single-phase inverters are controlled by using the amplitude of the source voltage obtained by the neural network controller.
By using the proposed voltage regulator, voltage disturbances such as voltage sag can be quickly compensated to the nominal voltage level. The proposed disturbance detector and voltage regulator were applied to a high power DVR (1000 kVA @ 440 V) that was developed for application in a semiconductor manufacturing plant. The performance of the proposed DVR control was verified through computer simulation and experimental results. Finally, conclusions are given.",2007,0, 3682,Fast and reliable plagiarism detection system,"Plagiarism and similarity detection software has been well known in universities for years. Despite the variety of methods and approaches used in plagiarism detection, the typical trade-off between the speed and the reliability of the algorithm still remains. We introduce a new two-step approach to plagiarism detection that combines high algorithmic performance and the quality of pairwise file comparison. Our system uses a fast detection method to select suspicious files only, and then invokes precise (and slower) algorithms to get reliable results. We show that the proposed method does not noticeably reduce the quality of the pairwise comparison mechanism while providing better speed characteristics.",2007,0, 3683,An Approach for Assessment of Reliability of the System Using Use Case Model,"Existing approaches to reliability assessment of a system depend entirely on the expertise and knowledge of system analysts and on the computation of the usage probability of different user operations. The existing approach therefore cannot be taken as accurate, particularly when dealing with new or unfamiliar systems. Further, modern systems are too large and complex to manipulate without any automation. Addressing these issues, we propose an analytical technique in our work. In this paper, we propose a novel approach to assess the reliability of a system using the use case model. We consider a metric to assess the reliability of a system under development.",2007,0, 3684,Assessing tram schedules using a library of simulation components,"Assessing tram schedules is important to assure an efficient use of infrastructure and for the provision of a good quality service. Most existing infrastructure modeling tools provide support to assess an individual aspect of rail systems in isolation, and do not provide enough flexibility to assess many aspects that influence system performance at once. We propose a library of simulation components that enables rail designers to assess different system configurations. In this paper we show how we implemented some basic safety measures used in rail systems such as: reaction to control objects (e.g. traffic lights), priority rules, and block safety systems.",2007,0, 3685,Discrete event simulation modeling of resource planning and service order execution for service businesses,"In this paper, we present a framework for developing discrete-event simulation models for resource-intensive service businesses. The models simulate interactions of activities of demand planning of service engagements, supply planning of human resources, attrition of resources, termination of resources and execution of service orders to estimate business performance of resource-intensive service businesses. The models estimate serviceability, costs, revenue, profit and quality of service businesses. The models are also used in evaluating effectiveness of various resource management analytics and policies.
The framework is aided by an information meta-model, which componentizes modeling objects of service businesses and allows effective integration of the components.",2007,0, 3686,Transactional Memory Execution for Parallel Multithread Programming without Lock,"With the increasing popularity of the shared-memory programming model, especially with the advent of multicore processors, applications need to become more concurrent to take advantage of the increased computational power provided by chip-level multiprocessing. Traditionally, locks are used to enforce data dependence and timing constraints between the various threads. However, locks are error-prone and often lead to unwanted race conditions, priority inversion, or deadlock. Therefore, recent waves of research projects are exploring transactional memory systems as an alternative synchronization mechanism to locks. This paper presents a software transactional memory execution model for parallel multithread programming without locks.",2007,0, 3687,GLIMS: Progress in mapping the world’s glaciers,"The global land ice measurement from space (GLIMS) project is a cooperative effort of over sixty institutions world-wide with the goal of inventorying a majority of the world's estimated 160000 glaciers. Data sent from regional center analysts to the GLIMS team at the national snow and ice data center (NSIDC) in Boulder, Colorado are inserted into a geospatial database. The GLIMS Glacier Database now contains outlines of over 58 000 glaciers. As submissions to the database from all over the world increase, we find that we must accommodate a greater diversity in the character and quality of the data submitted than was originally anticipated. We present an overview of the current glacier outline inventory, and examine issues related to data coverage and data quality. A significant achievement of the GLIMS project is that the database and interfaces to it, as well as the GLIMSView tool, provided to help in the production of GLIMS analyses, are all built from open source software. Issues we've dealt with to achieve a high-quality glacier database include: 1. data submitted without proper metadata; 2. data submitted with incorrect georegistration; 3. varying analyst interpretations of what exactly constitutes a glacier; 4. arbitrary termination of glacier boundaries at political boundaries; 5. arbitrary termination of glacier boundaries at edges of available satellite images.",2007,0, 3688,Adaptive digital synchronization of measuring window in low-cost DSP power quality measurement systems,"Problems with power quality encountered in low voltage hospital power networks are increasingly common. This is due to the fact that there are usually a few power networks, backup power sources and a wide range of appliances. That situation may cause disturbances in the power network and decrease the quality of electric energy. The problem is significant because power supply networks in hospitals provide electricity for very sensitive and expensive measuring apparatus which guards patients' lives. These arguments led to the creation of relatively cheap systems for evaluating energy quality and detecting disturbances. This paper presents both hardware and software optimization of a method of measuring the quality of electric energy which meets the basic standards (EN 50160, EN 61000-4-7, EN 61000-4-30). The article justifies the usage of the modern floating-point digital signal processor TMS320C6713 as the computing unit.
It is a low-budget alternative which can be used in other similar cases. The presented hardware selection has additional specific features suitable for application in a hospital power supply network. The other part of the paper presents the algorithm of the software which was implemented on the DSP system. It includes adaptive digital synchronization of the measuring window with the power frequency. The algorithm for a digital phase-locked loop is also presented.",2007,0, 3689,"Multiple signal processing techniques based power quality disturbance detection, classification, and diagnostic software","This work presents the development steps of the software PQMON, which targets power quality analysis applications. The software detects and classifies electric system disturbances. Furthermore, it also makes diagnostics about what is causing such disturbances and suggests lines of action to mitigate them. Among the disturbances that can be detected and analyzed by this software are: harmonics, sag, swell and transients. PQMON is based on multiple signal processing techniques. The wavelet transform is used to detect the occurrence of the disturbances. The techniques used to do such feature extraction are: fast Fourier transform, discrete Fourier transform, periodogram, and statistics. An adaptive artificial neural network is also used due to its robustness in extracting features such as fundamental frequency and harmonic amplitudes. The probable causes of the disturbances are contained in a database, and their association to each disturbance is made through a cause-effect relationship algorithm, which is used for diagnosis. The software also allows the users to include information about the equipment installed in the system under analysis, resulting in the direct nomination of any installed equipment during the diagnostic phase. In order to prove the effectiveness of the software, simulated and real signals were analyzed by PQMON, showing its excellent performance.",2007,0, 3690,Visualization for Software Evolution Based on Logical Coupling and Module Coupling,"In large-scale software projects, developers produce a large amount of software over a long period. The source code is frequently revised in such projects and evolves to become complex. Measurements of software complexity have been proposed, such as module coupling and logical coupling. In the case of module coupling, if developers copy pieces of source code to a new module, module coupling cannot detect the relationship between the pieces of source code although the pieces of the two modules are strongly coupled. On the other hand, with logical coupling, if two modules are accidentally revised at the same time by the same developer, logical coupling will judge the two modules to be strongly coupled although they have no relation. Therefore, we propose a visualization technique and software complexity metrics for software evolution. The basic idea is that modules with strong module coupling should also have strong logical coupling. If the gap between the set of modules with strong module coupling and the set of modules with strong logical coupling is large, the software complexity will be large. In addition, our visualization technique helps developers understand changes in software complexity.
As a result of experiments on open source projects, we confirmed that the proposed metrics and visualization technique were able to detect high-risk projects with many bugs.",2007,0, 3691,A Large-Scale Empirical Comparison of Object-Oriented Cohesion Metrics,"Cohesion is an attribute of software design quality for which many metrics have been proposed. The different proposals have been made largely on theoretical grounds, with little evidence of actual use. This makes it difficult to provide advice to software developers as to how to interpret the measurements any given metric produces. This paper presents the first large-scale empirical study of object-oriented cohesion metrics. We apply 16 metrics from the literature, as well as a number of variations, to 92 open source and industry Java applications ranging in size from a few classes to several thousand, over 100,000 classes in all. Our results show that by and large applications have similar distributions of measurements according to any given metric, but that the distributions can be quite different across metrics. This provides useful information for the ongoing empirical validation efforts for cohesion metrics.",2007,0, 3692,Improving Effort Estimation Accuracy by Weighted Grey Relational Analysis During Software Development,"Grey relational analysis (GRA), a similarity-based method, provides acceptable prediction performance in software effort estimation. However, we found that conventional GRA methods only consider non-weighted conditions while predicting effort. Essentially, each feature of a project may have a different degree of relevance in the process of comparing similarity. In this paper, we propose six weighted methods, namely, non-weight, distance-based weight, correlative weight, linear weight, nonlinear weight, and maximal weight, to be integrated into GRA. Three public datasets are used to evaluate the accuracy of the weighted GRA methods. Experimental results show that the weighted GRA achieves better precision than the non-weighted GRA. Specifically, the linearly weighted GRA greatly improves accuracy compared with the other weighted methods. To sum up, the weighted GRA not only improves the accuracy of prediction but is also an alternative method that can be applied throughout the software development life cycle.",2007,0, 3693,A Novel Approach of Prioritizing Use Case Scenarios,"Modern software systems are very large and complex. As the size and complexity of software increases, software developers feel an urgent need for better management of the different activities during the course of software development. In this paper, we present an approach to use case scenario prioritization suitable for project planning at an early phase of software development. We consider only the use case model in our work. For prioritization, we focus on how critical a scenario path is, which essentially depends on the density of overlap of a scenario path's sub-paths with other scenario path(s) of a use case. Our proposed approach provides an analytical solution to use case scenario prioritization and is highly effective in project management related activities, as substantiated by our experimental results.",2007,0, 3694,A Model-Driven Simulation for Performance Evaluation of 1xEV-DO,"Finding an appropriate approach to evaluate the capacity of a radio access technology with respect to a wide range of parameters (radio signal quality, quality of service, user mobility, network resources) has become increasingly important in today's wireless network planning.
In this paper, we propose a model-based simulation to assess the capabilities of the 1xEV-DO radio access technology. Results are expressed in terms of throughputs and signal quality (C/I and Ec/Io) for the requested services and applications.",2007,0, 3695,Extended Model Design for Quality Factor Based Web Service Management,"Recently, web services have played a key role in implementing enterprise-level applications based on Service Oriented Architecture (SOA). To achieve this goal, a realistic web service must meet both the functional and non-functional requirements of its consumers. However, there is currently no standard capable of accurately representing the quality factors of web services. To manage web services, their quality needs to be monitored periodically or aperiodically when their status changes, such as availability, performance and security policy. We present an extended web services framework based on the SOA structure for providing information about the quality of web services and build a prototype, WebServiceBot, for applying quality factors. To build the prototype, we first define a service description by using WSDL annotated with quality factors. The extended web services framework is used to determine appropriate web services which meet non-functional requirements. Using the proposed framework, we can improve the worst-case predictability of applications using totally uncontrollable web services.",2007,0, 3696,Intelligent Selection of Fault Tolerance Techniques on the Grid,"The emergence of computational grids has led to an increased reliance on task schedulers that can guarantee the completion of tasks that are executed on unreliable systems. There are three common techniques for providing task-level fault tolerance on a grid: retrying, replicating, and checkpointing. While these techniques are varyingly successful at providing resilience to faults, each of them presents a tradeoff between performance and resource cost. As such, tasks having unique urgency requirements would ideally be placed using one of the techniques; for example, urgent tasks are likely to prefer the replication technique, which guarantees timely completion, whereas low priority tasks should not incur any extra resource cost in the name of fault tolerance. This paper introduces a placement and selection strategy which, by computing the utility of each fault tolerance technique in relation to a given task, finds the set of allocation options which optimizes the global utility. Heuristics which take into account the value offered by a user, the estimated resource cost, and the estimated response time of an option are presented. Simulation results show that the resulting allocations have improved fault tolerance, runtime, profit, and allow users to prioritize their tasks.",2007,0, 3697,A Case-based Reasoning with Feature Weights Derived by BP Network,"Case-based reasoning (CBR) is a methodology for problem solving and decision-making in complex and changing environments. This study investigates the performance of a hybrid case-based reasoning method that integrates a multi-layer BP neural network with case-based reasoning (CBR) algorithms for deriving feature weights. This approach is applied to a fault detection and diagnosis (FDD) system that involves the examination of several criteria. The correct identification of the underlying mechanism of a fault is an important step in the entire fault analysis process.
The trained BP neural network provides the basis to obtain attribute weights, whereas CBR serves as a classifier to identify the fault mechanism. Different parameters of the hybrid methods were varied to study their effect. The results indicate that better performance could be achieved by the proposed hybrid method than by using conventional CBR alone.",2007,0, 3698,A Fast PageRank Convergence Method based on the Cluster Prediction,"In recent years, search engines have played a key role among Web applications, and link analysis algorithms are the major methods for measuring the importance values of Web pages. These algorithms employ the conventional flat Web graph built from Web pages and their link relations to obtain the relative importance of Web objects. Previous studies have observed that PageRank-like link analysis algorithms have a bias against newly created Web pages. A new ranking algorithm called Page Quality was then proposed to solve this issue. Page Quality predicts future ranking values by the difference rate between the current ranking value and the previous ranking value. In this paper, we propose a new algorithm called DRank to diminish the bias of PageRank-like link analysis algorithms and attain better performance than Page Quality. In this algorithm, we model the Web graph as a three-layer graph which includes a Host Graph, Directory Graph and Page Graph by using the hierarchical structure of URLs and the link structure of Web pages. We calculate the importance of Hosts, Directories and Pages using the weighted graph we built, and then observe the clustering distribution of the PageRank values of pages within directories. We can then predict more accurate page importance values to diminish the bias against newly created pages using the clustering characteristic of PageRank. Experimental results show that the DRank algorithm works well in predicting future ranking values of pages and outperforms Page Quality.",2007,0, 3699,A PC-Based System for Automated Iris Recognition under Open Environment,"This paper presents an entirely automatic system designed to realize accurate and fast personal identification from iris images acquired under an open environment. The acquisition device detects the appearance of a user at any moment using an ultrasonic transducer, guides the user in positioning himself in the acquisition range and acquires the best iris image of the user through quality evaluation. Iris recognition is done using the bandpass characteristic of wavelets and wavelet transform principles for detecting singularities to extract iris features and adopting Hamming distance to match iris codes. The authentication service software can enroll a user's iris image into a database and perform verification of a claimed identity or identification of an unknown entity. The identification rate is high and the recognition result is available about 6 s after the start of iris image acquisition. This system is promising for use in applications requiring personal identification.",2007,0, 3700,Fast global elimination algorithm and low-cost VLSI design for motion estimation,"In this paper, an efficient fast full search motion estimation algorithm, which is accelerated by a sub-sampled preprocessing step, is proposed to reduce the computation costs of the block matching algorithm in video compression. The proposed fast global elimination algorithm (FGEA) can skip unnecessary search positions by means of the preprocessing unit.
We employ the preprocessing unit to calculate the sub-sampled sum of subblock absolute difference (sub-sampled SSAD) for each candidate macro-block (MB) in the search range, where the preprocessing can select a number of probable candidate motion blocks. Then we calculate the sum of absolute difference (SAD) for these selected candidate blocks and determine the final motion vector. The proposed FGEA, which uses the sub-sampling technique, reduces the computation cost of preprocessing and maintains the same compensated video quality in comparison with the previous GEA, which provides almost the same quality as the well-known full search block matching algorithm (FSBMA). Software simulation results show that the proposed fast block matching motion estimation algorithm can reduce absolute difference (AD) costs by about 38% and run on average 1.64 times faster than the previous GEA method while providing the same compensated video quality as the FSBMA. Through VLSI implementation, the realization of the FGEA only needs 22.1 k gate counts to process 30 frames/sec CIF-format video sequences in real time.",2007,0, 3701,An embedded CPU based automatic ranging system for vehicles,"Based on the structure and theory of an optical-mechanical-electrical automatic ranging system for vehicles, this paper discusses the architecture and design of an automatic ranging system with risk estimation and decision making based on an embedded CPU. The embedded system solution has good real-time performance and can improve the data processing capability of the system. The paper mainly introduces the defects of the SCM (single-chip microcomputer) based ranging system, and the hardware architecture, software architecture and other key technologies of the embedded system based ranging system. It also introduces the solution of using two lidar (light detection and ranging) units for the ranging method.",2007,0, 3702,On discrete event diagnosis methods for continuous systems,"Fault detection and isolation is a key component of any safety-critical system. Although diagnosis methods based on discrete event systems have been recognized as a promising framework, they cannot be easily applied to systems with complex continuous dynamics. This paper presents a novel approach for discrete event system diagnosis of continuous systems based on a qualitative abstraction of the measurement deviations from the nominal behavior. We systematically derive a diagnosis model, provide diagnosability analysis, and design a diagnoser. Our results show that the proposed approach is easily applicable and can be used for online diagnosis of abrupt faults in continuous systems.",2007,0, 3703,Commissioning of the ATLAS offline software with cosmic rays,"The ATLAS experiment at the LHC is now taking its first data by collecting cosmic ray events. The full reconstruction chain including all sub-systems (inner detector, calorimeters and muon spectrometer) is being commissioned with this kind of data for the first time. Detailed analyses are being performed in order to provide ATLAS with its first alignment and calibration constants and to study the combined muon performance. Combined monitoring tools and event displays have also been developed to ensure good data quality.
A simulation of cosmic events according to the different detector and trigger setups has also been provided to verify that it gives a good description of the data.",2007,0, 3704,Validation of GATE Monte Carlo simulation of the performance characteristics of a GE eXplore VISTA small animal PET system,"In this work, we validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation software to model a GE eXplore VISTA small animal PET system. GATE was used to build our three-dimensional PET simulation model. The simulated acquired data were stored in a list-mode file and processed by the same reconstruction scheme as that used in the real scanner. To validate the simulation results, several phantoms used in performance assessment experiments were modeled for simulating the system sensitivity, spatial resolution, scatter fraction and count rate response. A realistic voxelized mouse phantom was applied to present the overall quality of our model and compared with mouse images obtained from a real exam.",2007,0, 3705,ACCE: Automatic correction of control-flow errors,"Detection of control-flow errors at the software level has been studied extensively in the literature. However, there has not been any published work that attempts to correct these errors. Low-cost correction of CFEs is important for real-time systems where checkpointing is too expensive or impossible. This paper presents automatic correction of control-flow errors (ACCE), an efficient error correction algorithm involving the addition of redundant code to the program. ACCE has been implemented by modifying GCC, a widely used C compiler, and performance measurements show that the overhead is very low. Fault injection experiments on SPEC and MiBench benchmark programs compiled with ACCE show that the correct output is produced with high probability and that CFEs are corrected with a latency of a few hundred instructions.",2007,0, 3706,A methodology for detecting performance faults in microprocessors via performance monitoring hardware,"Speculative execution of instructions boosts performance in modern microprocessors. Control and data flow dependencies are overcome through speculation mechanisms, such as branch prediction or data value prediction. Because of their inherent self-correcting nature, the presence of defects in speculative execution units does not affect their functionality (and escapes traditional functional testing approaches) but imposes severe performance degradation. In this paper, we investigate the effects of performance faults in speculative execution units and propose a generic, software-based test methodology, which utilizes available processor resources: hardware performance monitors and processor exceptions, to detect these faults in a systematic way. We demonstrate the methodology on a publicly available fully pipelined RISC processor that has been enhanced with the most common speculative execution unit, the branch prediction unit. Two popular schemes of predictors built around a Branch Target Buffer have been studied, and experimental results show significant improvements in both cases: fault coverage of the branch prediction units increased from 80% to 97%.
Detailed experiments for the application of a functional self-testing methodology on a complete RISC processor incorporating both a full pipeline structure and a branch prediction unit have not been previously given in the literature.",2007,0, 3707,High Performance Software-Hardware Network Intrusion Detection System,"Network intrusion detection systems (NIDS) and quality of service (QoS) demands have been steadily increasing over the past few years. Current solutions using software become inefficient running on high-speed, high-volume networks and will end up dropping packets. Hardware solutions are available and result in much higher efficiency but present problems such as flexibility and cost. Our proposed system uses a modified version of Snort, a robust, widely deployed open-source NIDS. It has been found that Snort spends at least 30%-60% of its processing time doing pattern matching. Our proposed system runs Snort in software until it gets to the pattern matching function and then offloads that processing to the field programmable gate array (FPGA). The software can then go on to other processing while it waits for the results from the FPGA. The hardware is able to process data at up to 1.7 GB/s on one Xilinx XC2VP100 FPGA. The design is scalable and will allow for multiple FPGAs to be used in parallel to increase the processing speed even further.",2007,0, 3708,A parallel controls software approach for PEP II: AIDA & MATLAB middle layer,"The controls software in use at PEP II (Stanford control program - SCP) had originally been developed in the eighties. It is very successful in routine operation but due to its internal structure it is difficult and time-consuming to extend its functionality. This is problematic during machine development and when solving operational issues. Routinely, data has to be exported from the system, analyzed offline, and calculated settings have to be reimported. Since this is a manual process, it is time-consuming and error-prone. Setting up automated processes, as is done for MIA (model independent analysis), is also time-consuming and specific to each application. Recently, there has been a trend at light sources to use MATLAB [1] as the platform to control accelerators using a ""MATLAB middle layer"" [2] (MML), and so-called channel access (CA) programs to communicate with the low level control system (LLCS). This has proven very successful, especially during machine development time and troubleshooting. A special CA code, named AIDA (Accelerator Independent Data Access [3]), was developed to handle the communication between MATLAB, modern software frameworks, and the SCP. The MML had to be adapted for implementation at PEP II. Colliders differ significantly in their designs compared to light sources, which poses a challenge. PEP II is the first collider at which this implementation is being done. We will report on this effort, which is still ongoing.",2007,0, 3709,The LHC beam pipe waveguide mode reflectometer,"The waveguide-mode reflectometer for obstacle detection in the LHC beam pipe has been intensively used for more than 18 months. The ""Assembly"" version is based on the synthetic pulse method using a modern vector network analyzer. It has mode selective excitation couplers for the first TE and TM mode and uses a specially developed waveguide mode dispersion compensation algorithm with external software.
In addition, there is a similar ""In Situ"" version of the reflectometer which uses permanently installed microwave couplers at the end of each of the nearly 3 km long LHC arcs. During installation a considerable number of unexpected objects have been found in the beam pipes and subsequently removed. Operational statistics and lessons learned are presented and the overall performance is discussed.",2007,0, 3710,Variable block size error concealment scheme based on H.264/AVC non-normative decoder,"As the newest video coding standard, H.264/AVC can achieve high compression efficiency. At the same time, due to its highly efficient predictive coding and variable length entropy coding, it is more sensitive to transmission errors. So error concealment (EC) in H.264 is very important when compressed video sequences are transmitted over error-prone networks and erroneously received. To achieve higher EC performance, this paper proposes a variable block size error concealment scheme (VBSEC) by utilizing the new concept of variable block size motion estimation (VBSME) in the H.264 standard. This scheme provides four EC modes and four sub-block partitions. The whole corrupted macro-block (MB) will be divided into variable block sizes adaptively according to the actual motion. More precise motion vectors (MV) will be predicted for each sub-block. We also produce a more accurate distortion function based on the spatio-temporal boundary matching algorithm (STBMA). By utilizing the VBSEC scheme based on our STBMA distortion function, we can reconstruct the corrupted MB in the inter frame more accurately. The experimental results show that our proposed scheme can obtain maximum PSNR gains of up to 1.72 dB and 0.48 dB, respectively, compared with the boundary matching algorithm (BMA) adopted in the JM11.0 reference software and STBMA.",2007,0, 3711,Hardware friendly background analysis based complexity reduction in H.264/AVC multiple reference frames motion estimation,"In the H.264 standard, multiple reference frame motion estimation (MRF-ME) can help to generate small residues and improve performance. However, MRF-ME is also a computation-intensive task for video coding systems. Many software-oriented fast algorithms have been proposed to shorten the MRF-ME process. For a hardwired real-time encoder, the division of the ME part into two pipeline stages degrades the efficiency of many fast algorithms. This paper presents a hardware-friendly MRF-ME algorithm to reduce the computation complexity of the MRF-ME procedure. The proposed algorithm is based on the analysis of macroblock (MB) features and restricts the search range for static background MBs. Experimental results show that, with negligible video quality degradation, the proposed background analysis based MRF-ME algorithm can reduce ME time by 41.75% on average for sequences with a static background. Moreover, the proposed algorithm is compatible with other fast algorithms and friendly to hardware implementation of an H.264 real-time encoder.",2007,0, 3712,A policy controlled IPv4/IPv6 network emulation environment,"In QoS enabled IP-based networks, QoS signaling and policy control are used to control the access to network resources and their usage. The IETF proposed standard protocol for policy control is the Common Open Policy Service (COPS) protocol, which has also been adopted in 3GPP IP Multimedia Subsystem (IMS) Release 5.
This paper presents a prototype of a policy-controlled IPv4/IPv6 network emulation environment, in which it is possible to specify the policy control and emulate, over a period of time, QoS parameters such as bandwidth, packet delay, jitter, and packet discard probability for media flows within an IP multimedia session. The policy control is handled by COPS, and IP channel emulation uses two existing network emulation tools, NIST Net and ChaNet, supporting the IPv4 and IPv6 protocols, respectively. The scenario-based approach allows reproducible performance measurements and running various experiments using the same network behavior. A graphical user interface has been developed to make the scenario specification more user-friendly. We demonstrate the functionality of the prototype emulation environment for IPv6 and analyze its performance.",2007,0, 3713,Using pipeline technology to improve web server’s QoS,"To meet the increasing QoS requirements of Web applications, it is vital to design an efficient Web server. This article describes a pipeline server architecture, which can achieve intra-request parallelism. Based on control theory, we adopt predictor and feedback control methods to allocate threads of each stage dynamically. As a result, the server can adapt to varying workloads. With the help of these ideas, we implement a novel kernel web server, KETA. Experimental results show that KETA is an efficient kernel Web server with high performance.",2007,0, 3714,Weighted Centroid Localization in Zigbee-based Sensor Networks,"Localization in wireless sensor networks is becoming more and more important, because many applications need to locate the source of incoming measurements as precisely as possible. Weighted centroid localization (WCL) provides a fast and easy algorithm to locate devices in wireless sensor networks. The algorithm is derived from a centroid determination which calculates the position of devices by averaging the coordinates of known reference points. To improve the calculated position in real implementations, WCL uses weights to attract the estimated position to close reference points provided that coarse distances are available. Due to the fact that Zigbee provides the link quality indication (LQI) as a quality indicator of a received packet, it can also be used to estimate a distance from a node to reference points.",2007,0, 3715,An extensive review on accessing quality information,"The WWW has become one of the fastest growing electronic information sources. In the Web, people are engaging in interaction with more and more diverse information than ever before, so that the problem of information quality is more significant in the Web than in any other information system, especially considering the rate of growth in the number of documents. Yet, despite a decade of active research, information quality lacks a comprehensive methodology for its assessment and improvement, and few studies have presented a practical way of measuring IQ criteria. This paper attempts to address some of the issues involved in information quality assessment by classifying and discussing existing research on information quality frameworks. While many previous works have concentrated on a single dimension of information quality assessment, our classification considers three dimensions: criteria, models and validation methods of the models and criteria.
This classification lays the groundwork for developing a practical and, at the same time, valid information quality model.",2007,0, 3716,Estimates of Radial Current Error from High Frequency Radar using MUSIC for Bearing Determination,Quality control of surface current measurements from high frequency (HF) radar requires understanding of individual error sources and their contribution to the total error. Radial velocity error due to uncertainty of the bearing determination technique employed by HF radar is observed with both direction finding and phased array techniques. Surface current estimates utilizing the Multiple Signal Classification (MUSIC) direction finding algorithm with a compact antenna design are particularly sensitive to the radiation pattern of the receive and transmit antennas. Measuring the antenna pattern is a common and straightforward task that is essential for accurate surface current measurements. Radial current error due to a distorted antenna pattern is investigated by applying MUSIC to simulated HF radar backscatter for an idealized ocean surface current. A Monte Carlo type treatment of distorted antenna patterns is used to provide statistics of the differences between simulated and estimated surface current. RMS differences between the simulated currents and currents estimated using distorted antenna patterns are 3-12 cm/s greater than those using perfect antenna patterns given a simulated uniform current of 50 cm/s. This type of analysis can be used in conjunction with antenna modeling software to evaluate possible error due to the antenna patterns before installing an HF radar site.,2007,0, 3717,Predicting performance of software systems during feasibility study of software project management,"Software performance is an important nonfunctional attribute of software systems for producing quality software. Performance issues must be considered throughout software project development. Predicting performance early in the life cycle is addressed by many methodologies, but the data collected during the feasibility study is not considered for predicting performance. In this paper, we consider the data collected (technical and environmental factors) during the feasibility study of software project management to predict performance. We derive an algorithm to predict the performance metrics and simulate the results using a case study on a banking application.",2007,0, 3718,Software Tool for Real Time Power Quality Disturbance Analysis and Classification,"Real time detection and classification of power quality disturbances is important for quick diagnosis and mitigation of such disturbances. This paper presents the development of a software tool based on MATLAB for power quality disturbance analysis and classification. Prior to the development of the software tool, the disturbance signals are captured and processed in real-time using the TMS320C6711 DSP starter kit. A digital signal processor is used to provide fast data capture, fast data processing and signal processing flexibility with increased system performance and reduced system cost. The developed software tool can be used for real-time and off-line disturbance analysis by displaying the detected disturbance, % harmonics of a signal, total harmonic distortion and results of the S-transform, fast Fourier transform and continuous wavelet transform analyses. In addition, graphical representation of the input signal and power quality indices including sag and swell magnitudes are also displayed on the graphical user interface.
PQ disturbance classification results show that accurate disturbance classification can be obtained, with a total percentage of correct classification of 99.3%. Such a software tool can serve as a simple and reliable means for PQ disturbance detection and classification.",2007,0, 3719,Efficient Development Methodology for Multithreaded Network Application,"Multithreading is becoming increasingly important for modern network programming. On inter-process communication platforms, multithreaded applications offer many benefits, especially in improving application throughput, responsiveness and latency. However, developing good-quality multithreaded code is difficult, because threads may interact with each other in unpredictable ways. Although modern compilers can manage threads well, in practice synchronization errors (such as data race errors and deadlocks) require careful management and a good optimization method. The goal of this work is to discover common pitfalls in multithreaded network applications and to present a software development technique to detect errors and efficiently optimize multithreaded applications. We compare the performance of a single-threaded network application with multithreaded network applications, using the Intel VTune Performance Analyzer, the Intel Thread Checker and our method to efficiently fix errors and optimize performance. Our methodology is divided into three phases. The first phase is performance analysis using the Intel VTune Performance Analyzer, with the aim of identifying performance optimization opportunities and detecting bottlenecks. In the second phase, with the Intel Thread Checker, we locate data races and memory leaks and debug the multithreaded applications. In the third phase, we apply tuning and optimization to the multithreaded applications. With an understanding of the common pitfalls in multithreaded network applications, and through the development and debugging methodology described above, developers are able to optimize and tune their applications efficiently.",2007,0, 3720,DCPD acquisition and analysis for HV storage capacitor based on Matlab,"The high-voltage storage capacitor is a key device in weapon systems, so high reliability is required. In the process of storage and application, effective measurements are needed to ensure the insulation capability of capacitors. Usually, partial discharge (PD) is used to detect the insulation status of capacitors. Under DC the fundamental parameter Phi does not exist, so a new parameter delta(t) is introduced. Adopting a data acquisition and analysis system with single trigger based on Matlab software, PD of four typical defect models under DC conditions is obtained. The DCPD signal data were analyzed through q, n, delta(t) distribution figures, from which obvious differences can be observed. All results can provide a data foundation for pattern recognition of high-voltage storage capacitors.",2007,0, 3721,Defect prediction for embedded software,"As ubiquitous computing becomes the reality of our lives, the demand for high quality embedded software in shortened intervals increases. In order to cope with this pressure, software developers seek new approaches to manage the development cycle: to finish on time, within budget and with no defects. Software defect prediction is one area that has to be focused on to lower the cost of testing as well as to improve the quality of the end product.
Defect prediction has been widely studied for software systems in general; however, there are very few studies which specifically target embedded software. This paper examines defect prediction techniques from an embedded software point of view. We present the results of combining several machine learning techniques for defect prediction. We believe that the results of this study will guide us in finding better predictors and models for this purpose.",2007,0, 3722,An application of a rule-based model in software quality classification,"A new rule-based classification model (RBCM) and rule-based model selection technique are presented. The RBCM utilizes rough set theory to significantly reduce the number of attributes, discretization to partition the domain of attribute values, and Boolean predicates to generate the decision rules that comprise the model. When the domain values of an attribute are continuous and relatively large, rough set theory requires that they be discretized. The subsequent discretized domain must have the same characteristics as the original domain values. However, this can lead to a large number of partitions of the attribute's domain space, which in turn leads to large rule sets. These rule sets tend to form models that over-fit. To address this issue, the proposed rule-based model adopts a new model selection strategy that minimizes over-fitting for the RBCM. Empirical validation of the RBCM is accomplished through a case study on a large legacy telecommunications system. The results demonstrate that the proposed RBCM and the model selection strategy are effective in identifying the classification model that minimizes over-fitting and high-cost classification errors.",2007,0, 3723,Presentation of Information Synchronized with the Audio Signal Reproduced by Loudspeakers using an AM-based Watermark,Reproducing a stego audio signal via a loudspeaker and detecting embedded data from a sound recorded by a microphone are challenging with respect to the application of data hiding. A watermarking technique using subband amplitude modulation was applied to a system that displays text information synchronously with the watermarked audio signal transmitted in the air. The robustness of the system was evaluated by a computer simulation in terms of the correct rate of data transmission under reverberant and noisy conditions. The results showed that the performance of detection and the temporal precision of synchronization were sufficiently high. Objective measurement of the watermarked audio quality using the PEAQ method revealed that the mean objective difference grade obtained from 100 watermarked music samples exhibited an intermediate value between the mean ODGs of 96-kbps and 128-kbps MP3 encoded music samples.,2007,0, 3724,Ensuring the Quality Testing of Web Using a New Methodology,"As Web sites become a fundamental component of businesses, quality of service will be one of the top management concerns. Users normally do not care about site failures, traffic jams, network bandwidth [1], or other indicators of system failures. To an online customer, quality of service means the fast, predictable response and service level of a Web site noted in real time. Users measure quality by response time, availability, reliability, predictability, and cost. Poor quality implies that the customer will no longer visit the site and hence the organization may lose business. The issues that affect the quality are broken pages and faulty images, CGI-bin error messages, complex colour combinations, no back link, multiple and frequent links, etc.
The issues that affect the quality are broken pages and faulty images, CGI-bin error messages, complex colour combinations, no back link, multiple and frequent links, etc. So we try to build a good program that can scan Web site for broken links, broken images, broken pages and other common Web site faults. As Web site cannot test as a whole in one attempt we rely on decomposing the behavior of the Web site into testable components and mapping these components onto testable objects. Moreover we verify our method by using Jmeter tool [2] to measure the performance of Web, which indicate that broken components on a Web site has a bad effect in its performance.",2007,0, 3725,PISRAT: Proportional Intensity-Based Software Reliability Assessment Tool,"In this paper we develop a software reliability assessment tool, called PISRAT: Proportional intensity-based software reliability assessment tool, by using several testing metrics data as well as software fault data observed in the testing phase. The fundamental idea is to use the proportional intensity-based software reliability models proposed by the same authors. PISRAT is written in Java language with 54 classes and 8.0 KLOC, where JDK1.5.0_9 and JFreeChart are used as the development kit and the chart library, respectively. This tool can support (i) the parameter estimation of software reliability models via the method of maximum likelihood, (ii) the goodness-of-fit test under several optimization criteria, (iii) the assessment of quantitative software reliability and prediction performance. To our best knowledge, PISRAT is the first freeware for dynamic software reliability modeling and measurement with time- dependent testing metrics.",2007,0, 3726,Predicting Defective Software Components from Code Complexity Measures,"The ability to predict defective modules can help us allocate limited quality assurance resources effectively and efficiently. In this paper, we propose a complexity- based method for predicting defect-prone components. Our method takes three code-level complexity measures as input, namely Lines of Code, McCabe's Cyclomatic Complexity and Halstead's Volume, and classifies components as either defective or non-defective. We perform an extensive study of twelve classification models using the public NASA datasets. Cross-validation results show that our method can achieve good prediction accuracy. This study confirms that static code complexity measures can be useful indicators of component quality.",2007,0, 3727,A new methodology for Web testing,"As Web sites becoming a fundamental component of businesses, quality of service will be one of the top management concerns. Users, normally, does not care about site failures, traffic jams, network bandwidth, or other indicators of system failures. To an online customer, quality of service means fast, predictable response service, level of a Web site noted in a real time. User measures the quality by response time, availability, reliability, predictability, and cost. Poor quality implies that the customer will no longer visit the site and hence the organization may loose business. The issues that affects the quality are, broken pages and faulty images, CGI-bin error messages, complex colour combinations, no back link, multiple and frequent links, etc. So we try to build a good program that can scan web site for broken links, broken images, broken pages and other common web sit e faults. 
Because a web site cannot be tested as a whole in one attempt, our implementation relies on decomposing the behavior of the web site into testable components and then mapping these components onto testable objects. Then we show, by using the JMeter performance testing tool, that broken components on a web site have a bad effect on its performance.",2007,0, 3728,Joint Adaptive GOP and SCD Coding for Improving H.264 Scalable Video Coding,"In order to concurrently provide different video services for diverse access devices, the scalable quality and complexity services in a single bit stream offered by H.264-based scalable video coding (H.264/SVC) have gained a lot of attention recently. However, the motion-compensated temporal filtering (MCTF) structure consumes a large amount of operation time when combined with Core Experiment 2 (CE2), which implements adaptive GOP detection (AGD). In addition, a simple scene change detection (SCD) method is required to decrease mis-positioned motion estimation and yield robust AGD. In this paper, we suggest a video content variation (VCV) feature to implement simple AGD and SCD methods by using block-based classification before executing MCTF. Based on the JSVM 7.0 software, the simulation results show that the proposed method not only reduces the operation time consumed by about 67% but also improves PSNR by about 0.1 dB on average as compared to CE2, where the proposed SCD can attain a detection rate of 89% to help improve performance.",2007,0, 3729,Modeling of ITU-T G.729 codec with bit-width optimization for intensive computation blocks,"In this work, the ITU-T G.729 speech codec is precisely modeled using basic building blocks of SIMULINK such as simple adders and multipliers. Therefore, a golden model of the codec that is best suited as a reference for its hardware implementation is developed. In a custom hardware design, the precision of the computations, i.e. the optimum number of bits of the computational blocks, should be calculated. The optimum number of bits of the computational blocks is the minimum word-width that maintains the quality of the output with minimum chip area. Three blocks are selected to extract their bit-true models. The optimum bit-widths of the coefficients and the internal computations are measured. These blocks are the pre-processing block, the open loop pitch estimator, and the fixed codebook search block. The first is important in terms of error propagation to the output, and the other two are the blocks that demand the most computational power. As such, these blocks are suitable choices to be hardware implemented in a hardware-software co-design project.",2007,0, 3730,Multiple Parametric Circuit Analysis Tool for Detectability Estimation,In this paper a software tool is presented which is capable of producing testability metrics for analog and mixed-signal circuits. These metrics are obtained by performing probabilistic analysis techniques. The presented tool acts as a shell utilizing the power of state of the art external schematic and simulation programs and offers the user a graphical interface for circuit design and test. This interface is transparent to the user elaborately hiding all the complex task manipulations. Preliminary results are presented showing the effectiveness of the proposed scheme.,2007,0, 3731,Component Based Proactive Fault Tolerant Scheduling in Computational Grid,"Computational Grids have the capability to provide the main execution platform for high performance distributed applications.
Grid resources, having heterogeneous architectures and being geographically distributed and interconnected via unreliable network media, are extremely complex and prone to different kinds of errors, failures and faults. The grid is a layered architecture, and most of the fault tolerance techniques developed on grids use its strict layering approach. In this paper, we propose a cross-layer design for handling faults proactively. In a cross-layer design, the top-down and bottom-up approach is not strictly followed, and a middle layer can communicate with the layer below or above it [1]. At each grid layer there would be a monitoring component that would decide, based on predefined factors, whether the reliability of that particular layer is high, medium or low. Based on the Hardware Reliability Rating (HRR) and Software Reliability Rating (SRR), the Middleware Monitoring Component / Cross-Layered Component (MMC/CLC) would generate a Combined Rating (CR) using CR calculation matrix rules. Each participating grid node will have a CR value generated through cross-layered communication using the HMC, MMC/CLC and SMC. All grid nodes will have their CR information in the form of a CR table, and highly rated machines would be selected for job execution on the basis of minimum CPU load along with different intensities of checkpointing. Handling faults proactively at each grid layer using the cross-communication model would result in overall improved dependability and increased performance with less checkpointing overhead.",2007,0, 3732,Power System Flicker Analysis and Numeric Flicker Meter Emulation,"This paper presents a methodology for flicker propagation analysis, numeric IEC flickermeter emulation and flicker source modeling. The main results of this study are: • to create a distribution load model by which the flicker propagation from HV to MV can be studied, • to build a numeric IEC flickermeter with an improved algorithm (demodulator and nonlinear classification), • to build simplified actual disturbance source models (electric arc furnace, welding machine, motor starter, etc.), which can be used in frequency domain and load-flow flicker assessment. The complexity of nonlinear models for simulating the dynamic behavior of arc furnaces and welding machines is well known. For this reason, previous works are generally based on time domain solutions. In this paper, a simplified approach has been studied by using RMS value-based modeling. The models have been validated against experimental results and site measurements. After integration of these models and a numeric flickermeter into frequency domain software, it is possible to simulate almost all low frequency electrical disturbances with very short computing time and to survey interactions among them, for example, between voltage dip and flicker.",2007,0, 3733,New EPS state estimation algorithms based on the technique of test equations and PMU measurements,"Large system blackouts that have occurred in different countries in the first five years of the new millennium prove the need for updating and improving the software for EPS monitoring and control (SCADA/EMS-applications). The updating of software for EPS monitoring and control (SCADA/EMS-applications) has become possible on a qualitatively new level owing to WAMS (wide-area measurement system), which allows the EPS state to be controlled synchronously and with high accuracy. The phasor measurement units (PMU) are the main measurement equipment in these systems.
One of the applications, which will be significantly affected by the introduction of PMU, is the state estimation (SE). State estimation is an important procedure that provides reliable and qualitative information for EPS control. The state estimation results in calculation of steady state (current conditions) of EPS on the basis of measured state variables and data on the state of scheme topology. The measurements used for state estimation include mainly the measurements received from SCADA system. As compared to a standard set of measurements received from SCADA, PMU installed in the node can measure voltage phasor in this node and current phasors in some or all branches adjacent to this node with high accuracy. The use of additional high accuracy measurements improves observability of calculated network, enhances the efficiency of methods for bad data detection, increases accuracy and reliability of the obtained estimates.",2007,0, 3734,Estimation of the Distributed Generation Impacts on the Angle Stability of the Two-Machine Scheme,"The paper discusses the results of the case studies devoted to the impact of distributed generators connected to a distribution network on the transient stability of transmission network, with the critical fault clearing time applied as a performance index. The case studies are based on the two-machine scheme, employing also a model of the Baltic power system. The authors analyse a wide technological range of the distributed generators and a variety of the modelled operating conditions. The transient stability is estimated using EUROSTAG software for simulation of power systems.",2007,0, 3735,Utilizing data mining algorithms for identification and reconstruction of sensor faults: a Thermal Power Plant case study,"This paper describes a procedure of identifying sensor faults and reconstructing the erroneous measurements. Data mining algorithms are successfully applied for deriving models that estimate the value of one variable based on correlated others. The estimated values can then be used instead of the recorded ones of a measuring instrument with false reading. The aim is to reassure the correctness of data entered to an optimization software application under development for the Thermal Power Plants of Western Macedonia, Greece.",2007,0, 3736,Power Network Port Measurements Synchronization,"This work introduces a new procedure for software synchronization of electrical power signals collected over a large geographical area. The idea is to use not necessarily synchronized, but stable, local clocks, and synchronize the sparsely collected data windows by an offline procedure. An approach for the estimation of non monitored internal system buses serves as a basis for the synchronization process. The electrical power network description yields the desired phase corrections, compensating the loss of data window synchronism. Typical sub determined situations are avoided by a proposed alternative network monitoring scheme, illustrated by a case example. Simulation results emphasize the method robustness with respect to usual network parameters uncertainties.",2007,0, 3737,Standstill frequency response measurement of generator parameters,"Connell Wagner has estimated the parameters of large generators on a number of occasions and this paper will outline the method and results of this work with respect to using standstill frequency response methods and load rejection methods.
A number of issues involving the testing methods, ability to determine parameters from tests and the applicability to current modelling software will be discussed.",2007,0, 3738,Impact of harmonics on tripping time of overcurrent relays,Theoretical and experimental analyses are used to investigate the effects of harmonics on the operation of overcurrent relays. Two analytical approaches based on relay characteristics provided by the manufacturer and simulations using PSCAD software package are used to estimate tripping times under non-sinusoidal operating conditions. The tests were conducted such that the order and the magnitude of harmonics could be controlled by employing a computer-based single-phase harmonic source. Computed and measured tripping times are compared and suggestions on the application of overcurrent relays in harmonically polluted environments are provided.,2007,0, 3739,CSM-autossociators combination for degraded machine printed character recognition,"This paper presents an OCR method that combines the complementary similarity measure (CSM) method with a set of autossociators for degraded character recognition. In the serial combination, the first classifier must achieve lower errors and be very well suited for rejection, whereas the second classifier must allow only low errors and rejects. We introduce a rejection criterion mode used as a quality measurement of the degraded character which makes the CSM-based classifier very powerful and very well suited for rejection. We report experimental results for a comparison of three methods: the CSM method, the autoassociator-based classifier and the proposed combined architecture. Experimental results show an achievement of 99.59% of recognition rate on poor quality bank check characters, which confirm that the proposed approach can be successfully used for effective degraded printed character recognition.",2007,0, 3740,Compiler-assisted architectural support for program code integrity monitoring in application-specific instruction set processors,"As application-specific instruction set processors (ASIPs) are being increasingly used in mobile embedded systems, the ubiquitous networking connections have exposed these systems to various malicious security attacks, which may alter the program code running on the systems. In addition, soft errors in microprocessors can also change program code and result in system malfunction. At the instruction level, all code modifications are manifested as bit flips. In this work, we present a generalized methodology for monitoring code integrity at run-time in ASIPs, where both the instruction set architecture (ISA) and the underlying microarchitecture can be customized for a particular application domain. Based on the microoperation-based monitoring architecture that we have presented in previous work, we propose a compiler-assisted and application-controlled management approach for the monitoring architecture. Experimental results show that compared with the OS-managed scheme and other compiler-assisted schemes, our approach can detect program code integrity compromises with much less performance degradation.",2007,0, 3741,Interactive Mining of Functional MRI Data,"Discovery of the image voxels of the brain that represent real activity is, in general, very difficult because of a weak signal-to-noise ratio and the presence of artifacts. The first tests of the classical data mining algorithms in this field showed low performances and weak quality of recognition.
In this article, a new interactive data-driven approach to functional magnetic resonance imagery mining is presented, allowing the observation of cerebral activity. Several non-supervised classification algorithms have been developed and tested on sequences of fMRI images. The results of the tests have shown that the number of classes, signal-to-noise ratio, and volumes of activated and explored zones have a strong influence on the classifier performances.",2007,0, 3742,Early Models for System-Level Power Estimation,"Power estimation and verification have become important aspects of System-on-Chip (SoC) design flows. However, rapid and effective power modeling and estimation technologies for complex SoC designs are not widely available. As a result, many SoC design teams focus the bulk of their efforts on using detailed low-level models to verify power consumption. While such models can accurately estimate power metrics for a given design, they suffer from two significant limitations: (1) they are only available late in the design cycle, after many architectural features have already been decided, and (2) they are so detailed that they impose severe limitations on the size and number of workloads that can be evaluated. While these methods are useful for power verification, architects require information much earlier in the design cycle, and are therefore often limited to estimating power using spreadsheets where the expected power dissipation of each module is summed up to predict total power. As the model becomes more refined, the frequency that each module is exercised may be added as an additional parameter to further increase the accuracy. Current spreadsheets, however, rely on aggregate instruction counts and do not incorporate either time or input data and thus have inherent inaccuracies. Our strategy for early power estimation relies on (i) measurements from real silicon, (ii) models built from those measurements models that predict power consumption for a variety of processor micro-architectural structures and (iii) FPGA-based implementations of those models integrated with an FPGA-based performance simulator/emulator. The models will be designed specifically to be implemented within FPGAs. The intention is to integrate the power models with FPGA-based full-system, functional and performance simulators/emulators that will provide timing and functional information including data values. The long term goal is to provide relative power accuracy and power trends useful to architects during the architectural phase of a project, rather than precise power numbers that would require far more information than is available at that time. By implementing the power models in an FPGA and driving those power models with a system simulator/emulator that can feed the power models real data transitions generated by real software running on top of real operating systems, we hope to both improve the quality of early stage power estimation and improve power simulation performance.",2007,0, 3743,Evaluating the EASY-backfill job scheduling of static workloads on clusters,"This research aims at improving our understanding of backfilling job scheduling algorithms. The most frequently used algorithm, EASY-backfilling, was selected for a performance evaluation by scheduling static workloads of parallel jobs on a computer cluster.
To achieve the aim, we have developed a batch job scheduler for Linux clusters, implemented several scheduling algorithms including ARCA and EASY-Backfilling, and carried out their performance evaluation by running well known MPI applications on a real cluster. Our performance evaluation carried out for EASY-Backfilling serves two purposes. First, the performance results obtained from our evaluation can be used to validate other researchers' results generated by simulation, and second, the methodology used in our evaluation has alleviated many problems that existed in the simulations presented in the current literature.",2007,0, 3744,Reliability-aware resource allocation in HPC systems,"Failures and downtimes have severe impact on the performance of parallel programs in a large scale High Performance Computing (HPC) environment. There were several research efforts to understand the failure behavior of computing systems. However, the presence of multitude of hardware and software components required for uninterrupted operation of parallel programs make failure and reliability prediction a challenging problem. HPC run-time systems like checkpoint frameworks and resource managers rely on the reliability knowledge of resources to minimize the performance loss due to unexpected failures. In this paper, we first analyze the Time Between Failure (TBF) distribution of individual nodes from a 512-node HPC system. Time varying distributions like Weibull, lognormal and gamma are observed to have better goodness-of-fit as compared to exponential distribution. We then present a reliability-aware resource allocation model for parallel programs based on one of the time varying distributions and present reliability-aware resource allocation algorithms to minimize the performance loss due to failures. We show the effectiveness of reliability-aware resource allocation algorithms based on the actual failure logs of the 512 node system and parallel workloads obtained from LANL and SDSC. The simulation results indicate that applying reliability-aware resource allocation techniques reduce the overall waste time of parallel jobs by as much as 30%. A further improvement by 15% in waste time is possible by considering the job run lengths in reliability-aware scheduling.",2007,0, 3745,Hybrid predictive base station (HPBS) selection procedure in IEEE 802.16e-based WMAN,"According to the mobility framework of IEEE 802.16e, a mobile station (MS) should scan the neighbouring base stations (BSs), for selecting the best BS for a potential handover activity. However, the standard does not specify the number of BSs to be scanned leaving room for unnecessary scanning. Moreover, prolonged scanning also interrupts data transmissions thus degrading the QoS of an ongoing connection. Reducing unnecessary scanning is an important issue. This paper proposes a scheme to reduce the number of BSs to scan (thus improving the overall handover performance). Simulation results have shown that this hybrid predictive BS selection scheme for potential scanning activities is more effective than the conventional IEEE 802.16e handover scheme in terms of handover delay and resource wastages.",2007,0, 3746,A Fuzzy Multi-Criteria Decision Approach for Enhanced Auto-Tracking of Seismic Events,"The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent is tracking (or picking) seismic events.
In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tool available in modern interpretation software packages often employs artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably data patterns characterise these horizons. While seismic attributes are commonly used to characterise amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation methods, and demonstrates how the complementarity of both seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve an enhanced auto-tracking performance.",2007,0, 3747,Performance under failures of high-end computing,"Modern high-end computers are unprecedentedly complex. Occurrence of faults is an inevitable fact in solving large-scale applications on future Petaflop machines. Many methods have been proposed in recent years to mask faults. These methods, however, impose various performance and production costs. A better understanding of faults' influence on application performance is necessary to use existing fault tolerant methods wisely. In this study, we first introduce some practical and effective performance models to predict the application completion time under system failures. These models separate the influence of failure rate, failure repair, checkpointing period, checkpointing cost, and parallel task allocation on parallel and sequential execution times. To benefit the end users of a given computing platform, we then develop effective fault-aware task scheduling algorithms to optimize application performance under system failures. Finally, extensive simulations and experiments are conducted to evaluate our prediction models and scheduling strategies with actual failure trace.",2007,0, 3748,Application development on hybrid systems,"Hybrid systems consisting of a multitude of different computing device types are interesting targets for high-performance applications. Chip multiprocessors, FPGAs, DSPs, and GPUs can be readily put together into a hybrid system; however, it is not at all clear that one can effectively deploy applications on such a system. Coordinating multiple languages, especially very different languages like hardware and software languages, is awkward and error prone. Additionally, implementing communication mechanisms between different device types unnecessarily increases development time. This is compounded by the fact that the application developer, to be effective, needs performance data about the application early in the design cycle. We describe an application development environment specifically targeted at hybrid systems, supporting data-flow semantics between application kernels deployed on a variety of device types. 
A specific feature of the development environment is the availability of performance estimates (via simulation) prior to actual deployment on a physical system.",2007,0, 3749,Performance and cost optimization for multiple large-scale grid workflow applications,"Scheduling large-scale applications on the Grid is a fundamental challenge and is critical to application performance and cost. Large-scale applications typically contain a large number of homogeneous and concurrent activities which are main bottlenecks, but open great potentials for optimization. This paper presents a new formulation of the well-known NP-complete problems and two novel algorithms that address the problems. The optimization problems are formulated as sequential cooperative games among workflow managers. Experimental results indicate that we have successfully devised and implemented one group of effective, efficient, and feasible approaches. They can produce solutions of significantly better performance and cost than traditional algorithms. Our algorithms have considerably low time complexity and can assign 1,000,000 activities to 10,000 processors within 0.4 second on one Opteron processor. Moreover, the solutions can be practically performed by workflow managers, and the violation of QoS can be easily detected, which are critical to fault tolerance.",2007,0, 3750,Investigation of Hyper-NA Scanner Emulation for Photomask CDU Performance,"As the semiconductor industry moves toward immersion lithography using numerical apertures above 1.0 the quality of the photomask becomes even more crucial. Photomask specifications are driven by the critical dimension (CD) metrology within the wafer fab. Knowledge of the CD values at resist level provides a reliable mechanism for the prediction of device performance. Ultimately, tolerances of device electrical properties drive the wafer linewidth specifications of the lithography group. Staying within this budget is influenced mainly by the scanner settings, resist process, and photomask quality. Tightening of photomask specifications is one mechanism for meeting the wafer CD targets. The challenge lies in determining how photomask level metrology results influence wafer level imaging performance. Can it be inferred that photomask level CD performance is the direct contributor to wafer level CD performance? With respect to phase shift masks, criteria such as phase and transmission control are generally tightened with each technology node. Are there other photomask relevant influences that affect wafer CD performance? A comprehensive study is presented supporting the use of scanner emulation based photomask CD metrology to predict wafer level within chip CD uniformity (CDU). Using scanner emulation with the photomask can provide more accurate wafer level prediction because it inherently includes all contributors to image formation related to the 3D topography such as the physical CD, phase, transmission, sidewall angle, surface roughness, and other material properties. Emulated images from different photomask types were captured to provide CD values across chip. Emulated scanner image measurements were completed using an AIMS(TM)45-193i with its hyper-NA, through-pellicle data acquisition capability including the Global CDU Map(TM) software option for AIMS(TM) tools.
The through-pellicle data acquisition capability is an essential prerequisite for capturing final CDU data (after final clean and pellicle mounting) before the photomask ships or for re-qualification at the wafer fab. Data was also collected on these photomasks using a conventional CD-SEM metrology system with the pellicles removed. A comparison was then made to wafer prints demonstrating the benefit of using scanner emulation based photomask CD metrology.",2007,0, 3751,Automated Contingency Management for Propulsion Systems,"Increasing demand for improved reliability and survivability of mission-critical systems is driving the development of health monitoring and Automated Contingency Management (ACM) systems. An ACM system is expected to adapt autonomously to fault conditions with the goal of still achieving mission objectives by allowing some degradation in system performance within permissible limits. ACM performance depends on supporting technologies like sensors and anomaly detection, diagnostic/prognostic and reasoning algorithms. This paper presents the development of a generic prototype test bench software framework for developing and validating ACM systems for advanced propulsion systems called the Propulsion ACM (PACM) Test Bench. The architecture has been implemented for a Monopropellant Propulsion System (MPS) to demonstrate the validity of the approach. A Simulink model of the MPS has been developed along with a fault injection module. It has been shown that the ACM system is capable of mitigating the failures by searching for an optimal strategy. Furthermore, few relevant experiments have been presented to show proof of concepts.",2007,0, 3752,"On power control for wireless sensor networks: System model, middleware component and experimental evaluation","In this paper, we investigate strategies for radio power control for wireless sensor networks that guarantee a desired packet error probability. Efficient power control algorithms are of major concern for these networks, not only because the power consumption can be significantly decreased but also because the interference can be reduced, allowing for higher throughput. An analytical model of the Received Signal Strength Indicator (RSSI), which is a link quality metric, is proposed. The model relates the RSSI to the Signal to Interference plus Noise Ratio (SINR), and thus provides a connection between the powers and the packet error probability. Two power control mechanisms are studied: a Multiplicative-Increase Additive-Decrease (MIAD) power control described by a Markov chain, and a power control based on the average packet error rate. A component-based software implementation using the Contiki operating system is provided for both the power control mechanisms. Experimental results are reported for a test-bed with Telos motes.",2007,0, 3753,A Strategy for fault tolerant control in Networked Control Systems in the presence of Medium Access Constraints,"This paper deals with the problem of fault-tolerant control of a Network Control System (NCS) for the case in which the sensors, actuators and controller are interconnected via various Medium Access Control protocols which define the access scheduling and collision arbitration policies in the network and employing the so-called periodic communication sequence.
The size of this packet is used to define a Completely Fault Tolerant NCS. The fault-tolerant behaviour and control performance of this scheme is illustrated through the use of a process model and controller. The plant is controlled over a network using Model-based Predictive Control and implemented via MATLAB© and LABVIEW© software.",2007,0, 3754,VLSI friendly edge gradient detection based multiple reference frames motion estimation optimization for H.264/AVC,"In H.264/AVC standard, motion estimation can be processed on multiple reference frames (MRF) to improve the video coding performance. The computation is also increased in proportion to the reference frame number. Many software oriented fast multiple reference frames motion estimation (MRF-ME) algorithms have been proposed. However, for the VLSI real-time encoder, the heavy computation of fractional motion estimation (FME) makes the integer motion estimation (IME) and FME must be scheduled in two macro block (MB) pipeline stages, which makes many fast MRF-ME algorithms inefficient. In this paper, one edge gradient detection based algorithm is provided to reduce the computation of MRF-ME. The image being rich of texture and sharp edges contains much high frequency signal and this nature makes MRF-ME essential. Through analyzing the edges' gradient, we just perform MRF-ME on those blocks with sharp edges, so the redundant ME computation can be efficiently reduced. Experimental results show that average 26.43% computation can be saved by our approach with the similar coding quality as the reference software. This proposed algorithm is friendly to hardwired encoder implementation. Moreover, the provided fast algorithms can be combined with other fast ME algorithms to further improve the performance.",2007,0, 3755,Software developed for obtaining GPS-derived total electron content values,"This paper describes a complex technique with its built-in cycle slip correction procedures that have been developed for ionospheric space research to obtain high-quality and high-precision GPS-derived total electron content (TEC) values. Thus, to correct GPS anomalies while the signatures of space weather features detected in the dual-frequency 30-s rate GPS data are preserved is the main aim of this technique. Its main requirement is to complete fully automatically all the tasks required to turn the observational data to the desired final product. Its major tasks include curve fitting, cycle slip detection and correction in the slant relative TEC data, residual error detection and correction in the vertical TEC data, and vertical TEC data filtering for quantifying data smoothness and GPS phase fluctuations. A detailed description of these two data correction methods is given. Validation tests showing weaknesses and strengths of the methods developed are also included and discussed. Versatility and accuracy of the methods are demonstrated with interesting and real-world examples obtained from smooth midlatitude and from dynamic low- and high-latitude data. Results indicate that errors can be detected and corrected more reliably in the vertical TEC data than in the slant TEC data because of the lower rate of change of vertical TEC over a 30-s sampling period. 
Future work includes the development of a complex software package wherein the individual FORTRAN algorithms, described in this paper, will be incorporated into one main (FORTRAN, Matlab, or C++) program to provide professional and customized GPS data processing for ionospheric space research.",2007,0, 3756,Long-term observations of Schumann resonances at Modra Observatory,"The paper presents a summary of more than 4 years of continuous Schumann resonance (SR) monitoring of the vertical electric component at Modra Observatory. Principal parameters (peak frequency, amplitude, and quality factor) are determined for four resonances from 7 to 30 Hz, i.e., for modes one through four. Attention is also given to the less frequently compiled mode four. The resonance parameters are computed from 48 daily measurements and are represented as the mean monthly values for each time of the detection. Fitting of spectral peaks by Lorentz function is the main method used in data postprocessing. Diurnal, seasonal variations and the indication of interannual variations of these parameters (especially peak frequency) are discussed. The results are compared with other observatory measurements. Our observations confirm that variations in peak frequency of the lower-SR modes can be attributed mainly to the source-observer distance effect.",2007,0, 3757,Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems,"High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",2008,1, 3758,"Defect Prediction using Combined Product and Project Metrics - A Case Study from the Open Source ""Apache"" MyFaces Project Family","The quality evaluation of open source software (OSS) products, e.g., defect estimation and prediction approaches of individual releases, gains importance with increasing OSS adoption in industry applications. Most empirical studies on the accuracy of defect prediction and software maintenance focus on product metrics as predictors that are available only when the product is finished. 
Only few prediction models consider information on the development process (project metrics) that seems relevant to quality improvement of the software product. In this paper, we investigate defect prediction with data from a family of widely used OSS projects based both on product and project metrics as well as on combinations of these metrics. Main results of data analysis are (a) a set of project metrics prior to product release that had strong correlation to potential defect growth between releases and (b) a combination of product and project metrics enables a more accurate defect prediction than the application of one single type of measurement. Thus, the combined application of project and product metrics can (a) improve the accuracy of defect prediction, (b) enable a better guidance of the release process from project management point of view, and (c) help identifying areas for product and process improvement.",2008,1, 3759,A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,"In this paper we present a comparative analysis of the predictive power of two different sets of metrics for defect prediction. We choose one set of product related and one set of process related software metrics and use them for classifying Java files of the Eclipse project as defective respective defect-free. Classification models are built using three common machine learners: logistic regression, naive Bayes, and decision trees. To allow different costs for prediction errors we perform cost-sensitive classification, which proves to be very successful: >75% percentage of correctly classified files, a recall of >80%, and a false positive rate <30%. Results indicate that for the Eclipse data, process metrics are more efficient defect predictors than code metrics.",2008,1, 3760,The influence of organizational structure on software quality,"Often software systems are developed by organizations consisting of many teams of individuals working together. Brooks states in the Mythical Man Month book that product quality is strongly affected by organization structure. Unfortunately there has been little empirical evidence to date to substantiate this assertion. In this paper we present a metric scheme to quantify organizational complexity, in relation to the product development process to identify if the metrics impact failure-proneness. In our case study, the organizational metrics when applied to data from Windows Vista were statistically significant predictors of failure-proneness. The precision and recall measures for identifying failure-prone binaries, using the organizational metrics, was significantly higher than using traditional metrics like churn, complexity, coverage, dependencies, and pre-release bug measures that have been used to date to predict failure-proneness. Our results provide empirical evidence that the organizational metrics are related to, and are effective predictors of failure-proneness.",2008,1, 3761,Predicting defects using network analysis on dependency graphs,"In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. 
This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is by 10% points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered as critical-twice as many as identified by complexity metrics.",2008,1, 3762,Overhead Analysis of Scientific Workflows in Grid Environments,"Scientific workflows are a topic of great interest in the grid community that sees in the workflow model an attractive paradigm for programming distributed wide-area grid infrastructures. Traditionally, the grid workflow execution is approached as a pure best effort scheduling problem that maps the activities onto the grid processors based on appropriate optimization or local matchmaking heuristics such that the overall execution time is minimized. Even though such heuristics often deliver effective results, the execution in dynamic and unpredictable grid environments is prone to severe performance losses that must be understood for minimizing the completion time or for the efficient use of high-performance resources. In this paper, we propose a new systematic approach to help the scientists and middleware developers understand the most severe sources of performance losses that occur when executing scientific workflows in dynamic grid environments. We introduce an ideal model for the lowest execution time that can be achieved by a workflow and explain the difference to the real measured grid execution time based on a hierarchy of performance overheads for grid computing. We describe how to systematically measure and compute the overheads from individual activities to larger workflow regions and adjust well-known parallel processing metrics to the scope of grid computing, including speedup and efficiency. We present a distributed online tool for computing and analyzing the performance overheads in real time based on event correlation techniques and introduce several performance contracts as quality-of-service parameters to be enforced during the workflow execution beyond traditional best effort practices. We illustrate our method through postmortem and online performance analysis of two real-world workflow applications executed in the Austrian grid environment.",2008,0, 3763,RivWidth: A Software Tool for the Calculation of River Widths From Remotely Sensed Imagery,"RivWidth is an implementation in ITT Visual Information Solutions IDL of a new algorithm that automates the calculation of river widths using raster-based classifications of inundation extent derived from remotely sensed imagery. The algorithm utilizes techniques of boundary definition to extract a river centerline, derives a line segment that is orthogonal to this line at each centerline pixel, and then computes the total river width along each orthogonal. The output of RivWidth is comparable in quality to measurements derived using manual techniques; yet, it continuously generates thousands of width values along an entire stream course, even in multichannel river systems. Uncertainty in RivWidth principally depends on the quality of the water classification used as an input, though pixel resolution and the values of input parameters play lesser roles. 
Source code for RivWidth can be obtained by visiting http://pavelsky.googlepages.com/rivwidth.",2008,0, 3764,Software Reliability Analysis and Measurement Using Finite and Infinite Server Queueing Models,"Software reliability is often defined as the probability of failure-free software operation for a specified period of time in a specified environment. During the past 30 years, many software reliability growth models (SRGM) have been proposed for estimating the reliability growth of software. In practice, effective debugging is not easy because the fault may not be immediately obvious. Software engineers need time to read, and analyze the collected failure data. The time delayed by the fault detection & correction processes should not be negligible. Experience shows that the software debugging process can be described, and modeled using queueing system. In this paper, we will use both finite, and infinite server queueing models to predict software reliability. We will also investigate the problem of imperfect debugging, where fixing one bug creates another. Numerical examples based on two sets of real failure data are presented, and discussed in detail. Experimental results show that the proposed framework incorporating both fault detection, and correction processes for SRGM has a fairly accurate prediction capability.",2008,0, 3765,Classifying Software Changes: Clean or Buggy?,"This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.",2008,0, 3766,Electronic Scheme for Multiplexed Dynamic Behavior Excitation and Detection of Piezoelectric Silicon-Based Micromembranes,"A new concept for a precise compensation of the static capacitance of piezoelectric silicon-based micromembranes is proposed. Combining analog and digital field-programmable gate array hardware elements with specific software treatment, this system enables the parallel excitation and detection of the resonant frequencies (and the quality factors) of matrices of piezoelectric micromembranes integrated on the same chip. The frequency measurement stability is less than 1 ppm (1-2 Hz) with a switching capability of 4 micromembranes/sec and a measurement bandwidth of 1.5 MHz. 
The real-time multiplexed tracking of the resonant frequency and quality factor on several micromembranes is performed in different liquid media, showing the high capability of measurement on dramatically attenuated signals. Prior to these measurements, calibration in air is done making use of silica microbeads successive depositions onto piezoelectric membranes surface. The mass sensitivity in air is, thus, estimated at, in excellent agreement with the theoretical corresponding value.",2008,0, 3767,Component-based performance-sensitive real-time embedded software,"The software complexity is continuously increasing and the competition in the software market is becoming more intensive than ever. Therefore, it is crucial to improve the software quality, and meanwhile, minimize software development cost, and reduce software delivery time in order to gain competitive advantages. Recently, Component-Based Software Development (CBSD) was proposed and has now been applied in various industry and business applications as a possible way to achieve this goal. As verified by numerous practical applications in different fields, CBSD is able to increase software development productivity as well as improve software quality. Modern embedded real-time systems have both strict functional and non-functional requirements and they are essentially safety-critical, real-time, and embedded software-intensive systems. In particular, the crucial end-to-end quality-of-service (QoS) properties should be assured in embedded systems such as timeliness and fault tolerance. Herein, I first introduce the modern component technologies and commonly used component models. Then, the middleware in distributed real-time embedded systems is discussed. Further, adaptive system resource management is elaborated upon. Finally, the prospects of a component-based approach in implementing modern embedded real-time software is discussed and future research directions are suggested.",2008,0, 3768,Safety Issues in Modern Bus Standards,Selecting and designing bus standards for safety-critical applications requires careful analysis of error-detection and fault-handling mechanisms. This analysis must be based on the revision of standard specifications and an experimental evaluation that covers representative fault classes. The traditional approach to evaluating a bus design measures its two most important performance parameters: data throughput and data latency.,2008,0, 3769,A High Performance Online Storage System for the LHCb Experiment,"This document describes the architecture of the LHCb storage system, and discusses the key criteria which formed the basis of the evaluation of the system. The configuration and implementation of the current solution and its capabilities are also described, accompanied by performance figures.",2008,0, 3770,Data Quality Monitoring Framework for the ATLAS Experiment at the LHC,"Data quality monitoring (DQM) is an integral part of the data taking process of HEP experiments. DQM involves automated analysis of monitoring data through user-defined algorithms and relaying the summary of the analysis results to the shift personnel while data is being processed. In the online environment, DQM provides the shifter with current run information that can be used to overcome problems early on. During the offline reconstruction, more complex analysis of physics quantities is performed by DQM, and the results are used to assess the quality of the reconstructed data. 
The ATLAS data quality monitoring framework (DQMF) is a distributed software system providing DQM functionality in the online environment. The DQMF has a scalable architecture achieved by distributing the execution of the analysis algorithms over a configurable number of DQMF agents running on different nodes connected over the network. The core part of the DQMF is designed to have dependence only on software that is common between online and offline (such as ROOT) and therefore the same framework can be used in both environments. This paper describes the main requirements, the architectural design, and the implementation of the DQMF.",2008,0, 3771,The CMS Muon System and Its Performance in the CMS Cosmic Challenge,"The CMS Muon System exploits three different detection technologies in both the central and forward regions, in order to provide a good muon identification and efficient event selection via a multi-level trigger. In summer 2006 the CMS solenoid has been switched on for the first time, generating successfully the designed magnetic field of B = 4 T. During this period also a slice of the CMS detector was operational (the ""CMS Cosmic Challenge"") including a subset of detectors from all three muon subsystems along with the muon hardware alignment. A dedicated CMS cosmic muon trigger had to be set-up, which was based on the Level-1 trigger inputs from the muon subsystems. The CMS subsystems were integrated in the central DAQ, detector control and data quality monitoring. Cosmic data were taken in different trigger configurations in order to study the behavior of muon detectors in the magnetic field, combined detector operations and trigger synchronization. The recorded data are used to study alignment with tracks, to verify and tune the reconstruction software. The layout and status of the CMS muon system is presented and results of the recent cosmic challenge data taking are discussed.",2008,0, 3772,Memory-centric video processing,"This work presents a domain-specific memory subsystem based on a two-level memory hierarchy. It targets the application domain of video post-processing applications including video enhancement and format conversion. These applications are based on motion compensation and/or broad class of content adaptive filtering to provide the highest quality of pictures. Our approach meets the required performance and has sufficient flexibility for the application domain. It especially aims at the implementation-wise most challenging applications: compute-intensive and bandwidth-demanding applications that provide the highest quality at high picture resolutions. The lowest level of the memory hierarchy, closest to the processing element, the L0 scratchpad, is organized specifically to enable fast retrieval of an arbitrarily positioned 2-D block of pixels to the processing element. To guarantee the performance, most of its addressing logic is hardwired, leaving a user a set of API for initialization and storing/loading the data to/from the L0 scratchpad. The next level of the memory hierarchy, the L1 scratchpad, minimizes the off-chip memory bandwidth requirements. The L1 scratchpad is organized specifically to enable efficient aligned block-based accesses. With lower data rates compared to the L0 scratchpad and aligned block access, software-based addressing is used to enable full flexibility. 
The two-level memory hierarchy exploits prefetching to further improve the performance.",2008,0, 3773,A Framework for Estimating the Impact of a Distributed Software System's Architectural Style on its Energy Consumption,"The selection of an architectural style for a given software system is an important factor in satisfying its quality requirements. In battery-powered environments, such as mobile and pervasive systems, efficiency with respect to energy consumption has increasingly been recognized as an important quality attribute. In this paper, we present a framework that (1) facilitates early estimation of the energy consumption induced by an architectural style in a distributed software system, and (2) consequently enables an engineer to use energy consumption estimates along with other quality attributes in determining the most appropriate style for a given distributed application. We have applied the framework on five distributed systems styles to date, and have evaluated it for precision and accuracy using a particular middleware platform that supports the implementation of those styles. In a large number of application scenarios, our framework exhibited excellent precision, in that it was consistently able to correctly rank the five styles and estimate the relative differences in their energy consumptions. Moreover, the framework has also proven to be accurate: its estimates were within 7% of the different style implementations ' actually measured energy consumptions.",2008,0, 3774,Model-Based Gaze Direction Estimation in Office Environment,"In this paper, we present a model-based approach for gaze direction estimation in office environment. An overlapped elliptical model is used in detection of head, and Bayesian network model is used in estimation of gaze direction. The head consists of two regions which are face and hair region, and it can be represented by two overlapped ellipses. We use its spatial layout based on relative angle of two ellipses and size ratio of two ellipses as prior information for gaze direction estimation. In an image, the face regions are detected based on color and shape information, the hair regions are detected based on color information. The head is tracked by mean shift algorithm and adjustment method for image sequence. The performance of the proposed approach is illustrated on various image sequences obtained from office environment, and we show goodness of gaze direction estimation quality.",2008,0, 3775,A Display Simulation Toolbox for Image Quality Evaluation,"The output of image coding and rendering algorithms are presented on a diverse array of display devices. To evaluate these algorithms, image quality metrics should include more information about the spatial and chromatic properties of displays. To understand how to best incorporate such display information, we need a computational and empirical framework to characterize displays. Here we describe a set of principles and an integrated suite of software tools that provide such a framework. The display simulation toolbox (DST) is an integrated suite of software tools that help the user characterize the key properties of display devices and predict the radiance of displayed images. Assuming that pixel emissions are independent, the DST uses the sub-pixel point spread functions, spectral power distributions, and gamma curves to calculate display image radiance. We tested the assumption of pixel independence for two liquid crystal device (LCD) displays and two cathode-ray tube (CRT) displays. 
For the LCD displays, the independence assumption is reasonably accurate. For the CRT displays it is not. The simulations and measurements agree well for displays that meet the model assumptions and provide information about the nature of the failures for displays that do not meet these assumptions.",2008,0, 3776,Dynamic coupling measurement of object oriented software using trace events,"Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. They are used to reason about the structural complexity of software and have been shown to predict quality attributes such as fault-proneness, ripple effects of changes and changeability. Coupling or dependency is the degree to which each program module relies on each one of the other modules. Coupling measures characterize the static usage dependencies among the classes in an object-oriented system. Traditional coupling measures take into account only ""static"" couplings. They do not account for ""dynamic"" couplings due to polymorphism and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing and debugging. This is expected to result in poorer predictive accuracy of the quality models that utilize static coupling measurement. In this paper, We propose dynamic coupling measurement techniques. First the source code is introspected and all the functions are added with some trace events. Then the source code is compiled and allowed to run. During runtime the trace events are logged. This log report provides the actual function call information (AFCI) during the runtime. Based on AFCI the source code is filtered to arrive the actual runtime used source code (ARUSC). The ARUSC is then given for any standard coupling technique to get the dynamic coupling.",2008,0, 3777,An Adaptive Digital Front-End for Multimode Wireless Receivers,"The ongoing evolution from 2G to 3G and beyond requires multimode operation of wireless cellular transceivers. This paper discusses the design of an adaptive wireless receiver for multimode operation and power efficiency. This is achieved by sharing digital signal processing stages for all operation modes and adapting them such that performance requirements are met at a minimum power consumption. The flexibility a digital front-end introduces in combination with an analog zero-intermediate-frequency receiver is studied with respect to the main cellular communication standards (Global System for Mobile Communication, Enhanced Data Rates for Global Evolution, Wide-Band Code-Division Multiple Access (WCDMA)/High-Speed Downlink Packet Access, and CDMA2000 and satellite navigation systems (Galileo), with considerations of impacts on the capability of an implementation of a software-defined radio. The proposed approach is verified by simulations exhibiting an error-vector magnitude of 2.9% in Universal Mobile Telecommunications System mode while the estimated power consumption can be reduced by 62% versus full featured mode at reasonable degradation in modulation quality.",2008,0, 3778,Performance of Three Mode-Meter Block-Processing Algorithms for Automated Dynamic Stability Assessment,The frequency and damping of electromechanical modes offer considerable insight into the dynamic stability properties of a power system. 
The performance properties of three mode-estimation block-processing algorithms from the perspective of near real-time automated stability assessment are demonstrated and examined. The algorithms are: the extended modified Yule Walker (YW); extended modified Yule Walker with spectral analysis (YWS); and sub-space system identification (N4SID). The YW and N4SID have been introduced in previous publications while the YWS is introduced here. Issues addressed include: stability assessment requirements; automated subset selecting identified modes; using algorithms in an automated format; data assumptions and quality; and expected algorithm estimation performance.,2008,0, 3779,Performance analysis of multi-homed transport protocols with network failure tolerance,"The performance of multi-homed transport protocols tolerant of network failure is studied. It evaluates the performance of different retransmission policies combined with path failure detection thresholds, infinite or finite receive buffers for various path bandwidths, delays and loss rate conditions through stream control transmission protocol simulation. The results show that retransmission policies perform differently with different path failure detection threshold configurations. It identifies that retransmission of all data on an alternate path with the path failure detection threshold set to zero performs the best in symmetric path conditions but its performance degrades acutely in asymmetric path conditions even when the alternate path delay is shorter than the primary path delay. It illustrates that retransmission of all data on the same path with the path failure detection threshold set to one or zero gives the most stable performance in all path configurations.",2008,0, 3780,An On-Demand Test Triggering Mechanism for NoC-Based Safety-Critical Systems,"As embedded and safety-critical applications begin to employ many-core SoCs using sophisticated on-chip networks, ensuring system quality and reliability becomes increasingly complex. Infrastructure IP has been proposed to assist system designers in meeting these requirements by providing various services such as testing and error detection, among others. One such service provided by infrastructure IP is concurrent online testing (COLT) of SoCs. COLT allows system components to be tested in-field and during normal operation of the SoC However, COLT must be used judiciously in order to minimize excessive test costs and application intrusion. In this paper, we propose and explore the use of an anomaly-based test triggering unit (ATTU) for on-demand concurrent testing of SoCs. On-demand concurrent testing is a novel solution to satisfy the conflicting design constraints of fault-tolerance and performance. Ultimately, this ensures the necessary level of design quality for safety-critical applications. To validate this approach, we explore the behavior of the ATTU using a NoC-based SoC simulator. The test triggering unit is shown to trigger tests from test infrastructure IP within 1 ms of an error occurring in the system while detecting 81% of errors, on average. Additionally, the ATTU was synthesized to determine area and power overhead.",2008,0, 3781,Quantified Impacts of Guardband Reduction on Design Process Outcomes,"The value of guardband reduction is a critical open issue for the semiconductor industry. 
For example, due to competitive pressure, foundries have started to incent the design of manufacturing-friendly ICs through reduced model guardbands when designers adopt layout restrictions. The industry also continuously weighs the economic viability of relaxing process variation limits in the technology roadmap [2]. Our work gives the first-ever quantification of the impact of modeling guardband reduction on outcomes from the synthesis, place and route (SP&R) implementation flow. We assess the impact of model guardband reduction on various metrics of design cycle time and design quality, using open-source cores and production (specifically, ARM/TSMC) 90 nm and 65 nm technologies and libraries. Our experimental data clearly shows the potential design quality and turnaround time benefits of model guardband reduction. For example, we typically (i.e., on average) observe 13% standard-cell area reduction and 12% routed wirelength reduction as the consequence of a 40% reduction in library model guardband; 40% is the amount of guardband reduction reported by IBM for a variation-aware timing methodology [8]. We also assess the impact of guardband reduction on design yield. Our results suggest that there is justification for the design, EDA and process communities to enable guardband reduction as an economic incentive for manufacturing-friendly design practices.",2008,0, 3782,Greensand Mulling Quality Determination Using Capacitive Sensors,"Cast iron foundries typically use molds made from a mixture of sand, clay, and water called greensand. The mixing of the clay and water into the sand requires the clay to be smeared around the sand particles in a very thin layer, which allows the mold to have strength while allowing gas to escape. This mixing is called mulling and takes place in a machine called a muller. At present, the industry uses electrical resistance measurements to determine water quantity in the muller. The resistance measurements can not accurately predict the quality of mulling due to binding of the water to sodium and calcium ions in the clay. Poorly mixed greensand has a high resistance when the water is concentrated in a few areas, then a medium mixed greensand has a lower resistance because the water is present between the sensors, and a well mulled sand has a higher resistance when the clay binds the water. This paper investigates the feasibility of using capacitive sensors to measure mulling quality using the simulation software Ansoft Maxwell. A second investigation of this paper is to find the ability of capacitance sensors to determine the drying effect of molds delayed in the casting process.",2008,0, 3783,Momentum-Based Motion Detection Methodology for Handoff in Wireless Networks,"This paper presents a novel motion detection scheme by using the momentum of received signal strength (MRSS) to improve the quality of handoff in a general wireless network. MRSS can detect the motion state of a mobile node (MN) without assistance of any positioning service. Although MRSS is sensitive in detecting user's motion, it is static and fails to detect quickly the motion changes of users. Thus, a novel motion state dependent MRSS scheme called dynamic MRSS (DMRSS) algorithm is proposed to address this issue. Extensive simulation experiments were conducted to study performance of our presented algorithms.
The simulation results show that MRSS and DMRSS can be used to assist a handoff algorithm in substantially reducing unnecessary handoff and saving power.",2008,0, 3784,SOSRAID-6: A Self-Organized Strategy of RAID-6 in the Degraded Mode,"The distinct benefit of RAID-6 is that it provides higher reliability than the other RAID levels for tolerating double disk failures. Whereas, when a disk fails the read/write operations on the failed disk will be redirected to all the surviving disks, which will increase the burden of the surviving disks, the probability of the disk failure and the energy consumption along with the degraded performance issue. In this paper, we present a Self-Organized Strategy (SOS) to improve the performance of RAID-6 in the degraded mode. SOS organizes the data on the failed disks to the corresponding parity locations on first access. Then the later accesses to the failed disks will be redirected to the parity locations rather than all the surviving disks. Besides the performance improvement the SOSRAID-6 reduces the failure probability of the survived disks and is more energy efficient compared with the Traditional RAID-6. With the theoretical evaluation we find that the SOSRAID-6 is more powerful than the TRAID-6.",2008,0, 3785,eMuse: QoS Guarantees for Shared Storage Servers,"Storage consolidation as a perspective paradigm inevitably leads to the extensive installations of shared storage servers in product environments. However, owing to the dynamics of both workloads and storage systems, it is pragmatic only if each workload accessing common storage servers can surely possess a specified minimum share of system resources even when competing with other workloads and consequently obtain predictable quality of service (QoS). This paper presents an I/O scheduling framework for shared storage servers. The eMuse algorithm in the framework employs a dynamic assignment mechanism that not only accommodates a weighted bandwidth share for every active workload, but also fulfills their latency requirements through a fair queuing policy. Experimental results demonstrate that our scheduling framework can accomplish performance isolation among multiple competing workloads as well as the effective utilization of system resources.",2008,0, 3786,Semi-quantitative Modeling for Managing Software Development Processes,"Software process modeling has become an essential technique for managing software development processes. However, purely quantitative process modeling requires a detailed understanding and accurate measurement of software process, which relies on reliable and precise history data. This paper presents a semi-quantitative process modeling approach to model and manage software development processes. It allows for the existence of uncertainty and contingency during software development, and facilitates a manager's qualitative and quantitative estimates and assessments of process progress. We demonstrate its value and flexibility by developing semi-quantitative models of the test-and-fix process of incremental software development. Results conclude that the semi-quantitative process modeling approach can support process or project management activities, including estimating, planning, tracking and decision making throughout the software development cycle.",2008,0, 3787,Checklist Based Reading's Influence on a Developer's Understanding,"This paper addresses the influence the checklist based reading inspection technique has on a developer's ability to modify inspected code. 
Traditionally, inspections have been used to detect defects within the development life cycle. This research identified a correlation between the number of defects detected and the successful code extensions for new functionality unrelated to the defects. Participants reported that having completed a checklist inspection, modifying the code was easier because the inspection had given them an understanding of the code that would not have existed otherwise. The results also showed a significant difference in how developers systematically modified code after completing a checklist inspection when compared to those who had not performed a checklist inspection. This study has shown that the benefits of applying software inspections for purposes other than defect detection include software understanding and comprehension.",2008,0, 3788,Automated Usability Testing Using HUI Analyzer,"In this paper, we present an overview of HUI Analyzer, a tool intended for automating usability testing. The tool allows a user interface's expected and actual use to be captured unobtrusively, with any mismatches indicating potential usability problems being highlighted. HUI Analyzer also supports specification and checking of assertions governing a user interface's layout and actual user interaction. Assertions offer a low cost means of detecting usability defects and are intended to be checked iteratively during a user interface's development. Hotspot analysis is a feature that highlights the relative use of GUI components in a form. This is useful in informing form layout, for example to collocate heavily used components thereby reducing unnecessary scrolling or movement. Based on evaluation, we have found HUI Analyzer's performance in detecting usability defects to be comparable to conventional formal user testing. However the time taken by HUI Analyzer to automatically process and analyze user interactions is much less than that for formal user testing.",2008,0, 3789,A Design Approach for Soft Error Protection in Real-Time Embedded Systems,"The decreasing line widths employed in semiconductor technologies means that soft errors are an increasing problem in modern system on a chip designs. Approaches adopted so far have focused on recovery after detection. In real-time systems, though, that can easily lead to missed deadlines. This paper proposes a preventative approach. Specifically a design methodology that uses metrics in design space exploration that highlight where in the structure of the systems model and at what point in its behaviour, protection is needed against soft errors. The approach does not eliminate the impact of soft errors completely, but aims to significantly reduce their impact.",2008,0, 3790,Test Suite for the LHCb Muon Chambers Quality Control,"This paper describes the apparatus and the procedures implemented to test the front-end (FE) electronics of the LHCb muon detector multi wire proportional chambers (MWPC). Aim of the test procedure is to diagnose every FE channel of a given chamber by performing an analysis of the noise rate versus threshold and of the performances at the operational thresholds. Measurements of the key noise parameters, obtained while performing quality tests on the MWPC chambers before the installation on the experiment, are presented. The test suite proved to be an automatic, fast and user-friendly system for mass production tests of chambers.
It provided the electronic identification of every chamber and FE board, and the storage and bookkeeping of test results that will be made available to the experiment control system during data taking.",2008,0, 3791,Thermal Balancing Policy for Streaming Computing on Multiprocessor Architectures,"As feature sizes decrease, power dissipation and heat generation density exponentially increase. Thus, temperature gradients in multiprocessor systems on chip (MPSoCs) can seriously impact system performance and reliability. Thermal balancing policies based on task migration have been proposed to modulate power distribution between processing cores to achieve temperature flattening. However, in the context of MPSoC for multimedia streaming computing, where timeliness is critical, the impact of migration on quality of service must be carefully analyzed. In this paper we present the design and implementation of a lightweight thermal balancing policy that reduces on-chip temperature gradients via task migration. This policy exploits run-time temperature and load information to balance the chip temperature. Moreover, we assess the effectiveness of the proposed policy for streaming computing architectures using a cycle-accurate thermal-aware emulation infrastructure. Our results using a real-life software defined radio multitask benchmark show that our policy achieves thermal balancing while keeping migration costs bounded.",2008,0, 3792,SLOT: A Fast and Accurate Technique to Estimate Available Bandwidth in Wireless IEEE 802.11,"Accuracy of available estimated bandwidth and convergence delay algorithm are researchers' challenges in wireless network measurements. Currently a variety of tools are developed to estimate the end-to-end network available bandwidth such as TOPP (Train Of Packet Pair), SLoPS (Self Loading Periodic Stream), Spruce and Variable Packet Size etc. These tools are important to improve the QoS (Quality of Service) of demanding applications. However the performances of the previous tools are very different in terms of probing time and estimation accuracy. In this paper we present a fast and accurate bandwidth estimation technique named SLOT. By using NS2 the performance of SLOT technique will be evaluated and the obtained results are analysed by MATLAB software and compared with the TOPP and the SLoPS ones. We show that SLOT has shorter probing time than the TOPP one, and it provides more accurate available estimated bandwidth than the SLoPS one.",2008,0, 3793,A Coverage-Based Handover Algorithm for High-speed Data Service,"4G supports various types of services. The coverage of high-speed data service is smaller than that of low-speed data service, which makes the high-speed data service users experience dropping before reaching the handover area. In order to solve the problem, this paper proposes A Coverage-based Handover Algorithm for High-speed Data Service (CBH), which extends the coverage of high-speed data service by reducing source rate and makes these users acquire ""transient coverage"". Meanwhile, FSES (Faint Sub-carrier Elimination Strategy) is introduced, which utilizes the ""transient handover QoS"" and reduces the handover effect to target cell for high-speed data service users.
The simulation results show that the new algorithm can improve the whole system performance, reduce the handover dropping probability and new call blocking probability, and enhance the resource utilization ratio.",2008,0, 3794,CiCUTS: Combining System Execution Modeling Tools with Continuous Integration Environments,"System execution modeling (SEM) tools provide an effective means to evaluate the quality of service (QoS) of enterprise distributed real-time and embedded (DRE) systems. SEM tools facilitate testing and resolving performance issues throughout the entire development life-cycle, rather than waiting until final system integration. SEM tools have not historically focused on effective testing. New techniques are therefore needed to help bridge the gap between the early integration capabilities of SEM tools and testing so developers can focus on resolving strategic integration and performance issues, as opposed to wrestling with tedious and error-prone low-level testing concerns. This paper provides two contributions to research on using SEM tools to address enterprise DRE system integration challenges. First, we evaluate several approaches for combining continuous integration environments with SEM tools and describe CiCUTS, which combines the CUTS SEM tool with the CruiseControl.NET continuous integration environment. Second, we present a case study that shows how CiCUTS helps reduce the time and effort required to manage and execute integration tests that evaluate QoS metrics for a representative DRE system from the domain of shipboard computing. The results of our case study show that CiCUTS helps developers and testers ensure the performance of an example enterprise DRE system is within its QoS specifications throughout development, instead of waiting until system integration time to evaluate QoS.",2008,0, 3795,Estimation of Defects Based on Defect Decay Model: ED^{3}M,"An accurate prediction of the number of defects in a software product during system testing contributes not only to the management of the system testing process but also to the estimation of the product's required maintenance. Here, a new approach called ED3M is presented that computes an estimate of the total number of defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches the technique presented here does not depend on historical data from previous projects or any assumptions about the requirements and/or testers' productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process. This is a key advantage of the ED3M approach, as it makes it widely applicable in different testing environments. Here, the ED3M approach has been evaluated using five data sets from large industrial projects and two data sets from the literature. In addition, a performance analysis has been conducted using simulated data sets to explore its behavior using different models for the input data. The results are very promising; they indicate the ED3M approach provides accurate estimates with as fast or better convergence time in comparison to well-known alternative techniques, while only using defect data as the input.",2008,0, 3796,Reengineering Idiomatic Exception Handling in Legacy C Code,"Some legacy programming languages, e.g., C, do not provide adequate support for exception handling. As a result, users of these legacy programming languages often implement exception handling by applying an idiom.
An idiomatic style of implementation has a number of drawbacks: applying idioms can be fault prone and requires significant effort. Modern programming languages provide support for structured exception handling (SEH) that makes idioms largely obsolete. Additionally, aspect-oriented programming (AOP) is believed to further reduce the effort of implementing exception handling. This paper investigates the gains that can be achieved by reengineering the idiomatic exception handling of a legacy C component to these modern techniques. First, we will reengineer a C component such that its exception handling idioms are almost completely replaced by SEH constructs. Second, we will show that the use of AOP for exception handling can be beneficial, even though the benefits are limited by inconsistencies in the legacy implementation.",2008,0, 3797,Visual Detection of Design Anomalies,"Design anomalies, introduced during software evolution, are frequent causes of low maintainability and low flexibility to future changes. Because of the required knowledge, an important subset of design anomalies is difficult to detect automatically, and therefore, the code of anomaly candidates must be inspected manually to validate them. However, this task is time- and resource-consuming. We propose a visualization-based approach to detect design anomalies for cases where the detection effort already includes the validation of candidates. We introduce a general detection strategy that we apply to three types of design anomaly. These strategies are illustrated on concrete examples. Finally we evaluate our approach through a case study. It shows that performance variability against manual detection is reduced and that our semi-automatic detection has good recall for some anomaly types.",2008,0, 3798,Modularity-Oriented Refactoring,"Refactoring, in spite of widely acknowledged as one of the best practices of object-oriented design and programming, still lacks quantitative grounds and efficient tools for tasks such as detecting smells, choosing the most appropriate refactoring or validating the goodness of changes. This is a proposal for a method, supported by a tool, for cross-paradigm refactoring (e.g. from OOP to AOP), based on paradigm and formalism-independent modularity assessment.",2008,0, 3799,Assessing quality of web based systems,"This paper proposes an assessment model for Web-based systems in terms of non-functional properties of the system. The proposed model consists of two stages: (i) deriving quality metrics using goal-question-metric (GQM) approach; and (ii) evaluating the metrics to rank a Web based system using multi-element component comparison analysis technique. The model ultimately produces a numeric rating indicating the relative quality of a particular Web system in terms of selected quality attributes. We decompose the quality objectives of the web system into sub goals, and develop questions in order to derive metrics. The metrics are then assessed against the defined requirements using an assessment scheme.",2008,0, 3800,An evaluation method for aspectual modeling of distributed software architectures,"Dealing with crosscutting requirements in software development usually makes the process more complex. Modeling and analyzing of these requirements in the software architecture facilitate detecting architectural risks early. Distributed systems have more complexity and so these facilities are much useful in development of such systems. 
Aspect oriented Architectural Description Languages (ADLs) have emerged to represent solutions for discussed problems; nevertheless, imposing radical changes to existing architectural modeling methods is not easily acceptable by architects. Software architecture analysis methods, furthermore, intend to verify that the quality requirements have been addressed properly. In this paper, we enhance ArchC# through utilization of aspect features with an especial focus on Non-Functional Requirements (NFR). ArchC# is mainly focused on describing architecture of distributed systems; in addition, it unifies software architecture with an object-oriented implementation to make executable architectures. Moreover, in this paper, a comparative analysis method is presented for evaluation of the result. All of these materials are illustrated along with a case study.",2008,0, 3801,Derivation of Fault Tolerance Measures of Self-Stabilizing Algorithms by Simulation,"Fault tolerance measures can be used to distinguish between different self-stabilizing solutions to the same problem. However, derivation of these measures via analysis suffers from limitations with respect to scalability of and applicability to a wide class of self-stabilizing distributed algorithms. We describe a simulation framework to derive fault tolerance measures for self-stabilizing algorithms which can deal with the complete class of self-stabilizing algorithms. We show the advantages of the simulation framework in contrast to the analytical approach not only by means of accuracy of results, range of applicable scenarios and performance, but also for investigation of the influence of schedulers on a meta level and the possibility to simulate large scale systems featuring dynamic fault probabilities.",2008,0, 3802,An Innovative Transient-Based Protection Scheme for MV Distribution Networks with Distributed Generation,This paper presents a new transient based scheme to detect single phase to ground faults (or grounded faults in general) in distribution systems with high penetration of distributed generation. This algorithm combines the fault direction based approach and the distance estimation based approach in order to determine the faulted section. The wavelet coefficients of the transient fault currents measured at the interconnection points of the network are used to determine the direction of fault currents and to estimate the distance of the fault. The simulations have been carried out by using DigSilent software package and the results have been processed in MATLAB using the Wavelet Toolbox.,2008,0, 3803,Quality-Aware Retrieval of Data Objects from Autonomous Sources for Web-Based Repositories,"The goal of this paper is to develop a framework for designing good data repositories for Web applications. The central theme of our approach is to employ statistical methods to predict quality metrics. These prediction quantities can be used to answer important questions such as: How soon should the local repository be synchronized to have a quality of at least 90% precision with certain confidence level?
Suppose the local repository was synchronized three days ago, how many objects could have been deleted at the remote source since then?",2008,0, 3804,The Infamous Ratio Measure,Examines the use of ratios and other derived metrics in software measurement.,2008,0, 3805,Surveillance and Quality Evaluation System for Real-Time Audio Signal Streaming,"This paper deals with the design of an algorithm and application for an automatic system of real-time audio signal quality surveillance on personal computers. This system works without any knowledge of how the audio signal is processed. The audio signal quality and possible signal failures are monitored by being compared with the original signal in the time-frequency domain. The system supports several audio formats from mono to 5.1 and downmix for comparison between different audio formats. For high performance, SIMD technologies are used. The whole system consists of several local surveillance applications (clients) and a server application with SNMP client that monitors the local application states using the TCP/IP network.",2008,0, 3806,Modeling Resource Sharing Dynamics of VoIP Users over a WLAN Using a Game-Theoretic Approach,"We consider a scenario in which users share an access point and are mainly interested in VoIP applications. Each user is allowed to adapt to varying network conditions by choosing the transmission rate at which VoIP traffic is received. We denote this adaptation process by end-user congestion control, our object of study. The two questions that we ask are: (1) what are the performance consequences of letting the users to freely choose their rates? and (2) how to explain the adaptation process of the users? We set a controlled lab experiment having students as subject to answer the first question, and we extend an evolutionary game-theoretic model to address the second. Our partial answers are the following: (1) free users with local information can reach an equilibrium which is close to optimal from the system perspective. However, the equilibrium can be unfair; (2) the adaptation of the users can be explained using a game theoretic model. We propose a methodology to parameterize the latter, which involves active network measurements, simulations and an artificial neural network to estimate the QoS perceived by the users in each of the states of the model.",2008,0, 3807,Speech privacy for modern mobile communication systems,"Speech privacy techniques are used to scramble clear speech into an unintelligible signal in order to avoid eavesdropping. Some analog speech-privacy equipments (scramblers) have been replaced by digital encryption devices (comsec), which have higher degree of security but require complex implementations and large bandwidth for transmission. However, if speech privacy is wanted in a mobile phone using a modern commercial codec, such as the AMR (adaptive multirate) codec, digital encryption may not be an option due to the fact that it requires internal hardware and software modifications. If encryption is applied before the codec, poor voice quality may result, for the vocoder would handle digitally encrypted signal resembling noise. On the other hand, analog scramblers may be placed before the voice encoder without causing much penalty to its performance. Analog scramblers are intended in applications where the degree of security is not too critical and hardware modifications are prohibitive due to its high cost. 
In this article we investigate the use of different techniques of voice scramblers applied to mobile communications vocoders. We present our results in terms of LPC and cepstral distances, and PESQ values.",2008,0, 3808,Quality Attributes - Architecting Systems to Meet Customer Expectations,"This paper addresses the use of quality attributes as a mechanism for making objective decisions about architectural tradeoffs and for providing reasonably accurate predictions about how well candidate architectures will meet customer expectations. Typical quality attributes important to many current systems of interest include: performance, dependability, security, and safety. This paper begins with an examination of how quality attributes and architectures are related, including some the seminal work in the area, and a survey of the current standards addressing product quality and evaluation. The implications for both the customer and the system developer of employing a quality- attribute-based approach to architecture definition and trade-off are then briefly explored. The paper also touches on the relationship of an architectural quality-attribute-based approach to engineering process and process maturity. Lastly the special concerns of architecting for system assurance are addressed.",2008,0, 3809,System of Systems Issues for the 2008 U. S. National Healthcare Information Network Remote Patient Monitoring Requirements,"This paper describes a number of new system of systems engineering (SoSE) issues that must be addressed in order to design and deploy a safe, secure, and private informatics infrastructure for remote patient monitoring systems that are interoperable with the emerging U.S. National Healthcare Information Network (NHIN). Motivations for NHIN's ambitious remote patient monitoring goals - such as reducing the cost of care and improving medical care quality for chronically ill patients - are introduced, and the technological requirements and challenges that arise are described. This paper demonstrates the use of SoSE modeling, simulation, verification and validation techniques similar to the emerging draft of INCOSE's SoSE Engineering Guide to improve the success of such projects. The tools used include event driven modeling and software simulation tools to assist in the design and evaluation of predictive software models that simulate key safety and performance aspects of proposed remote patient monitoring systems.",2008,0, 3810,The Role of System Behavior in Diagnostic Performance,"The diagnostic performance of system built-in-test has historically suffered from deficiencies such as high false alarm rates, high undetected failure rates and high fault isolation ambiguity. In general these deficiencies impose a burden on maintenance resources and can affect mission readiness and effectiveness. Part of the problem has to do with the blurred distinction between physical faults and the test failures used to detect those faults. A greater part of the problem has to do with the test limits used to establish pass/fail criteria. If the limits do not reflect system behavior that is to be expected, given its current no fault (or fault) status, then a test fail result can often be a false alarm, and a test pass result can often constitute an undetected fault. 
A model based approach to prediction of system behavior can do much to alleviate the problem.",2008,0, 3811,A Methodology for Performance Modeling of Distributed Event-Based Systems,"Distributed event-based systems (DEBS) are gaining increasing attention in new application areas such as transport information monitoring, event-driven supply-chain management and ubiquitous sensor-rich environments. However, as DEBS increasingly enter the enterprise and commercial domains, performance and quality of service issues are becoming a major concern. While numerous approaches to performance modeling and evaluation of conventional request/reply-based distributed systems are available in the literature, no general approach exists for DEBS. This paper is the first to provide a comprehensive methodology for workload characterization and performance modeling of DEBS. A workload model of a generic DEBS is developed and operational analysis techniques are used to characterize the system traffic and derive an approximation for the mean event delivery latency. Following this, a modeling technique is presented that can be used for accurate performance prediction. The paper is concluded with a case study of a real life system demonstrating the effectiveness and practicality of the proposed approach.",2008,0, 3812,Automatic simulation of transmission line backflashover rate base on ATP,"The design of a novel software application that can be used to calculate the backflashover rate of a transmission line is presented. The software application was developed by combining ATP with VC++. The software structure and the input/output interface are provided, as well as the implementation details. In the calculation, the program compares the insulators' volt-time curves to the overvoltage to judge the insulators' behavior, and considers the influence of the system voltage and the induced voltage on the backflash lightning withstanding level. A total line backflashover rate calculation example shows that it's convenient for estimating the backflashover rate of a total line and studying the effect of lightning performance improvement methods such as reducing grounding resistance with the software application. This software application's design approach is suitable for other transient calculations using ATP.",2008,0, 3813,Three-phase four-wire DSTATCOM based on a three-dimensional PWM algorithm,"A modified voltage space vector pulse-width modulated (PWM) algorithm for three-phase four-wire distribution static compensator (DSTATCOM) is described in this paper. The mathematical model of shunt-connected three-leg inverter in three-phase four-wire system is studied in a-b-c frame. The p-q-o theory based on the instantaneous reactive power theory is applied to detect the reference current. A fast and generalized applicable three-dimensional space vector modulation (3DSVM) is proposed for controlling a three-leg inverter. The reference voltage vector is decomposed into an offset vector and a two-level vector. So identification of neighboring vectors and dwell times calculation are all settled by a general two-level 3DSVM control. This algorithm can also be applied to multilevel inverter. The zero-sequence component of each vector is considered in order to implement the neutral current compensation. The simulation is performed by EMTDC/PSCAD software. The neutral current, harmonics current, unbalance current and reactive current can be compensated.
The result shows the validity of the proposed 3DSVM, which can be applied to compensate power quality problems in a three-phase four-wire system.",2008,0, 3814,Interactive Software and Hardware Faults Diagnosis Based on Negative Selection Algorithm,"Both hardware and software of computer systems are subject to faults. However, traditional methods, ignoring the relationship between software fault and hardware fault, are ineffective to diagnose complex faults between software and hardware. On the basis of defining the interactive effect to describe the process of the interactive software and hardware fault, this paper presents a new matrix-oriented negative selection algorithm to detect faults. Furthermore, the row vector distance and matrix distance are constructed to measure elements between the self set and detector set. The experiment on a temperature control system indicates that the proposed algorithm has good fault detection ability, and the method is applicable to diagnose interactive software and hardware faults with small samples.",2008,0, 3815,Intelligibility and Space-based Voice with Relaxed Delay Constraints,"In this paper, we leverage the long range end-to-end scenarios envisioned for lunar and beyond voice conversations by allowing non-traditional additional processing delay for quasi-real-time voice conversations. The concept of improving the quality of end-to-end voice conversations for long delay environments is considered by utilizing Luby transforms on the conjugate-structure algebraic-code-excited linear-prediction (CS-ACELP) codec. In addition, this paper also examines the use of automated speech recognition software as a means of generating a quantifiable metric for speech intelligibility in the spirit of the diagnostic rhyme test (DRT).",2008,0, 3816,Verification of a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Clock Synchronization,"This paper presents the mechanical verification of a simplified model of a rapid byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the symbolic model verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.",2008,0, 3817,Automated Generation and Assessment of Autonomous Systems Test Cases,"Verification and validation testing of autonomous spacecraft routinely culminates in the exploration of anomalous or faulted mission-like scenarios.
Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes; or, creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it's likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage. For example, generate cases for all possible fault monitors, and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predicts individually is impractical, and generating predicts with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.",2008,0, 3818,Lights-Out Scenario Testing for the New Horizons Autonomous Operations Subsystem,"New Horizons is a NASA sponsored mission to explore Pluto and its largest moon Charon. The New Horizons spacecraft, designed, built and operated by the Johns Hopkins University Applied Physics Laboratory (APL), was successfully launched in January 2006 and will perform its primary mission at Pluto in the summer of 2015. To support this mission, the spacecraft is equipped with onboard software that provides a rule based expert system for performing autonomous fault detection and recovery. This system has been updated nine times since launch and is continuously being tested to ascertain its performance in various spacecraft states. The test approach for the autonomous fault protection subsystem is to perform a combination of unit-level tests and full system scenario tests. For the scenario tests, we have developed a ""lights out"" test method and have been using it to reduce the time required to run each fault scenario test. This approach reduces the time it takes to develop a test, reduces the number of man hours required to run the test, and decouples the initial spacecraft state from the mechanisms used to inject faults.
Decoupling the initial state from the fault injection allows for easy expansion in the number of initial state/fault combinations that can be tested. This significantly improves the test coverage of the scenario test suite. Using this approach, we will be able to run more tests and increase our working knowledge of the performance of the fault protection subsystem. This paper describes the evolution, benefits and cautions of the ""lights out"" scenario test process.",2008,0, 3819,An Introspection Framework for Fault Tolerance in Support of Autonomous Space Systems,"This paper describes a software system designed for the support of future autonomous space missions by providing an infrastructure for runtime monitoring, analysis, and feedback. The objective of this research is to make mission software executing on parallel on-board architectures fault tolerant through an introspection mechanism that provides automatic recovery minimizing the loss of function and data. Such architectures are essential for future JPL missions because of their increased need for autonomy along with enhanced on-board computational capabilities while in deep space or time-critical situations. The standard framework for introspection described in this paper integrates well with existing flight software architectures and can serve as an enabling technology for the support of such systems. Furthermore, it separates the introspection capability from applications and the underlying system, providing a generic framework that can be also applied to a broad range of problems beyond fault tolerance, such as behavior analysis, intrusion detection, performance tuning, and power management.",2008,0, 3820,False Alarm Mitigation of Vibration Diagnostic Systems,"False alarms in legacy aircraft diagnostic systems have negatively impacted fleet maintenance costs and mission readiness. As the industry moves towards more advanced prognostic and health management (PHM) solutions, a reduction in false alarms is needed to reduce the cost and readiness burdens that have plagued legacy systems. It is therefore important to understand why these false alarms occur and how they are generated so appropriate mitigation solutions can be included in next-generation diagnostic systems. This paper examines four major sources of false alarms in the development of vibration diagnostics (faulty sensor performance, transient system operating conditions, improper health indicator selection, and inadequate fault detection logic) and details a solution designed to mitigate their impact. An overview of the developed false alarm statistics toolbox for PHM (FAST PHM™) software is also provided to illustrate how the software guides design engineers through the processes of verifying data, processing for diagnostic features, analyzing feature performances, developing ""virtual"" features through fusion, and deriving statistically optimized feature thresholds. The developed approach will improve the overall performance, robustness, and reliability of vibration diagnostic and prognostics systems.",2008,0, 3821,Software Maintenance Implications on Cost and Schedule,"Software Maintenance Implications on Cost and Schedule. The dictionary defines maintenance as, ""The work of keeping something in proper order."" However, this definition does not necessarily fit for software. Software maintenance is different from hardware maintenance because software doesn't physically wear out, but often gets less useful with age.
Software is typically delivered with undiscovered flaws. Therefore, software maintenance is: ""The process of modifying existing operational software while leaving its primary functions intact."" Maintenance typically exceeds fifty percent of the systems' life cycle cost. Additionally, software is highly dependent on defined maintenance rigor and operational life expectancy. Software maintenance generally includes sustaining engineering and new function development; corrective changes (fixing bugs); adapting to new requirements (OS upgrade, new processor); perfecting or improving existing functions (improve speed, performance); enhancing application with (minor) new functions (new feature.) Since software maintenance costs can be somewhat set by definition, the implications on cost and schedule must be evaluated. Development decisions, processes, and tools can impact maintenance costs. But, generally even a perfectly delivered system quickly needs upgrades. While software maintenance can be treated as a level of effort activity, there are consequences on quality, functionality, reliability, cost and schedule that can be mitigated through the use of parametric estimation techniques.",2008,0, 3822,A Methodology for Performance Modeling and Simulation Validation of Parallel 3-D Finite Element Mesh Refinement With Tetrahedra,"The design and implementation of parallel finite element methods (FEMs) is a complex and error-prone task that can benefit significantly by simulating models of them first. However, such simulations are useful only if they accurately predict the performance of the parallel system being modeled. The purpose of this contribution is to present a new, practical methodology for validation of a promising modeling and simulation approach for parallel 3-D FEMs. To meet this goal, a parallel 3-D unstructured mesh refinement model is developed and implemented based on a detailed software prototype and parallel system architecture parameters in order to simulate the functionality and runtime behavior of the algorithm. Estimates for key performance measures are derived from these simulations and are validated with benchmark problem computations obtained using the actual parallel system. The results illustrate the potential benefits of the new methodology for designing high performance parallel FEM algorithms.",2008,0, 3823,Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings,"Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. 
To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.",2008,0, 3824,FEDC: Control Flow Error Detection and Correction for Embedded Systems without Program Interruption,"This paper proposes a new technique called CFEDC to detect and correct control flow errors (CFEs) without program interruption. The proposed technique is based on the modification of application software and minor changes in the underlying hardware. To demonstrate the effectiveness of CFEDC, it has been implemented on an OpenRISC 1200 as a case study. Analytical results for three workload programs show that this technique detects all CFEs and corrects on average about 81.6% of CFEs. These figures are achieved with zero error detection/correction latency. According to the experimental results, the overheads are generally low as compared to other techniques; the performance overhead and the memory overhead are on average 8.5% and 9.1%, respectively. The area overhead is about 4% and the power dissipation increases by the amount of 1.5% on average.",2008,0, 3825,Quantitative Assessment of Enterprise Security System,"In this paper we extend a model-based approach to security management with concepts and methods that provide a possibility for quantitative assessments. For this purpose we introduce security metrics and explain how they are aggregated using the underlying model as a frame. We measure numbers of attack of certain threats and estimate their likelihood of propagation along the dependencies in the underlying model. Using this approach we can identify which threats have the strongest impact on business security objectives and how various security controls might differ with regard to their effect in reducing these threats.",2008,0, 3826,Admon: ViSaGe Administration And Monitoring Service For Storage Virtualization in Grid Environment,"This work is part of the ViSaGe project. This project concerns the storage virtualization applied in the grid environment. Its goal is to create and manage virtual storage resources by aggregating geographically distributed physical storage resources shared on a Grid. To these shared resources, a quality of service will be associated and related to the data storage performance. In a Grid environment, sharing resources may improve performances if these resources are well managed and if the management software obtains sufficient knowledge about the grid resources workload (computing nodes, storage nodes and links). The grid resources workload is mainly perceived by a monitoring system. Several existing monitoring systems are available for monitoring the grid resources and applications. Each one provides informations according to its aim. For example, Network Weather Service [6] is designed to monitor grid computing nodes and links. On the other hand, Netlogger [8] monitors grid resources and applications in order to detect application's bottleneck. These systems are useful for a post mortem analysis. 
However, in ViSaGe, we need a system that analyzes the necessary nodes state during execution time. This system is represented by the ViSaGe Administration and monitoring service ""Admon"". In this paper, we present our scalable distributed system: Admon. Admon traces applications, and calculates the system resources consumption (CPU, Disks, Links). Its originality consists in providing a new opportunity to improve virtual data storage performance by using workload's constraints (e.g., maximum CPU usage percentage). Many constraints will be discussed (such as time's constraint). These constraints allow performance improvement by assigning nodes to the ViSaGe's jobs in an effective manner.",2008,0, 3827,Efficient Approximate Wordlength Optimization,"In this paper, the problem of bounding the performance of good wordlength combinations for fixed-point digital signal processing flowgraphs is addressed. By formulating and solving an approximate optimization problem, a lower bounding curve on attainable cost/quality combinations is rapidly calculated. This curve and the associated wordlength combinations are useful in several situations, and can serve as starting points for real design searches. A detailed design example that utilizes these concepts is given.",2008,0, 3828,Adaptive Fault Management of Parallel Applications for High-Performance Computing,"As the scale of high-performance computing (HPC) continues to grow, failure resilience of parallel applications becomes crucial. In this paper, we present FT-Pro, an adaptive fault management approach that combines proactive migration with reactive checkpointing. It aims to enable parallel applications to avoid anticipated failures via preventive migration and, in the case of unforeseeable failures, to minimize their impact through selective checkpointing. An adaptation manager is designed to make runtime decisions in response to failure prediction. Extensive experiments, by means of stochastic modeling and case studies with real applications, indicate that FT-Pro outperforms periodic checkpointing, in terms of reducing application completion times and improving resource utilization, by up to 43 percent.",2008,0, 3829,Localization of IP Links Faults Using Overlay Measurements,"Accurate fault detection and localization is essential to the efficient and economical operation of ISP networks. In addition, it affects the performance of Internet applications such as VoIP, and online gaming. Fault detection algorithms typically depend on spatial correlation to produce a set of fault hypotheses, the size of which increases by the existence of lost and spurious symptoms, and the overlap among network paths. The network administrator is left with the task of accurately locating and verifying these fault scenarios, which is a tedious and time-consuming task. In this paper, we formulate the problem of finding a set of overlay paths that can debug the set of suspected faulty IP links. These overlay paths are chosen from the set of existing measurement paths, which will make overlay measurements meaningful and useful for fault debugging. We study the overlap among overlay paths using various real-life Internet topologies of the two major service carriers in the U.S. We found that with a reasonable number of concurrent failures, it is possible to identify the location of the IP links faults with 60% to 95% success rate. 
Finally, we identify some interesting research problems in this area.",2008,0, 3830,Fault Tolerance Management for a Hierarchical GridRPC Middleware,"The GridRPC model is well suited for high performance computing on grids thanks to efficiently solving most of the issues raised by geographically and administratively split resources. Because of large scale, long range networks and heterogeneity, Grids are extremely prone to failures. GridRPC middleware are usually managing failures by using 1) TCP or other link network layer provided failure detector, 2) automatic checkpoints of sequential jobs and 3) a centralized stable agent to perform scheduling. Most recent developments have provided some new mechanisms like the optimal Chandra & Toueg & Aguillera failure detector, most numerical libraries now providing their own optimized checkpoint routine and distributed scheduling GridRPC architectures. In this paper we aim at adapting to these novelties by providing the first implementation and evaluation in a grid system of the optimal fault detector, a novel and simple checkpoint API allowing to manage both service provided checkpoint and automatic checkpoint (even for parallel services) and a scheduling hierarchy recovery algorithm tolerating several simultaneous failures. All those mechanisms are implemented and evaluated on a real grid in the DIET middleware.",2008,0, 3831,Fault Tolerance and Recovery of Scientific Workflows on Computational Grids,"In this paper, we describe the design and implementation of two mechanisms for fault-tolerance and recovery for complex scientific workflows on computational grids. We present our algorithms for over-provisioning and migration, which are our primary strategies for fault-tolerance. We consider application performance models, resource reliability models, network latency and bandwidth and queue wait times for batch-queues on compute resources for determining the correct fault-tolerance strategy. Our goal is to balance reliability and performance in the presence of soft real-time constraints like deadlines and expected success probabilities, and to do it in a way that is transparent to scientists. We have evaluated our strategies by developing a Fault-Tolerance and Recovery (FTR) service and deploying it as a part of the Linked Environments for Atmospheric Discovery (LEAD) production infrastructure. Results from real usage scenarios in LEAD show that the failure rate of individual steps in workflows decreases from about 30% to 5% by using our fault-tolerance strategies.",2008,0, 3832,Application Resilience: Making Progress in Spite of Failure,"While measures such as raw compute performance and system capacity continue to be important factors for evaluating cluster performance, such issues as system reliability and application resilience have become increasingly important as cluster sizes rapidly grow. Although efforts to directly improve fault-tolerance are important, it is also essential to accept that application failures will inevitably occur and to ensure that progress is made despite these failures. Application monitoring frameworks are central to providing application resilience. As such, the central theme of this paper is to address the impact that application monitoring detection latency has on the overall system performance. We find that immediate fault detection is not necessary in order to obtain substantial improvement in performance. 
This conclusion is significant because it implies that less complex, highly portable, and predominately less expensive failure detection schemes would provide adequate application resilience.",2008,0, 3833,Integrating Static Analysis into a Secure Software Development Process,"Software content has grown rapidly in all manner of electronic systems. Meanwhile, society has become increasingly dependent upon the safe and secure operation of these electronic systems. We depend on software for our telecommunications, critical infrastructure, avionics, financial systems, medical information systems, automobiles, and more. Unfortunately, our ability to develop secure software has not improved at the same rate, resulting in increasing reliability and security vulnerabilities. The increase in software vulnerability poses a serious threat to national and homeland security. Vulnerabilities have caused or contributed to blackouts, air traffic control failures, traffic light system breaches, and other well publicized security breaches in critical infrastructure. This threat demands new approaches to secure software development. Static analysis has emerged as a promising technology for improving the security of software and systems. Static analysis tools analyze software to find defects that may go undetected using traditional techniques, such as compilers, human code reviews, and testing. A number of limitations, however, have prevented widespread adoption in software development. Static analysis tools often take prohibitively long to execute and are not well integrated into the software development environment. This paper will introduce a new approach - the integrated static analyzer (ISA) - that solves many of these problems. Specific metrics will be provided to demonstrate how the new approach makes the use of static analysis tools practical and effective for everyday embedded software development. In addition to traditional analysis, the ISA approach enables detection of a new class of security flaws not otherwise practicable.",2008,0, 3834,"SNAP, Small-world Network Analysis and Partitioning: An open-source parallel graph framework for the exploration of large-scale networks","We present SNAP (small-world network analysis and partitioning), an open-source graph framework for exploratory study and partitioning of large-scale networks. To illustrate the capability of SNAP, we discuss the design, implementation, and performance of three novel parallel community detection algorithms that optimize modularity, a popular measure for clustering quality in social network analysis. In order to achieve scalable parallel performance, we exploit typical network characteristics of small-world networks, such as the low graph diameter, sparse connectivity, and skewed degree distribution. We conduct an extensive experimental study on real-world graph instances and demonstrate that our parallel schemes, coupled with aggressive algorithm engineering for small-world networks, give significant running time improvements over existing modularity-based clustering heuristics, with little or no loss in clustering quality. For instance, our divisive clustering approach based on approximate edge betweenness centrality is more than two orders of magnitude faster than a competing greedy approach, for a variety of large graph instances on the Sun Fire T2000 multicore system. 
SNAP also contains parallel implementations of fundamental graph-theoretic kernels and topological analysis metrics (e.g., breadth-first search, connected components, vertex and edge centrality) that are optimized for small-world networks. The SNAP framework is extensible; the graph kernels are modular, portable across shared memory multicore and symmetric multiprocessor systems, and simplify the design of high-level domain-specific applications.",2008,0, 3835,Efficient software checking for fault tolerance,"Dramatic increases in the number of transistors that can be integrated on a chip make processors more susceptible to radiation-induced transient errors. For commodity chips which are cost- and energy-constrained, software approaches can play a major role for fault detection because they can be tailored to fit different requirements of reliability and performance. However, software approaches add a significant performance overhead because they replicate the instructions and add checking instructions to compare the results. In order to make software checking approaches more attractive, we use compiler techniques to identify the ""unnecessary"" replicas and checking instructions. In this paper, we present three techniques. The first technique uses boolean logic to identify code patterns that correspond to outcome tolerant branches. The second technique identifies address checks before loads and stores that can be removed with different degrees of fault coverage. The third technique identifies the checking instructions and shadow registers that are unnecessary when the register file is protected in hardware. By combining the three techniques, the overheads of software approaches can be reduced by an average of 50%.",2008,0, 3836,PLP: Towards a realistic and accurate model for communication performances on hierarchical cluster-based systems,"Today, for many reasons, such as the inherent heterogeneity, diversity, and continuous evolution of current computational platforms, writing efficient parallel applications for such systems represents a great challenge. One way to address this problem is to optimize the communications of such applications. Our objective within this work is to design a realistic model able to accurately predict the cost of communication operations on execution environments characterized by both heterogeneity and hierarchical structure. We principally aim to guarantee a good quality of prediction with negligible additional overhead. The proposed model was applied to point-to-point and collective communication operations; experiments on a hierarchical cluster-based system with heterogeneous resources show that the predicted performances are close to the measured ones.",2008,0, 3837,Performance assessment of OMG compliant data distribution middleware,"Event-driven architectures (EDAs) are widely used to make distributed mission critical software systems more efficient and scalable. In the context of EDAs, data distribution service (DDS) is a recent standard by the object management group that offers rich support for quality-of-service and balances predictable behavior and implementation efficiency. The DDS specification does not outline how messages are delivered, so several architectures are nowadays available. This paper focuses on performance assessment of OMG DDS-compliant middleware technologies. 
It provides three contributions to the study of evaluating the performance of DDS implementations: 1) it describes the challenges to be addressed; 2) it proposes possible solutions; and 3) it defines a representative workload scenario for evaluating the performance and scalability of DDS platforms. At the end of the paper, a case study of DDS performance assessment, performed with the proposed benchmark, is presented.",2008,0, 3838,Constructing Node-Disjoint Paths in Biswapped Networks (BSNs),"The recent biswapped networks (BSNs) offer a systematic scheme for building large, scalable, modular, and robust parallel architectures of arbitrary basis networks. BSNs are related to well-known swapped or OTIS networks, and are promising because of their attractive performance attributes including structural symmetry and algorithmic efficiency. In this paper we present an efficient general algorithm for constructing a maximal number of node-disjoint paths between two distinct nodes in a BSN built of an arbitrary basis network and analyze its performance. From this algorithm, we prove that a BSN is maximally fault tolerant if its basis network is connected. As an example application, we show that this algorithm finds the desirable node-disjoint paths of length at most 1.5D+3 in O(log3N) time when applied to an N-node BSN of diameter D built of a cube, and also estimate the average performance concerning the maximum path length by computer simulation.",2008,0, 3839,"Relationships between Test Suites, Faults, and Fault Detection in GUI Testing","Software-testing researchers have long sought recipes for test suites that detect faults well. In the literature, empirical studies of testing techniques abound, yet the ideal technique for detecting the desired kinds of faults in a given situation often remains unclear. This work shows how understanding the context in which testing occurs, in terms of factors likely to influence fault detection, can make evaluations of testing techniques more readily applicable to new situations. We present a methodology for discovering which factors do statistically affect fault detection, and we perform an experiment with a set of test-suite- and fault-related factors in the GUI testing of two fielded, open-source applications. Statement coverage and GUI-event coverage are found to be statistically related to the likelihood of detecting certain kinds of faults.",2008,0, 3840,On the Predictability of Random Tests for Object-Oriented Software,"Intuition suggests that random testing of object-oriented programs should exhibit a significant difference in the number of faults detected by two different runs of equal duration. As a consequence, random testing would be rather unpredictable. We evaluate the variance of the number of faults detected by random testing over time. We present the results of an empirical study that is based on 1215 hours of randomly testing 27 Eiffel classes, each with 30 seeds of the random number generator. Analyzing over 6 million failures triggered during the experiments, the study provides evidence that the relative number of faults detected by random testing over time is predictable but that different runs of the random test case generator detect different faults. The study also shows that random testing quickly finds faults: the first failure is likely to be triggered within 30 seconds.",2008,0, 3841,Prioritizing User-Session-Based Test Cases for Web Applications Testing,"Web applications have rapidly become a critical part of business for many organizations. 
However, increased usage of Web applications has not been reciprocated with corresponding increases in reliability. Unique characteristics, such as quick turnaround time, coupled with growing popularity motivate the need for efficient and effective Web application testing strategies. In this paper, we propose several new test suite prioritization strategies for Web applications and examine whether these strategies can improve the rate of fault detection for three Web applications and their preexisting test suites. We prioritize test suites by test lengths, frequency of appearance of request sequences, and systematic coverage of parameter-values and their interactions. Experimental results show that the proposed prioritization criteria often improve the rate of fault detection of the test suites when compared to random ordering of test cases. In general, the best prioritization metrics either (1) consider frequency of appearance of sequences of requests or (2) systematically cover combinations of parameter-values as early as possible.",2008,0, 3842,The Use of Intra-Release Product Measures in Predicting Release Readiness,"Modern business methods apply micro management techniques to all aspects of systems development. We investigate the use of product measures during the intra-release cycles of an application as a means of assessing release readiness. The measures include those derived from the Chidamber and Kemerer metric suite and some coupling measures of our own. Our research uses successive monthly snapshots during systems re-structuring, maintenance and testing cycles over a two year period on a commercial application written in C++. We examine the prevailing trends which the measures reveal at both component class and application level. By applying criteria to the measures we suggest that it is possible to evaluate the maturity and stability of the application thereby facilitating the project manager in making an informed decision on the application's fitness for release.",2008,0, 3843,An Empirical Study on Bayesian Network-based Approach for Test Case Prioritization,"A cost effective approach to regression testing is to prioritize test cases from a previous version of a software system for the current release. We have previously introduced a new approach for test case prioritization using Bayesian Networks (BN) which integrates different types of information to estimate the probability of each test case finding bugs. In this paper, we enhance our BN-based approach in two ways. First, we introduce a feedback mechanism and a new change information gathering strategy. Second, a comprehensive empirical study is performed to evaluate the performance of the approach and to identify the effects of using different parameters included in the technique. The study is performed on five open source Java objects. The obtained results show relative advantage of using feedback mechanism for some objects in terms of early fault detection. They also provide insight into costs and benefits of the various parameters used in the approach.",2008,0, 3844,The Role of Stability Testing in Heterogeneous Application Environment,"This paper presents an approach to system stability tests performed in Motorola private radio networks (PRN) department. The stability tests are among crucial elements of the department's testing strategy, such as functional testing, regression testing and stress testing. 
The gravity of the subject is illustrated with an example of a serious system defect: a memory leak, which was detected in the Solaris operating system during system stability tests. The paper provides technical background essential to understand the problem, and emphasizes the role of the tests in solving the problem. The following approaches to memory leak detection are discussed: code review, memory debugging and system stability tests. The article presents several guidelines on stability test implementation and mentions the crucial elements of the PRN Department testing strategy: load definition, testing period and the system monitoring method.",2008,0, 3845,Collaborative Target Detection in Wireless Sensor Networks with Reactive Mobility,"Recent years have witnessed the deployments of wireless sensor networks in a class of mission-critical applications such as object detection and tracking. These applications often impose stringent QoS requirements including high detection probability, low false alarm rate and bounded detection delay. Although a dense all-static network may initially meet these QoS requirements, it does not adapt to unpredictable dynamics in network conditions (e.g., coverage holes caused by death of nodes) or physical environments (e.g., changed spatial distribution of events). This paper exploits reactive mobility to improve the target detection performance of wireless sensor networks. In our approach, mobile sensors collaborate with static sensors and move reactively to achieve the required detection performance. Specifically, mobile sensors initially remain stationary and are directed to move toward a possible target only when a detection consensus is reached by a group of sensors. The accuracy of the final detection result is then improved as the measurements of mobile sensors have higher signal-to-noise ratios after the movement. We develop a sensor movement scheduling algorithm that achieves near-optimal system detection performance within a given detection delay bound. The effectiveness of our approach is validated by extensive simulations using the real data traces collected by 23 sensor nodes.",2008,0, 3846,Adaptation of high level behavioral models for stuck-at coverage analysis,"There has been increasing effort in recent years to define test strategies at the behavioral level. Due to the lack of suitable coverage metrics and tools to assess the quality of a testbench, these strategies have not been able to play an important role in stuck-at fault simulation. The work we are presenting here proposes a new coverage metric that employs back-annotation of post-synthesis design properties into pre-synthesis simulation models to estimate the stuck-at fault coverage of a testbench. The effectiveness of this new metric is evaluated for several example circuits. The results show that the new metric provides a good evaluation of high level testbenches for detection of stuck-at faults.",2008,0, 3847,Adaptive Kalman Filtering for anomaly detection in software appliances,"Availability and reliability are often important features of key software appliances such as firewalls, web servers, etc. In this paper we seek to go beyond the simple heartbeat monitoring that is widely used for failover control. We do this by integrating more fine-grained measurements that are readily available on most platforms to detect possible faults or the onset of failures. 
In particular, we evaluate the use of adaptive Kalman Filtering for automated CPU usage prediction that is then used to detect abnormal behaviour. Examples from experimental tests are given.",2008,0, 3848,A Survey of Automated Techniques for Formal Software Verification,"The quality and the correctness of software are often the greatest concern in electronic systems. Formal verification tools can provide a guarantee that a design is free of specific flaws. This paper surveys algorithms that perform automatic static analysis of software to detect programming errors or prove their absence. The three techniques considered are static analysis with abstract domains, model checking, and bounded model checking. A short tutorial on these techniques is provided, highlighting their differences when applied to practical problems. This paper also surveys tools implementing these techniques and describes their merits and shortcomings.",2008,0, 3849,Assessment of a New High-Performance Small-Animal X-Ray Tomograph,"We have developed a new X-ray cone-beam tomograph for in vivo small-animal imaging using a flat panel detector (CMOS technology with a microcolumnar CsI scintillator plate) and a microfocus X-ray source. The geometrical configuration was designed to achieve a spatial resolution of about 12 lpmm with a field of view appropriate for laboratory rodents. In order to achieve high performance with regard to per-animal screening time and cost, the acquisition software takes advantage of the highest frame rate of the detector and performs on-the-fly corrections on the detector raw data. These corrections include geometrical misalignments, sensor non-uniformities, and defective elements. The resulting image is then converted to attenuation values. We measured detector modulation transfer function (MTF), detector stability, system resolution, quality of the reconstructed tomographic images and radiated dose. The system resolution was measured following the standard test method ASTM E 1695 -95. For image quality evaluation, we assessed signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) as a function of the radiated dose. Dose studies for different imaging protocols were performed by introducing TLD dosimeters in representative organs of euthanized laboratory rats. Noise figure, measured as standard deviation, was 50 HU for a dose of 10 cGy. Effective dose with standard research protocols is below 200 mGy, confirming that the system is appropriate for in vivo imaging. Maximum spatial resolution achieved was better than 50 micron. Our experimental results obtained with image quality phantoms as well as with in-vivo studies show that the proposed configuration based on a CMOS flat panel detector and a small micro-focus X-ray tube leads to a compact design that provides good image quality and low radiated dose, and it could be used as an add-on for existing PET or SPECT scanners.",2008,0, 3850,Effective and Open System for Wavelengths Monitoring,"Today there are very few open source optical network monitoring solutions available for network operators or optical paths users. Most of the available optical monitoring solutions are proprietary and single vendor solutions only (firmware). Even the existing software is often limited in functionality to one-domain homogeneous network environment only and extremely difficult to apply in many operators multi-domain heterogeneous environment. 
Unfortunately, network operators as well as users of optical paths (or client trails in SDH) can only successfully monitor and use wavelength infrastructures spanning multiple domains if they have efficient monitoring and fault detection tools available to them. In this paper we present an effective and open source tool for monitoring wavelengths. Our approach recognises the ability of independent networks to set policies while facilitating status and performance monitoring by users interested in optical paths connecting distributed resources.",2008,0, 3851,Notice of Violation of IEEE Publication Principles
Dynamic Binding Framework for Adaptive Web Services,"Notice of Violation of IEEE Publication Principles

""Dynamic Binding Framework for Adaptive Web Services,""
by A. Erradi, P. Maheshwari
in the Proceedings of the Third International Conference on Internet and Web Applications and Services, 2008. ICIW '08, June 2008, pp. 162-167

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

""Foundations of Software Engineering,""
by A. Michlmayr, F. Rosenberg, C. Platzer, M. Treiber, S. Dustdar
in the Proceedings of the 2nd International Workshop on Service Oriented Software Engineering: In Conjunction with the 6th ESEC/FSE Joint Meeting, Sept 2007, pp. 22-28

Dynamic selection and composition of autonomous and loosely-coupled Web services is increasingly used to automate business processes. The typical long-running characteristic of business processes imposes new management challenges such as dynamic adaptation of running process instances. To address this, we developed a policy-based framework, named manageable and adaptable service compositions (MASC), to declaratively specify policies that govern: (1) discovery and selection of services to be used, (2) monitoring to detect the need for adaptation, (3) reconfiguration and adaptation of the process to handle special cases (e.g., context-dependent behavior) and recover from typical faults in service-based processes. The identified constructs are executed by a lightweight service-oriented management middleware named MASC middleware. We implemented a MASC proof-of-concept prototype and evaluated it on stock trading case study scenarios. We conducted extensive studies to demonstrate the feasibility of the proposed techniques and illustrate the benefits of our approach in providing adaptive composite services using the policy-based approach. Our performance and scalability studies indicate that MASC middleware is scalable and the introduced overhead is acceptable.",2008,0, 3853,Prediction-based Haptic Data Reduction and Compression in Tele-Mentoring Systems,"In this paper, a novel haptic data reduction and compression technique to reduce haptic data traffic in networked haptic tele-mentoring systems is presented. The suggested method follows a two-step procedure: (1) haptic data packets are not transmitted when they can be predicted within a predefined tolerable error; otherwise, (2) data packets are compressed prior to transmission. The prediction technique relies on the least-squares method. Knowledge from human haptic perception is incorporated into the architecture to assess the perceptual quality of the prediction results. Packet-payload compression is performed using uniform quantization and adaptive Golomb-Rice codes. 
The preliminary experimental results demonstrate the algorithm's effectiveness as great haptic data reduction and compression is achieved, while preserving the overall quality of the tele-mentoring environment.",2008,0, 3854,Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension,"The recently developed H.264/AVC video codec with scalable video coding (SVC) extension, compresses non-scalable (single-layer) and scalable video significantly more efficiently than MPEG-4 Part 2. Since the traffic characteristics of encoded video have a significant impact on its network transport, we examine the bit rate-distortion and bit rate variability-distortion performance of single-layer video traffic of the H.264/AVC codec and SVC extension using long CIF resolution videos. We also compare the traffic characteristics of the hierarchical B frames (SVC) versus classical B frames. In addition, we examine the impact of frame size smoothing on the video traffic to mitigate the effect of bit rate variabilities. We find that compared to MPEG-4 Part 2, the H.264/AVC codec and SVC extension achieve lower average bit rates at the expense of significantly increased traffic variabilities that remain at a high level even with smoothing. Through simulations we investigate the implications of this increase in rate variability on (i) frame losses when transmitting a single video, and (ii) on a bufferless statistical multiplexing scenario with restricted link capacity and information loss. We find increased frame losses, and rate-distortion/rate-variability/encoding complexity tradeoffs. We conclude that solely assessing bit rate-distortion improvements of video encoder technologies is not sufficient to predict the performance in specific networked application scenarios.",2008,0, 3855,Integrated Evaluation Model for Software Process Modeling Methods,"In order to help developers choose suitable modeling method according to specific modeling environment and requirement for achieving the best modeling effect, it is important to make reasonable assessment for software process modeling methods. An evaluation system for software process modeling methods is presented, and an evaluation model for evaluating modeling methods is established using method that combines uncertain analytic hierarchy process (AHP) with fuzzy integrated evaluation technology. Further, by an example it proves that using the model can evaluate modeling methods soundly, so it has practical value in software project development.",2008,0, 3856,Development of a Weather Forecasting Code: A Case Study,"Computational science is increasingly supporting advances in scientific and engineering knowledge. The unique constraints of these types of projects result in a development process that differs from the process more traditional information technology projects use. This article reports the results of the sixth case study conducted under the support of the Darpa High Productivity Computing Systems Program. The case study aimed to investigate the technical challenges of code development in this environment, understand the use of development tools, and document the findings as concrete lessons learned for other developers' benefit. The project studied here is a major component of a weather forecasting system of systems. It includes complex behavior and interaction of several individual physical systems (such as the atmosphere and the ocean). 
This article describes the development of the code and presents important lessons learned.",2008,0, 3857,Structure and Interpretation of Computer Programs,"Call graphs depict the static, caller-callee relation between ""functions"" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various inter-procedural analyses are performed and are an integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to ask if call graphs exhibit any intrinsic graph theoretic features - across versions, program domains and source languages. This work is an attempt to answer these questions: we present and investigate a set of meaningful graph measures that help us understand call graphs better; we establish how these measures correlate, if at all, across different languages and program domains; we also assess the overall, language independent software quality by suitably interpreting these measures.",2008,0, 3858,Full automated packaging of high-power diode laser bars,"Fully automated packaging of high power diode laser bars on passive or micro-channel heat sinks requires high precision measurement and handling technology. The metallurgic structure of the solder and intrinsic stress of the laser bar are largely influenced by the conditions of the mounting process. To avoid thermal deterioration the tolerance for the overhang between laser bar and heat sink is about a few microns maximum. Due to the growing number of applications and systems, there is a need for automated manufacturing, not just for cost efficiency but also for yield and product quality reasons. In this paper we describe the demands on fully automated packaging, the realized design and finally the test results of bonded devices. The design of the automated bonding system includes an air-cushioned, 8-axis system on a granite frame. Each laser bar is picked up by a vacuum tool from a special tray or directly out of the gel pak. The reflow oven contains a ceramic heater with low thermal capacity and reaches a maximum of 400 °C with a heating rate up to 100 K/s and a cooling rate up to 20 K/s. It is suitable for all common types of heat sinks and submounts which are fixed onto the heater by vacuum. The soldering process is performed under atmospheric pressure while the oven is filled with inert gas. Additionally, reactive gases can be used to perform the reduction of the solder. Three high precision optical sensors for distance measurement detect the relative position of laser bar and heat sink. The high precision alignment uses a special algorithm for final positioning. For the alignment of the tilt and roll angles between the laser bar and the heat sink two optical distance sensors and the two goniometers below the oven are used. To detect the angular orientation of the heat sink's upper surface a downward-looking optical sensor system is used. An upward-pointing optical sensor is used to measure the orientation of the laser bar's lower side. These measurements provide the data needed to calculate the angles that the heat sink needs to be tilted and rolled by the two goniometers, in order to get its upper surface parallel to the lower surface of the laser bar. 
For the measurement of the laser bar overhang and yaw an optical distance sensor is mounted in front of the oven. Overhang and yaw are aligned by using high precision rotary and translation stages. A software tool calculates the displacement necessary to get a parallel orientation and a desired overhang of the laser bar relative to the heat sink. A post bonding accuracy of +/- 1 micron and of +/- 0,2 mrad respectively is achieved. To demonstrate the performance and reliability of the bonding system the bonded devices were characterized by tests like smile test, shear test, burn in test. The results will be presented as well as additional aspects of automated manufacturing like part identification and part tracking.",2008,0, 3859,Clustering Analysis for the Management of Self-Monitoring Device Networks,"The increasing computing and communication capabilities of multi-function devices (MFDs) have enabled networks of such devices to provide value-added services. This has placed stringent QoS requirements on the operations of these device networks. This paper investigates how the computational capabilities of the devices in the network can be harnessed to achieve self-monitoring and QoS management. Specifically, the paper investigates the application of clustering analysis for detecting anomalies and trends in events generated during device operation, and presents a novel decentralized cluster and anomaly detection algorithm. The paper also describes how the algorithm can be implemented within a device overlay network, and demonstrates its performance and utility using simulated as well as real workloads.",2008,0, 3860,Goal-Centric Traceability: Using Virtual Plumblines to Maintain Critical Systemic Qualities,"Successful software development involves the elicitation, implementation, and management of critical systemic requirements related to qualities such as security, usability, and performance. Unfortunately, even when such qualities are carefully incorporated into the initial design and implemented code, there are no guarantees that they will be consistently maintained throughout the lifetime of the software system. Even though it is well known that system qualities tend to erode as functional and environmental changes are introduced, existing regression testing techniques are primarily designed to test the impact of change upon system functionality rather than to evaluate how it might affect more global qualities. The concept of using goal-centric traceability to establish relationships between a set of strategically placed assessment models and system goals is introduced. This paper describes the process, algorithms, and techniques for utilizing goal models to establish executable traces between goals and assessment models, detect change impact points through the use of automated traceability techniques, propagate impact events, and assess the impact of change upon systemic qualities. The approach is illustrated through two case studies.",2008,0, 3861,Module Prototype for Online Failure Prediction for the IBM Blue Gene/L,"The growing complexity of scientific applications has led to the design and deployment of large-scale parallel systems. The IBM Blue Gene/L can hold in excess of 200 K processors and it has been designed for high performance and reliability. 
However, failures in this large-scale parallel system are a major concern, since it has been demonstrated that a failure will significantly reduce the performance of the system.",2008,0, 3862,Quality of service investigation for multimedia transmission over UWB networks,"In this paper, the Quality of Service (QoS) for multimedia traffic of the Medium Access Control (MAC) protocol for Ultra Wide-Band (UWB) networks is investigated. A protocol is proposed to enhance the network performance and increase its capacity. This enhancement comes from using Wise Algorithm for Link Admission Control (WALAC). The QoS of multimedia transmission is determined in terms of average delay, loss probability, utilization, and the network capacity.",2008,0, 3863,Performance engineering of replica voting protocols for high assurance data collection systems,"Real-time data collection in a distributed embedded system requires dealing with failures such as data corruptions by malicious devices and arbitrary message delays in the network. Replication of data collection devices is employed to deal with such failures, with voting among the replica devices to move a correct data to the end-user. Here, the data being voted upon can be large-sized and/or take long time to be compiled (such as images in a terrain surveillance system and transaction histories in an intrusion detection system). The goal of our paper is to engineer the voting protocols to achieve good performance while meeting the reliability requirements of data delivery in a high assurance setting. The performance metrics are the data transfer efficiency (DTE) and the time-to-complete a data delivery (TTC). DTE captures the network bandwidth wasted and/or the energy drain in wireless-connected devices; whereas, TTC depicts the degradation in user-level QoS due to delayed and/or missed data deliveries. So, improving both DTE and TTC is a goal of our performance engineering exercise. Our protocol-level optimizations focus on reducing: i) the movement of user-level data between voters, ii) the number of voting actions/messages generated, and iii) the latency caused by the voting itself. The paper describes these optimizations, along with the experimental results from a prototype voting system.",2008,0, 3864,Qos based handover layer for a multi-RAT mobile terminal in UMTS and Wi-MAX networks,"To meet the growing demand for broadband wireless connectivity, integrating 3G and IEEE 802.16 networks is a clear and viable solution. In this paper, we present a novel mobile client architecture called HDEL (handover decision and execution layer) that provides seamless mobility while maintaining connectivity across heterogeneous wireless networks and provides better quality of service (QoS) support. Our Proposed HDEL will monitor the QoS parameters and ensure seamless mobility to support continuous data transfer while the subscriber is moving across UMTS and Wi-MAX networks. This HDEL consists of different functional layers that are used for measuring the basic signal strength as well as the QoS parameters like delay, jitter, packet loss and throughput. To ensure QoS support during seamless handover, the proposed HDEL incorporates the procedures for activating a QoS session and ensures proper translation for network-specific QoS classification.",2008,0, 3865,A QoS routing protocol for delay-sensitive applications in mobile ad hoc networks,"The Ad hoc On Demand Distance Vector (AODV) routing protocol is intended for use by mobile nodes in ad hoc networks. 
To provide quality of service, extensions can be added to the messages used during route discovery. These extensions specify the service requirements which must be met by nodes rebroadcasting a route request (RREQ) or returning a route reply (RREP) for a destination. In order to provide quality delivery to delay sensitive applications such as voice and video, it is extremely important that mobile ad hoc networks provide quality of service (QoS) support in terms of bandwidth and delay. Most existing routing protocols for mobile ad hoc networks are designed to search for the shortest path with minimum hop count. However, the shortest routes do not always provide the best performance, especially when there are congested nodes along these routes. In this paper we propose an on-demand, delay-based quality of service (QoS) routing protocol (AODV-D) for mobile ad hoc networks to ensure that delay does not exceed a maximum value. This protocol takes into consideration MAC layer channel contention information and the number of packets in the interface queue in addition to the minimum hop count. The protocol is implemented and simulated using the GlomoSim simulator. Performance comparisons of the proposed AODV-D protocol against AODV and QS-AODV are presented and show that the proposed algorithm performs well.",2008,0, 3866,Pre-emption based call admission control with QoS and dynamic bandwidth reservation for cellular networks,"Call admission control (CAC) is a very important process in the provision of good quality of service (QoS) in cellular mobile networks. With micro/pico cellular architectures that are now used to provide higher capacity, the cell size decreases with a drastic increase in the handoff rate. In this paper, we present modeling and simulation results to help better understand the performance and efficiency of CAC in cellular networks. Handoff prioritization is a common characteristic, which is achieved through the threshold bandwidth reservation policy framework. Combined with this framework, we use a pre-emptive call admission scheme and elastic bandwidth allocation for data calls in order to achieve near-optimal QoS. In this paper, we also use a genetic algorithm (GA) based approach to optimize the fitness function, which we obtained by calculating the mean square error between predicted and actual rejection values. The predicted values are calculated using a linear model, which relates the rejection ratios with different threshold values.",2008,0, 3867,Online Dependability Assessment through Runtime Monitoring and Prediction,"Computer science and engineering have, in comparison to other areas, focused on abstraction and technology, chasing the ever-changing artifacts, namely, computer and communication hardware and software. At the beginning the focus was on functionality, a decade later on performance; next the focus was widened to include cost minimization and later quality-of-service. The result is that the time for modeling, measurements and assessment is limited and once some hypotheses are verified in part, they frequently become obsolete due to new technologies, environments and applications. Not surprisingly, the computer science community can boast a relatively small number of useful ""laws"" and principles, not only due to this dynamicity but also due to the immense diversity of applications. 
Nevertheless, we are an optimistic folk, always trying to solve all the world's problems, generalize to death (proposing rarely useful general models, meta-models and meta-meta-models) as in, for example, the software development process, where there are no limitations so one can easily promise anything and defy any laws of feasibility or reasonableness (unlike hardware engineers who are constrained by physics).",2008,0, 3868,Checklist Inspections and Modifications: Applying Bloom's Taxonomy to Categorise Developer Comprehension,"Software maintenance can consume up to 70% of the effort spent on a software project, with more than half of this devoted to understanding the system. Performing a software inspection is expected to contribute to comprehension of the software. The question is: at what cognition levels do novice developers operate during a checklist-based code inspection followed by a code modification? This paper reports on a pilot study of Bloom's taxonomy levels observed during a checklist-based inspection and while adding new functionality unrelated to the defects detected. Bloom's taxonomy was used to categorise think-aloud data recorded while performing these activities. Results show that the checklist-based reading technique enables inspectors to function at the highest cognitive level within the taxonomy and indicate that using inspections with novice developers to improve cognition and understanding may assist in integrating developers into existing project teams.",2008,0, 3869,Evaluating the Reference and Representation of Domain Concepts in APIs,"As libraries are the most widespread form of software reuse, the usability of their APIs substantially influences the productivity of programmers in all software development phases. In this paper we develop a framework to characterize domain-specific APIs along two directions: 1) how can the API users reference the domain concepts implemented by the API; 2) how are the domain concepts internally represented in the API. We define metrics that allow the API developer for example to assess the conceptual complexity of his API and the non-uniformity and ambiguities introduced by the API's internal representations of domain concepts, which makes developing and maintaining software that uses the library difficult and error-prone. The aim is to be able to predict these difficulties already during the development of the API, and based on this feedback be able to develop better APIs up front, which will reduce the risks of these difficulties later.",2008,0, 3870,Multi Background Memory Testing,"To cover pattern sensitive faults (PSF), multiple run March test algorithms have been used. The key element of multiple run March test algorithms is the set of memory backgrounds. In this paper a constructive algorithm for selecting an optimal set of memory backgrounds is proposed. The background selection is based on binary vector dissimilarity measures. Optimal solutions have been obtained for the cases of two-, three- and four-run memory testing. Theoretical and experimental analysis has been carried out, which demonstrates the efficiency of the proposed technique.",2008,0, 3871,A novel approach for image fusion based on Markov Random Fields,"The Markov random field (MRF) model is a powerful tool for modeling image characteristics accurately and has been successfully applied to a large number of image processing applications. This paper investigates the problem of image fusion based on MRF models. 
A fusion algorithm under the maximum a posteriori (MAP) criterion is developed under this model. Experimental results are provided to demonstrate the improvement of fusion performance by our algorithm.,2008,0, 3872,A scalable Software Defined Radio receiver for video and audio broadcasting on personal computers,"Software defined radios (SDR) can be found in every area of wireless signal processing. SDRs realized so far are mostly part of communication technology, including audio broadcast receivers. Image- and video processing in SDRs is not common due to the fact that this type of real-time application requires a huge amount of processing resources. It can be foreseen that SDRs for video processing will be the next attractive approach for flexible and cost-effective media systems. We started with functional diagrams of the required modules for TV receivers, selected suitable algorithms to estimate their computational complexity and investigated their resource-quality scaling properties. Thereafter benchmark tests were done for a variety of common general purpose processors (GPPs), showing that real-time performance will be possible in the near future.",2008,0, 3873,A fast CIS still image stabilization method without parallax and moving object problems,"In this paper, we propose a fast still image stabilization method for CMOS image sensors (CIS) which can solve the parallax and moving object problems in image registration. CIS still image stabilization is a software technique that prevents motion blurs caused by minute shaking of a CIS camera, and it is realized by seamlessly registering several rapidly exposed CIS images. Our method consists of two stages of image registration: a global registration and then, a local registration. Since a CIS image can be distorted by the motion of a CIS camera having a rolling shutter mechanism, we present a way of removing CIS distortions in the global registration. Also, the local misalignments are reduced using a mesh-based local registration. Since the parallax or moving object regions cannot be correctly aligned in the image registration process, we propose a two-stage post-processing method that removes the misaligned regions by measuring the registration quality based on the labeling method. The computational load in the image stabilization is reduced by selectively performing our local registration based on a real-time global motion estimation method. The experiment results show the necessity of CIS distortion analysis in the CIS image registration and the effectiveness of our stabilization method even for the moving object and parallax problems.",2008,0, 3874,A Dangerousness-Based Investigation Model for Security Event Management,"The current landscape of security management solutions for large scale networks is limited by the lack of supporting approaches capable to deal with the huge number of alarms and events that are generated on current networks. In this paper we propose a security management architecture, capable to reconstruct causal dependencies from captured network and service alarms. The key idea is based on mapping events in semantic spaces, where a novel algorithm can determine such dependencies. We have implemented a prototype and tested it on a operational network within an outsourced security management suite protecting multiple networks.",2008,0, 3875,Improving Learning Objects Quality with Learning Styles,"The traditional learning style theory is being used in the development of e-learning materials progressively. 
Usually the process of working with learning styles begins with the development of tests or questionnaires that focus on determining students' learning preferences, and then preparing the material accordingly. But once the content is implemented there is no tool to measure or estimate whether it complies with the desired learning styles. We use a different approach: in this paper we perform an evaluation of existing pedagogical materials in order to determine the dominant learning style theory, if there is any. The experimental results show the classification of all the contents.",2008,0, 3876,New technique for fault location in interconnected networks using phasor measurement unit,"Application of PMU for fault location is conducted through a driven algorithm and is applied to different study systems through computer numerical simulation. The algorithm estimates the fault location based on synchronized phasors from both ends of the transmission line, whether phasor measurement units are installed at both ends or at only one end, in which case the phasors at the other end are calculated from the synchronized phasors of the other side. This algorithm allows for accurate estimation of fault location irrespective of fault resistance, load currents, and source impedance. A computer simulation of the transmission line under study, using the PSCAD program, with various fault types and different locations is carried out. A modal transformation is used in the algorithm. Different fault types are simulated at different fault locations on more than one line in the Egyptian network, which has phasor measurement units installed according to a selected allocating technique. The results obtained from applying the considered technique show high levels of accuracy in locating faults of different types. Hence, it is strongly recommended to use the PMU in the Egyptian Network.",2008,0, 3877,Grey Theory Based Evaluation for SAR Image Denoising,"To evaluate the quality of denoised SAR images, this paper discusses some performance parameters, and proposes a grey-theory-based method. The main idea of the method is to extract some comparative sequences and a reference sequence, standing for the quality of the denoised SAR images and of the noise-free image respectively, once all the denoised images involved are considered. Then, an overall performance index is acquired from the grey relational degrees between the two kinds of sequences. A feature of the method is that it does not require any real images and is inexpensive in processing time. Pilot experiments show that the method is effective and precise. Potential applications include automatic selection of the optimal algorithm and intellectual comparison among SAR denoising algorithms.",2008,0, 3878,A Generic Metamodel For Security Policies Mutation,"We present a new approach for mutation analysis of security policy test cases. We propose a metamodel that provides a generic representation of security policy access control models and define a set of mutation operators at this generic level. We use Kermeta to build the metamodel and implement the mutation operators. 
We also illustrate our approach with two successful instantiations of this metamodel: we defined policies with RBAC and OrBAC and mutated these policies.",2008,0, 3879,Building a reliable internet core using soft error prone electronics,"This paper describes a methodology for building a reliable internet core router that considers the vulnerability of its electronic components to single event upset (SEU). It begins with a set of meaningful system level metrics that can be related to product reliability requirements. A specification is then defined that can be effectively used during the system architecture, silicon and software design process. The system can then be modeled at an early stage to support design decisions and trade-offs related to potentially costly mitigation strategies. The design loop is closed with an accelerated measurement technique using neutron beam irradiation to confirm that the final product meets the specification.",2008,0, 3880,The Checkpoint Interval Optimization of Kernel-Level Rollback Recovery Based on the Embedded Mobile Computing System,"Due to the limited resources of embedded mobile computing systems, such as wearable computers, PDAs or sensor nodes, reducing the overhead of software-implemented fault tolerance mechanisms is a key factor in reliability design. Two checkpoint interval optimization techniques for a kernel-level rollback recovery mechanism are discussed. The step checkpointing algorithm modulates checkpoint intervals on-line, exploiting the characteristic of software- and hardware-environment-dependent systems that the failure rate fluctuates sharply shortly after the system fails. The checkpoint size monitoring and threshold-control technique adjusts the checkpoint interval by predicting the amount of data to be saved. Combining these two techniques can effectively improve the performance of the embedded mobile computer system.",2008,0, 3881,Test results for the WAAS Signal Quality Monitor,"The signal quality monitor (SQM) is an integrity monitor for the wide area augmentation system (WAAS). The monitor detects L1 signal waveform deformation of a GPS or a geosynchronous (GEO) satellite monitored by WAAS should that event occur. When a signal deformation occurs, the L1 correlation function measured by the receiver becomes distorted. The distortion will result in an error in the L1 pseudorange. The size of the error depends on the design characteristics of the user receiver. This paper describes test results for the WAAS SQM conducted using prototype software. There are two groups of test cases: nominal testing and negative path testing. For nominal test cases, recorded data are collected from a test facility in four 5-day periods. These four data sets include SQM correlation values for SV-receiver pairs, and satellite error bounds for satellites. They are used as input to the prototype. The prototype processes these data sets, executes the algorithm, and records test results. Parameters such as the ""maximum median-adjusted detection metric over threshold"" (i.e., the maximum detection test), ""UDRE forwarded from upstream integrity monitors,"" and ""UDRE supported by SQM"" are shown and described. The magnitude of the maximum detection test for all GPS and GEO satellites is also shown. For negative path testing, this paper describes two example simulated signal deformation test cases. A 2-day data set is collected from the prototype. 
A few example ICAO signal deformations are simulated based on this data set and are inserted in different time slots in the 2-day period. The correlator measurements for selected satellites are pre-processed to simulate the signal deformation. The results demonstrate the sensitivity of the Signal Quality Monitor to the simulated deformation, and show when the event is detected and subsequently cleared. They also show that the SQM will not adversely affect WAAS performance.",2008,0, 3882,System-Level Performance Estimation for Application-Specific MPSoC Interconnect Synthesis,"We present a framework for development of streaming applications as concurrent software modules running on multi-processor systems-on-chip (MPSoC). We propose an iterative design space exploration mechanism to customize MPSoC architecture for given applications. Central to the exploration engine is our system-level performance estimation methodology, which both quickly and accurately determines the quality of candidate architectures. We implemented a number of streaming applications on candidate architectures that were emulated on an FPGA. Hardware measurements show that our system-level performance estimation method incurs only 15% error in predicting application throughput. More importantly, it always correctly guides design space exploration by achieving 100% fidelity in quality-ranking candidate architectures. Compared to behavioral simulation of compiled code, our system-level estimator runs more than 12 times faster, and requires 7 times less memory.",2008,0, 3883,Measuring Package Cohesion Based on Context,"Packages play a critical role in understanding, constructing and maintaining large-scale software systems. As an important design attribute, cohesion can be used to predict the quality of packages. Although a number of package cohesion metrics have been proposed in the last decade, they mainly converge on intra-package data dependences between components, which are inadequate to represent the semantics of packages in many cases. To address this problem, we propose a new package cohesion metric called SCC on the assumption that two components are related tightly if they have similar contexts. Compared to existing works, SCC uses the common context of two components to infer whether or not they are closely related, which involves both inter- and intra-package data dependences. It is hence able to reveal semantic relations between components. We demonstrate the effectiveness of SCC through case studies.",2008,0, 3884,ORTEGA: An Efficient and Flexible Software Fault Tolerance Architecture for Real-Time Control Systems,"Fault tolerance is an important aspect in real-time computing. In real-time control systems, tasks could be faulty for various reasons. Faulty tasks may compromise the performance and safety of the whole system and even cause disastrous consequences. In this paper, we describe ORTEGA (On-demand Real-TimE GuArd), a new software fault tolerance architecture for real-time control systems. ORTEGA has high fault coverage and reliability. Compared with existing real-time fault tolerance architectures, such as Simplex, ORTEGA allows more efficient resource utilization and enhances flexibility. These advantages are achieved through the on-demand detection and recovery of faulty tasks. 
ORTEGA is applicable to most industrial control applications where both efficient resource usage and high fault coverage are desired.",2008,0, 3885,Predictable Code and Data Paging for Real Time Systems,"There is a need for using virtual memory in real-time applications: using virtual addressing provides isolation between concurrent processes; in addition, paging allows the execution of applications whose size is larger than main memory capacity, which is useful in embedded systems where main memory is expensive and thus scarce. However, virtual memory is generally avoided when developing real-time and embedded applications due to predictability issues. In this paper we propose a predictable paging system in which the page loading and page eviction points are selected at compile-time. The contents of main memory are selected using an Integer Linear Programming (ILP) formulation. Our approach is applied to code, static data and stack regions of individual tasks. We show that the time required for selecting memory contents is reasonable for all applications including the largest ones, demonstrating the scalability of our approach. Experimental results compare our approach with a previous one, based on graph coloring. It shows that the quality of page allocation is generally improved, with an average improvement of 30% over the previous approach. Another comparison with a state-of-the-art demand-paging system shows that predictability does not come at the price of performance loss.",2008,0, 3886,Reputation-Based Service Discovery in Multi-agents Systems,"Reputation has recently received considerable attention within a number of disciplines such as distributed artificial intelligence, economics, and evolutionary biology, among others. Most papers about reputation provide an intuitive approach to reputation which appeals to common experiences without clarifying whether their use of reputation is similar or different from those used by others. The DF provides a Yellow Pages service. Agents in a FIPA-compliant agent system can provide services to others, and store these services in the DF of the multiagent system. However, the existing DF cannot detect fake services which are registered by malicious agents. So, a user may retrieve these faulty services. In this paper, we analyze the DF's problem and propose a solution. We describe a reputation mechanism for detecting these fake services. The reputation function assumes the presence of other agents who can provide ratings for agents that are reflective of the performance or behavior of the corresponding agents.",2008,0, 3887,Towards autonomic risk-aware security configuration,"Security of a network depends on a number of dynamically changing factors. These include the emergence of new vulnerabilities and threats, policy structure and network traffic. Due to the dynamic nature of these factors, identifying security metrics that objectively measure the quality of security configuration poses a major challenge. Moreover, this evaluation must be done dynamically to handle real time changes in the threat toward the network. In this paper, we extend our security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerabilities of remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity, and finally propagation of an attack within the network.
We have implemented this framework as a user-friendly tool called Risk based prOactive seCurity cOnfiguration maNAger (ROCONA) and showed how this tool simplifies security configuration management using risk measurement and mitigation.",2008,0, 3888,Real-time problem localization for synchronous transactions in HTTP-based composite enterprise applications,"Loosely-coupled composite enterprise applications based on modern Web technologies are becoming increasingly popular. While composing such applications is appealing for a number of reasons, the distributed nature of the applications makes problem determination difficult. Stringent service level agreements in these environments require rapid localization of failing and poorly performing services. We present in this paper a method that performs real-time transaction level problem determination by tracking synchronous transaction flows in HTTP based composite enterprise applications. Our method relies on instrumentation of service requests and responses to transmit downstream path and monitoring information in real time. Further, our method applies change-point based techniques on monitored information at the point of origin of a transaction, and quickly detects anomalies in the performance of invoked services. Since our method performs transaction level monitoring, it avoids the pitfalls associated with techniques that use aggregate performance metrics. Additionally, since we use change-point based techniques to detect problems, our method is more robust than error-prone static threshold based techniques.",2008,0, 3889,Assessing Web Applications Consistently: A Context Information Approach,"In order to assess Web applications in a more consistent way we have to deal not only with non-functional requirement specification, measurement and evaluation (M&E) information but also with the context information about the evaluation project. When organizations record the collected data from M&E projects, the context information is very often neglected. This can jeopardize the validity of comparisons among similar evaluation projects. We highlight this concern by introducing a quality in use assessment scenario. Then, we propose a solution by representing the context information as a new add-in to the INCAMI M&E framework. Finally, we show how context information can improve Web application evaluations, particularly, data analysis and recommendation processes.",2008,0, 3890,Available Bandwidth Estimation in Wireless Ad Hoc Network: Accuracy and Probing Time,"Accuracy of the estimated available bandwidth and the convergence delay of the algorithm are researchers' challenges in wireless network measurements. Currently a variety of tools have been developed to estimate the end-to-end network available bandwidth, such as TOPP (train of packet pair), SLoPS (self loading periodic stream), Spruce and Variable Packet Size, etc. These tools are important to improve the QoS (quality of service) of demanding applications. However, the performances of the previous tools are very different in terms of probing time and estimation accuracy. In this paper we study two active probing techniques: TOPP and SLoPS. TOPP provides more accurate estimation of available bandwidth than the second one. However, the SLoPS technique has a faster response delay than TOPP. Thus, we decided to combine the two techniques to get benefit from their respective advantages and propose a new and fast bandwidth estimation technique providing the same accuracy as TOPP. To validate this new technique, NS2 was used.
The performance of the SLOT technique is evaluated and the obtained results are analysed by MATLAB software and compared with the TOPP and the SLoPS ones. We show that SLOT has a shorter probing time than the TOPP one, and it provides a more accurate estimate of the available bandwidth than the SLoPS one.",2008,0, 3891,Specifying Flexible Charging Rules for Composable Services,"Where services are offered on a commercial basis, the manner in which charges for service usage are calculated is of key importance. Services typically have associated with them a charging scheme specifying the rules for charge calculation; schemes can range from the simple (such as a fixed charge per service invocation) to the highly complex (where charges are calculated dynamically in order to influence customer demand and thereby optimise overall system performance). Typically, charging schemes are manually configured and verified prior to services being made available to customers - typically, a time-consuming and expensive process. In environments where service compositions can be rapidly built and offered to customers, manual specification of a charging scheme for the service composition becomes untenable. In this paper we describe how charge modification rules associated with individual services can be used to flexibly govern how a service is charged for when it is used in the context of a composed service.",2008,0, 3892,wsrbench: An On-Line Tool for Robustness Benchmarking,"Testing Web services for robustness is a difficult task. In fact, existing development support tools do not provide any practical means to assess Web services robustness in the presence of erroneous inputs. Previous works proposed that Web services robustness testing should be based on a set of robustness tests (i.e., invalid Web services call parameters) that are applied in order to discover both programming and design errors. Web services can be classified based on the failure modes observed. In this paper we present and discuss the architecture and use of an on-line tool that provides an easy interface for Web services robustness testing. This tool is publicly available and can be used by both web services providers (to assess the robustness of their Web services code) and consumers (to select the services that best fit their requirements). The tool is demonstrated by testing several Web services available on the Internet.",2008,0, 3893,Modeling Service Quality for Dynamic QoS Publishing,"As Web services are widely adopted, the nonfunctional requirements (i.e. QoS constraints) have become a significant factor in implementing trustworthy and predictable service applications. In this paper, we analyze the QoS publishing necessity, and categorize QoS parameters to refine their characteristics. Then we propose a conceptual publishing model, which is adaptive for the distributed and decentralized service-oriented computing environment. We further introduce the perception service quality measurement models, and extend the P-E model for web services. The extended model takes both dynamic QoS parameters and customer expectations into consideration. A compensation mechanism is also incorporated into the model for trade-off when nonfunctional requirements cannot be fully satisfied.",2008,0, 3894,Towards a User-perceived Service Availability Metric,"Web services availability has been regarded as one of the key properties for (critical) service-oriented applications.
Based on analyzing the limitations of current metrics for ""User-perceived"" availability, we propose a service status based availability metric and a corresponding estimation approach. Experiments demonstrate that this metric and the corresponding estimation approach can provide a reasonable measurement of web services' availability.",2008,0, 3895,3C-Silicon Carbide Nanowire FET: An Experimental and Theoretical Approach,"Experimental and simulated I-V characteristics of silicon carbide (SiC) nanowire-based field-effect transistors (NWFETs) are presented. SiC NWs were fabricated by using the vapor-liquid-solid mechanism in a chemical vapor deposition system. The diameter of the fabricated SiC NWs varied from 60 up to 100 nm while they were some micrometers long. Their I-V characteristics were simulated with SILVACO software, and special attention was paid to explore the role of the NW doping level and the NW/dielectric interface quality. The fabricated SiC-based NWFETs exhibit a mediocre gating effect and were not switched off by varying the gate voltage. Based on the simulations, this is a result of the high unintentional doping (estimated at 1×10^19 cm^-3) and the poor NW/dielectric interface quality. Moreover, a homemade algorithm was used to investigate the ideal properties of SiC-based NWFETs in the ballistic transport regime, with NW lengths of 5-15 nm and a constant diameter of 4 nm for which the carrier transport is fully controlled by quantum effects. This algorithm self-consistently solves the Poisson equation with the quantum nonequilibrium Green function formalism. In the ballistic regime, devices with undoped SiC NWs exhibit superior theoretical performances (transconductance: ~43.2×10^-6 A/V and I_ON/I_OFF = 1.6×10^5 for a device with 9-nm NW length) based on their simulated characteristics.",2008,0, 3896,Statistical Measurement Analysis of Automated Test Systems,"This paper addressed the application of the statistical paired t-test of observed parametric data, nested GR&R analysis for evaluating ATS and TPS capability, and ANOVA analysis on ATS instrument measurement variation. The applicability of these methods provides a sound approach for analyzing data results and can be further expanded across other TPSs with additional data for improved estimation. In addition, further research should address additional instruments and UUTs to model the entire ATS.",2008,0, 3897,Defect Prevention and Detection in Software for Automated Test Equipment,"This paper describes a test application development tool designed with a high degree of defect prevention and detection built-in. While this tool is specific to a particular tester, the PT3800, the approach that it uses may be employed for other ATE. The PT3800 tester is the successor of a more than 20 year-old tester, the PT3300. The development of the PT3800 provided an opportunity to improve the test application development experience. The result was the creation of a test application development tool known as the PT3800 AM creation, revision and archiving tool, or PACRAT (AM refers to automated media, specifically test application source code). This paper details the built-in defect prevention and detection techniques employed by PACRAT.",2008,0, 3898,APART: Low Cost Active Replication for Multi-tier Data Acquisition Systems,"This paper proposes APART (a posteriori active replication), a novel active replication protocol specifically tailored for multi-tier data acquisition systems.
Unlike existing active replication solutions, APART does not rely on a-priori coordination schemes determining a same schedule of events across all the replicas, but it ensures replicas consistency by means of an a-posteriori reconciliation phase. The latter is triggered only in case the replicated servers externalize their state by producing an output event towards a different tier. On one hand, this allows coping with non-deterministic replicas, unlike existing active replication approaches. On the other hand, it allows attaining striking performance gains in the case of silent replicated servers, which only sporadically, yet unpredictably, produce output events in response to the receipt of a (possibly large) volume of input messages. This is a common scenario in data acquisition systems, where sink processes, which filter and/or correlate incoming sensor data, produce output messages only if some application relevant event is detected. Further, the APART replica reconciliation scheme is extremely lightweight as it exploits the cross-tier communication pattern spontaneously induced by the application logic to avoid explicit replicas coordination messages.",2008,0, 3899,Historical Value-Based Approach for Cost-Cognizant Test Case Prioritization to Improve the Effectiveness of Regression Testing,"Regression testing has been used to support software testing activities and assure the acquirement of appropriate quality through several versions of a software program. Regression testing, however, is too expensive because it requires many test case executions, and the number of test cases increases sharply as the software evolves. In this paper, we propose the Historical Value-Based Approach, which is based on the use of historical information, to estimate the current cost and fault severity for cost-cognizant test case prioritization. We also conducted a controlled experiment to validate the proposed approach, the results of which proved the proposed approach's usefulness. As a result of the proposed approach, software testers who perform regression testing are able to prioritize their test cases so that their effectiveness can be improved in terms of average percentage of fault detected per cost.",2008,0, 3900,Which Spot Should I Test for Effective Embedded Software Testing?,"Today, the embedded industry is changing fast - systems have become larger, more complex, and more integrated. Embedded system consists of heterogeneous layers such as hardware, HAL, device driver, OS kernel, and application. These heterogeneous layers are usually customized for special purpose hardware. Therefore, various hardware and software components of embedded system are mostly integrated together under unstable status. That is, there are more possibilities of faults in all layers unlike package software. In this paper, we propose the embedded software interface as two essential parts: interface function that represents the statement of communication between heterogeneous layers, and interface variable that represents software and/or hardware variable which are defined in different layer from integrated software and used to expected output for decision of fault.
Also, we statically investigate various views of the embedded software interface and demonstrate that the proposed interface should be a new criterion for effective embedded software testing.",2008,0, 3901,A New Method for Measuring Single Event Effect Susceptibility of L1 Cache Unit,"Cache SEE susceptibility measurements are required for predicting a processor's soft error rate in space missions. Previous dynamic or static real beam test based approaches are only tenable for processors which have optional cache operating modes such as disable (bypass)/enable, frozen, etc. As the L1 cache is indispensable to the processor's total performance, some newly introduced processors no longer have such cache management schemes, thus making the existing methods inapplicable. We propose a novel way to determine cache SEE susceptibility for any kind of processor, whether a cache bypass mode is supported or not, by combining heavy ion dynamic testing with software implemented fault injection approaches.",2008,0, 3902,Early Reliability Prediction: An Approach to Software Reliability Assessment in Open Software Adoption Stage,Conventional software reliability models are not adequate to assess the reliability of a software system in which OSS (Open Source Software) is adopted as a new feature add-on because OSS can be modified while the inside of COTS (Commercial Off-The-Shelf) products cannot be changed. This paper presents an approach to software reliability assessment of an OSS-adopted software system in the early stage. We identified the software factors that affect the reliability of a software system when a large software system adopts OSS and assess software reliability using those factors. They are code modularity and code maintainability in software modules related to system requirements. We used them to calculate the initial fault rate with a weight index (correlated value between requirement and module) which represents the degree of code modification. We apply the proposed initial fault rate to a reliability model to assess software reliability in the early stage of a software life cycle. Early software reliability assessment in OSS adoption helps to make effective development and testing strategies for improving the reliability of the whole system.,2008,0, 3903,Artificial Neural Network Classification of Power Quality Disturbances Using Time-Frequency Plane in Industries,"The rapid growth of industrialization has increased the use of power systems tremendously in process and software industries. Therefore, there is a thrust on industries to improve power quality with power monitoring and protection systems by classifying voltage and current disturbances. This paper presents a new approach for classifying the events that represent or lead to the degradation of power quality. A neural network with a feed-forward structure is chosen as the classifier with a modified Fisher's Discriminant Ratio Kernel for feature extraction. The potential of developing a more powerful fuzzy classification method based on this algorithm is also discussed. This novel combination of methods shows promise for further development of a fully automated power quality monitoring system.",2008,0, 3904,New condition monitoring techniques for reliably drive systems operation,"The dominant application of electronics today is to process information. The computer industry is the biggest user of semiconductor devices and consumer electronics.
Due to the successful development of semiconductors, electronic systems and controls have gained wide acceptance in power and computing technology, and due to the continuous use of drive systems (rotating machines, controlling thyristors and associated electronic components) in industry and in power stations, and the need to keep such systems running reliably, the detection of defects and anomalies is of increasing importance; on-line monitoring to detect any fault in these systems is now a strong possibility, as is periodic monitoring of drive systems in strategic situations. The principal aim of the paper is to use both software and hardware to develop a fault diagnosis knowledge-based system, which will analyze and manipulate the output obtained from sensors, using a microcomputer to acquire the plant condition data and subsequently interpret them; the data collected can be analyzed using suitable computer programs, and any trends can be identified and compared with the knowledge base. The probability of a certain condition can then be diagnosed and compared, providing the necessary information on which subsequent decisions can be based and any necessary alarms can be provided to the operator. To achieve this objective, the simulation and experimental technique considered is to use sensors placed in the wedges closing the stator slots to sense the induced voltage. The induced voltage for each fault is shown to have a unique voltage pattern; thus, fault identification through voltage pattern recognition formed the basic rules for the development of the knowledge base. The predicted results are verified by measurements on a model system in which known faults can be established.",2008,0, 3905,A wavelet transform technique for de-noising partial discharge signals,"Partial discharge (PD) diagnosis is essential to identify the nature of insulation defects causing discharge. The problem of PD signal recognition has been approached in a number of ways. Most of the approaches are based on laboratory experiments or on signals acquired during off-line tests of industrial apparatus. On-line testing is vastly preferable as the equipment can remain in service, and the operators can monitor the insulation condition continuously. Interferences from noise sources have been a persistent problem, which have increased with the advent of solid-state power switching electronics. Use of the wavelet transform technique offers many advantages over conventional digital filters and is ideally suited to processing non-stationary signals (transients) often encountered in high voltage testing and measurements. In this paper, an empirical wavelet-based method is proposed to recover PD pulses mixed with excessive noise/interference. A critical assessment of the proposed method is carried out by processing simulated PD signals along with noise signals using MATLAB software.",2008,0, 3906,Identifying learners robust to low quality data,"Real world datasets commonly contain noise that is distributed in both the independent and dependent variables. Noise, which typically consists of erroneous variable values, has been shown to significantly affect the classification performance of learners. In this study, we identify learners with robust performance in the presence of low quality (noisy) measurement data. Noise was injected into five class imbalanced software engineering measurement datasets, initially relatively free of noise.
The experimental factors considered included the learner used, the level of injected noise, the dataset used (each with unique properties), and the percentage of minority instances containing noise. No other related studies were found that have identified learners that are robust in the presence of low quality measurement data. Based on the results of this study, we recommend using the random forest learner for building classification models from noisy data.",2008,0, 3907,Code and carrier divergence technique to detect ionosphere anomalies,"Single and dual frequency smoothing techniques implemented to detect ionosphere anomalies for the GBAS (ground based augmentation system) are discussed in this paper. An ionosphere storm is considered as a dominant threat for using differential navigation satellite systems in landing applications. To detect these occurrences, a number of algorithms have been developed. Some of them are intended to meet the integrity requirements of CAT III landing and are based on multi-frequency GPS techniques. Depending on the combination of frequencies used during code and carrier phase measurements, the smoothed pseudorange achieves a different level of accuracy. For this reason, the most popular algorithms, e.g. the divergence-free and ionosphere-free smoothing algorithms, are analyzed and compared. The work carried out at the Institute of Radioelectronics connected with the GBAS application is also presented. The investigations were conducted using actual GPS signals and signals from a GNSS simulator. Self-prepared software was used to analyze the results.",2008,0, 3908,Case studies in arc flash reduction to improve safety and productivity,"With the advent of new power system analysis software, a more detailed arc flash analysis can be performed under various load conditions. These new ""tools"" can also evaluate equipment damage, design systems with lower arc flash, and predict electrical fire locations based on high arc flash levels. This paper demonstrates how arc flash levels change with available utility MVA (mega volt amperes), additions in connected load, and selection of system components. This paper summarizes a detailed analysis of several power systems to illustrate possible misuses of 2004 NFPA 70E Risk Category Classification Tables while pointing toward future improvements of the Standards. In particular, findings indicate upstream protection may not open quickly enough for a fault on the secondary of a transformer or at the far end of a long cable due to the increase in system impedance. Several examples of how these problem areas can be dealt with are described in detail.",2008,0, 3909,Fuzzy fault detection and diagnosis under severely noisy conditions using feature-based approaches,"This paper introduces an approach to a fault detection and diagnosis scheme which uses fuzzy reference models to describe the symptoms of both faulty and fault-free plant operation. Recently, some approaches have been combined with fuzzy logic to enhance its performance in particular applications such as fault detection and diagnosis. The reference models are generated from training data which are produced by computer simulation of a typical plant. A fuzzy matching scheme compares the parameters of a fuzzy partial model, identified using on-line data collected from the real plant, with the parameters of the reference models.
The reference models are also compared to each other to take account of the ambiguity which arises at some operating points when the symptoms of correct and faulty operations are similar. Independent components analysis (ICA) is used to extract the exact data from variables under severe noisy conditions. A fuzzy self organizing feature map is applied to the data obtained from ICA for obtaining more accurate and precise features representing different conditions of the system. The results are then applied to the model-based fuzzy procedure for diagnosis goals. Results are presented which demonstrate the applicability of the scheme.",2008,0, 3910,Intensity statistics-based HSI diffusion for color photo denoising,"This paper presents a new image denoising model for real color photo noise removal. Our model is implemented in the hue, saturation and intensity (HSI) space. The hue and saturation denoising are combined and implemented as a complex total variation (TV) diffusion. The intensity denoising is based on a diffusion flow to minimize a new energy functional, which is constructed with intensity component statistics. Besides the common gradient-based edge stopping functions for anisotropic diffusion, specifically for color photo denoising, we incorporate an intensity-based brightness adjusting term in the new energy, which corresponds to the noise disturbance with respect to photo intensity. In addition, we use the gradient vector flow (GVF) as the new diffusion directions for more accurate and robust denoising. Compared with previous diffusion flows only based on regular image gradients, this model provides more accurate image structure and intensity noise characterization for better denoising. Comprehensive quantitative and qualitative experiments on color photos demonstrate the improved performance of the proposed model when compared with 14 recognized approaches and 2 commercial software.",2008,0, 3911,Adaptive control strategies for a class of anaerobic depollution bioprocesses,"This paper presents the design and the analysis of some nonlinear adaptive control strategies for a class of anaerobic depollution processes that are carried out in continuous stirred tank bioreactors. The controller design is based on the input-output linearization technique. The adaptive control structure is based on the nonlinear model of the process and is combined with a state observer and a parameter estimator which play the role of the software sensors for the on-line estimation of biological states and parameter variables of interest of the bioprocess. The resulted control methods are applied in depollution control problem in the case of the anaerobic digestion bioprocess for which dynamical kinetics are strongly nonlinear and not exactly known, and not all the state variables are measurable. The effectiveness and performance of both estimation and control algorithms are illustrated by simulation results.",2008,0, 3912,A virtual instrumentation-based on-line determination of a single/two phase induction motor drive characteristics at coarse start-up,"The single/two phase induction motors (S/TPIM) are used in various low-power, low-cost applications. A representative example is the washing machine drum's main drive. Both manufacturers and designers have augmented their preoccupations and researches to optimize the efficiency of this drive due to the large and increasing number of devices in use and also stimulated by the market requests. 
At the same time, their efforts have been directed to maintain the drive's main advantage, i.e. its low cost. A simple strategy to optimize the S/TPIM drive's efficiency is to replace the fixed capacitance of the starting capacitor of the motor with an electronically switched capacitor. In this approach, an accurate dynamic characteristic of the actuator, i.e. the electric motor and washing machine's drum, needs to be determined for the implementation of the starting capacitance's command law. In this paper an estimation method of the drive's mechanical strain at coarse start-up is presented. The dynamic model of the drive is based on the direct- and quadrature-axis theory; this approach provides accurate estimates of the drive's response. The on-line measurements of the input voltages and currents are performed with a fast data acquisition board and a virtual instrumentation software environment providing noise-free measurements. The main advantage of this method consists in its simple implementation on the washing machine and its ability to provide torque predictions both in stationary and in dynamic regimes in real operating conditions.",2008,0, 3913,Behavioral Dependency Measurement for Change-Proneness Prediction in UML 2.0 Design Models,"During the development and maintenance of object-oriented (OO) software, information on the classes which are more prone to be changed is very useful. Developers and maintainers can make software more flexible by modifying the part of the classes which are sensitive to changes. Traditionally, most change-proneness prediction has been studied based on source code. However, change-proneness prediction in the early phase of software development can provide an easier way of developing stable software by modifying the current design or choosing alternative designs before implementation. To address this need, we present a systematic method for calculating the behavioral dependency measure (BDM) which helps to predict change-proneness in UML 2.0 models. The proposed measure has been evaluated on a multi-version, medium-size open-source project, namely JFreeChart. The obtained results show that the BDM is a useful indicator and can be complementary to existing OO metrics for change-proneness prediction.",2008,0, 3914,A Systematic Approach for Integrating Fault Trees into System Statecharts,"As software systems are encompassing a wide range of fields and applications, software reliability becomes a crucial step. The need for safety analysis and test cases that have a high probability of uncovering plausible faults is a necessity in proving software quality. System models that represent only the operational behavior of a system are incomplete sources for deriving test cases and performing safety analysis before the implementation process. Therefore, a system model that encompasses faults is required. This paper presents a technique that formalizes a safety model through the incorporation of faults with system specifications. The technique focuses on introducing semantic faults through the integration of fault trees with system specifications or statecharts.
The method uses a set of systematic transformation rules that try to maintain the semantics of both fault tree and statechart representations during the transformation of fault trees into statechart notations.",2008,0, 3915,Teaching Team Software Process in Graduate Courses to Increase Productivity and Improve Software Quality,"This paper presents a case study that describes TSPi teaching (introduction to the team software process) to 4th year students, grouped by teams, at the Computer Science School, Polytechnic University of Madrid (UPM). The achievements of the teams, due to training and the use of TSPi, were analyzed and discussed. This paper briefly discusses the approach to the teaching and some of the issues that were identified. The teams collected data on the projects developed. They reviewed the schedule and quality status weekly. The metrics selected to analyze the impact on the students were: size, effort, productivity, costs and defect density. These metrics were chosen to analyze the teams' performance evolution through project development. This paper also presents a study related to the evolution of estimation, quality and productivity improvements these teams obtained. This study will prove that training in TSPi has a positive impact on getting better estimations, reducing costs, improving productivity, and decreasing defect density. Finally, the teams' performance is analyzed.",2008,0, 3916,Error Modeling in Dependable Component-Based Systems,"Component-based development (CBD) of software, with its successes in enterprise computing, has the promise of being a good development model due to its cost effectiveness and potential for achieving high quality of components by virtue of reuse. However, for systems with dependability concerns, such as real-time systems, a major challenge in using CBD consists of predicting dependability attributes, or providing dependability assertions, based on the individual component properties and architectural aspects. In this paper, we propose a framework which aims to address this challenge. Specifically, we present a revised error classification together with error propagation aspects, and briefly sketch how to compose error models within the context of component-based systems (CBS). The ultimate goal is to perform the analysis on a given CBS, in order to find bottlenecks in achieving dependability requirements and to provide guidelines to the designer on the usage of appropriate error detection and fault tolerance mechanisms.",2008,0, 3917,New DAQB and associated virtual library included in LabVIEW for environmental parameters monitoring,"In this work a data acquisition board (DAQB) with data transfer by serial port and the associated virtual library included into LabVIEW software are presented. The DAQB, developed around a National Semiconductor LM 12458 device, has the capability to perform tasks that diminish the host processor's work and is capable of communicating with the host computer by using a set of associated drivers. A highly integrated device circuit that contains most components of the board, ease of data handling, good metrological performance and a very low cost are the benefits of the proposed system. Using the LabVIEW environment, we have realized a virtual instrument able to get, from the prototype data acquisition board for environmental monitoring parameters, the information about air pollution factors like CO, H2S, SO2, NO, NO2, etc.
In order to get effective information about those factors and the monitoring points, this intelligent measurement system is composed of a portable computer and a gas detector. This system can be used to map the information about the dispersion of air pollution factors in order to answer the needs of residential and industrial area expansion.",2008,0, 3918,Research on double-end cooperative congestion control mechanism of random delay network,"Propagation delay is the main factor affecting network performance. Random propagation delay has an adverse impact on the stability of the feedback control mechanism. We argue that some existing schemes which try to control the node-end-systems queue don't work well when random propagation delay acts on the models. To find a solution to this problem, a double-end cooperative congestion control mechanism is proposed and analyzed. We have studied the performance of the control mechanism via simulations on OPNET software. Simulations show that the mechanism can improve network performance under random propagation delay. In addition, in the node-end-systems, the probability of cell discard is lower.",2008,0, 3919,Application of detection method based on energy function against faults in motion system,"There are many possibilities of faults in motion systems which are related to driver equipment. Based on the torque monitor signal supplied by the system driver, a new fault diagnosis method which profits from a nonlinear energy operator is brought forward. The principles of using it to perform fault diagnosis are presented, the key points about its application are also analyzed, and the detailed flow chart of diagnosis is given. After software simulation using Matlab and diagnosis experiments on an X-Y-Z motion platform under a numerical control algorithm, the validity of this method is proved. Aiming at different kinds of energy functions and the choices of their parameters, after particular analysis, some guidelines about fault diagnosis are concluded.",2008,0, 3920,A software tool to relate technical performance to user experience in a mobile context,"Users in today's mobile ICT environment are confronted with more and more innovations and an ever increasing technical quality, which makes them more demanding and harder to please. It is often hard to measure and to predict the user experience during service consumption. This is nevertheless a very important dimension that should be taken into account while developing applications or frameworks. In this paper we demonstrate a software tool that is integrated in a wireless living lab environment in order to validate and quantify actual user experience. The methodology to assess the user experience combines both technological and social assets. User experience of a Wineguide application on a PDA is related to signal strength, monitored during usage of the applications. Higher signal strengths correspond with a better experience (e.g. speed). Finally, differences in the experience among users are discussed.",2008,0, 3921,SDTV Quality Assessment Using Energy Distribution of DCT Coefficients,"The VQM (Video Quality Measurement) scheme is a methodology that measures the difference of quality between the distorted video signal and the reference video signal. In this paper, we propose a novel video quality measurement method that extracts features in the DCT (Discrete Cosine Transform) domain of H.263 SDTV.
The main idea of the proposed method is to utilize the texture pattern and edge-oriented information that is generated in the DCT domain. For this purpose, the energy distribution of the reordered DCT coefficients is considered to obtain unique information about each video file. Then, we measure the difference of the probability distribution of context information between the original video and the distorted one. The simulation results show that the proposed algorithm can correctly represent the video quality and give a high correlation with the video DMOS.",2008,0, 3922,A novel high-capability control-flow checking technique for RISC architectures,"Nowadays more and more small transistors make microprocessors more susceptible to transient faults, and then induce control-flow errors. Software-based signature monitoring is widely used for control-flow error detection. When previous signature monitoring techniques are applied to RISC architectures, there exist some branch-errors that they cannot detect. This paper proposes a novel software-based signature monitoring technique: CFC-End (Control-Flow Checking in the End). One property of CFC-End is that it uses two global registers for storing the run-time signature alternately. Another property of CFC-End is that it compares the run-time signature with the assigned signature at the end of every basic block. CFC-End is better than previous techniques in the sense that it can detect any single branch-error when applied to RISC architectures. CFC-End has similar performance overhead in comparison with the RCF (Region based Control-Flow checking) technique, which has the highest capability of branch-error detection among previous techniques.",2008,0, 3923,Chaotic Routing: A Set-based Broadcasting Routing Framework for Wireless Sensor Networks,"Data communication in wireless sensor networks (WSNs) exhibits distinctive characteristics. Routing in WSNs still relies on simple variations of traditional distance vector or link state based protocols, thus suffering low throughput and less robustness. Drawing intuitions from Brownian motion, where localized momentum exchanges enable global energy diffusion, we propose an innovative routing protocol, chaotic routing (CR), which achieves efficient information diffusion with seemingly chaotic local information exchanges. Leveraging emerging networking concepts such as potential based routing, opportunistic routing and network coding, CR improves throughput via accurate routing cost estimation, opportunistic data forwarding and localized node scheduling optimizing information propagation in mesh structures. Through extensive simulations, we prove that CR outperforms, in terms of throughput, the best deterministic routing scheme (i.e. best path routing) by a factor of around 300% and beats the best opportunistic routing scheme (i.e. MORE) by a factor of around 200%. CR shows stable performance over a wide range of network densities, link qualities and batch sizes.",2008,0, 3924,Online Measurement of the Capacity of Multi-Tier Websites Using Hardware Performance Counters,"Understanding server capacity is crucial for system capacity planning, configuration, and QoS-aware resource management. Conventional stress testing approaches measure the server capacity in terms of application-level performance metrics like response time and throughput. They are limited in measurement accuracy and timeliness. In a multi-tier website, the resource bottleneck often shifts between tiers as the client access pattern changes.
This makes the capacity measurement even more challenging. This paper presents a measurement approach based on hardware performance counter metrics. The approach uses machine learning techniques to infer application-level performance at each tier. A coordinated predictor is induced over individual tier models to estimate system-wide performance and identify the bottleneck when the system becomes overloaded. Experimental results demonstrate that this approach is able to achieve an overload prediction accuracy of higher than 90% for a priori known input traffic patterns and over 85% accuracy even for traffic causing frequent bottleneck shifting. It costs less than 0.5% runtime overhead for data collection and no more than 50 ms for each on-line decision.",2008,0, 3925,An alternative fault location algorithm based on Wavelet Transforms for three-terminal lines,"This paper presents the study and development of a complete fault location scheme for three-terminal transmission lines using wavelet transforms (WT). The methodology is based on the low and high frequency components of the transient signals originated from a fault situation registered in the terminals of a system. By processing these signals and using the WT, it is possible to determine the time of traveling waves of voltages and/or currents from the fault point to the terminals, as well as to estimate the fundamental frequency components. As a consequence, both faulted leg and fault location can be estimated with reference to one of the system terminals. A new approach is presented to develop a reliable and accurate fault location scheme combining the best the methods can offer. The main idea is to have a decision routine in order to select which method should be used in each situation presented to the algorithm. The algorithm was tested for different fault conditions by simulations using ATP (alternative transients program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability of the proposed method.",2008,0, 3926,Transmission systems power quality monitors allocation,"This work develops and tests a branch and bound algorithm for solving optimum allocation of power quality monitors in a transmission power system. The optimization problem is solved by using 0-1 integer programming techniques and depends highly on network topology. The algorithm, which is implemented in Matlab software, minimizes the total cost of the monitoring system and found the optimum number and locations for monitors on the network studied, under a set of given network observability constraints. Case studies are presented for IEEE test networks and for CEMIG actual transmission power systems. Current and voltage values are estimated by using monitored variables to validate the obtained results.",2008,0, 3927,Performance evaluation of a connection-oriented Internet service based on a queueing model with finite capacity,"The operating mechanism of a connection-oriented Internet service is analyzed. Considering the finite buffer in connection-oriented Internet service, we establish a Geom/G/1/K queueing model with setup-close delay-close down for user-initiated session. Using the approach of an embedded Markovian chain and supplementary variables, we derive the probability distribution for the steady queue length and the probability generating function for the waiting time. 
Correspondingly, we study the performance measures of quality of service (QoS) in terms of system throughput, system response time, and system blocking probability. Based on the simulation results, we discuss the influence of the upper limit of close delay period and the capacity of the queueing model on the performance measures, which have potential application in the design, resource assigning, and optimal setting for the next generation Internet.",2008,0, 3928,Assuring information quality in Web service composition,"As organizations have begun increasingly to communicate and interact with consumers via the Web, so the information quality (IQ) of their offerings has become a central issue since it ensures service usability and utility for each visitor and, in addition, improves server utilization. In this article, we present an IQ-enable Web service architecture, IQEWS, by introducing a IQ broker module between service clients and providers (servers). The functions of the IQ broker module include assessing IQ about servers, making selection decisions for clients, and negotiating with servers to get IQ agreements. We study an evaluation scheme aimed at measuring the information quality of Web services used by IQ brokers acting as the front-end of servers. This methodology is composed of two main components, an evaluation scheme to analyze the information quality of Web services and a measurement algorithm to generate the linguistic recommendations.",2008,0, 3929,"An Empirical Study on the Relationship Between Software Design Quality, Development Effort and Governance in Open Source Projects","The relationship among software design quality, development effort, and governance practices is a traditional research problem. However, the extent to which consolidated results on this relationship remain valid for open source (OS) projects is an open research problem. An emerging body of literature contrasts the view of open source as an alternative to proprietary software and explains that there exists a continuum between closed and open source projects. This paper hypothesizes that as projects approach the OS end of the continuum, governance becomes less formal. In turn a less formal governance is hypothesized to require a higher-quality code as a means to facilitate coordination among developers by making the structure of code explicit and facilitate quality by removing the pressure of deadlines from contributors. However, a less formal governance is also hypothesized to increase development effort due to a more cumbersome coordination overhead. The verification of research hypotheses is based on empirical data from a sample of 75 major OS projects. Empirical evidence supports our hypotheses and suggests that software quality, mainly measured as coupling and inheritance, does not increase development effort, but represents an important managerial variable to implement the more open governance approach that characterizes OS projects which, in turn, increases development effort.",2008,0, 3930,Assessment driven process modeling for software process improvement,"Software process improvement (SPI) is used to develop processes to meet more effectively the software organizationpsilas business goals. Improvement opportunities can be exposed by conducting an assessment. A disciplined process assessment evaluates organizationpsilas processes against a process assessment model, which usually includes good software practices as indicators. 
Many benefits of SPI initiatives have been reported but some improvement efforts have failed, too. Our aim is to increase the probability of success by integrating software process modeling with assessments. A combined approach is known to provide more accurate process ratings and higher quality process models. In this study we have revised the approach by extending the scope of modeling further. Assessment Driven Process Modeling for SPI uses assessment evidence to create a descriptive process model of the assessed processes. The descriptive model is revised into a prescriptive process model, which illustrates an organization's processes after the improvements. The prescriptive model is created using a process library that is based on the indicators of the assessment model. Modeling during assessment is driven by both process performance and process capability indicators.",2008,0, 3931,Cross-Layer Transmission Scheme with QoS Considerations for Wireless Mesh Networks,"IEEE 802.11 wireless networks utilize a hard handoff scheme when a station is travelling from one area of coverage to another one within a transmission duration. In IEEE 802.11s wireless mesh networks, during the handoff procedure the transmitted data will first be buffered in the source MAP and not relayed to the target MAP until the handoff procedure is finished. Besides, there are multiple hops in the path between the source station and the destination station. In each pair of neighboring MAPs, contention is needed to transmit data. The latency for successfully transmitting data is seriously lengthened so that the deadlines of data frames are missed with high probabilities. In this paper, we propose a cross-layer transmission (CLT) scheme with QoS considerations for IEEE 802.11 mesh wireless networks. By utilizing CLT, the ratios of missing deadlines will be significantly improved to conform to the strict time requirements of real-time multimedia applications. We develop a simulation model to investigate the performance of CLT. The capability of the proposed scheme is evaluated by a series of experiments, for which we have encouraging results.",2008,0, 3932,A fault tolerant approach in cluster computing system,"A long-term trend in high performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Hence, fault tolerance becomes a key property for parallel applications running on parallel computing systems. The message passing interface (MPI) is currently the programming paradigm and communication library most commonly used on parallel computing platforms. MPI applications may be stopped at any time during their execution due to an unpredictable failure. In order to avoid complete restarts of an MPI application because of only one failure, a fault tolerant MPI implementation is essential. In this paper, we propose a fault tolerant approach in a cluster computing system. Our approach is based on reassignment of tasks to the remaining system, and message logging is used for message losses. This system consists of two main parts, failure diagnosis and failure recovery. Failure diagnosis is the detection of a failure and failure recovery is the action needed to take over the workload of a failed component. This fault tolerant approach is implemented as an extension of the message passing interface.",2008,0, 3933,MUSIC: Mutation-based SQL Injection Vulnerability Checking,"SQL injection is one of the most prominent vulnerabilities for web-based applications.
Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV in application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for the applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.",2008,0, 3934,Does Adaptive Random Testing Deliver a Higher Confidence than Random Testing?,"Random testing (RT) is a fundamental software testing technique. Motivated by the rationale that neighbouring test cases tend to cause similar execution behaviours, adaptive random testing (ART) was proposed as an enhancement of RT, which enforces random test cases evenly spread over the input domain. ART has always been compared with RT from the perspective of the failure-detection capability. Previous studies have shown that ART can use fewer test cases to detect the first software failure than RT. In this paper, we aim to compare ART and RT from the perspective of program-based coverage. Our experimental results show that given the same number of test cases, ART normally has a higher percentage of coverage than RT. In conclusion, ART outperforms RT not only in terms of the failure-detection capability, but also in terms of the thoroughness of program-based coverage. Therefore, ART delivers a higher confidence of the software under test than RT even when no failure has been revealed.",2008,0, 3935,How to Measure Quality of Software Developed by Subcontractors (Short Paper),"In Japan, where the multiple subcontracting is very common in software development, it is difficult to measure the quality of software developed by subcontractors. Even if we request them for process improvement based on CMMI including measurement and analysis, it will not work immediately. Using ""the unsuccessful ratio in the first time testing pass"" as measure to assess software quality, we have had good results. We can get these measures with a little effort of both outsourcer and subcontractor. With this measure, we could identify the defect-prone programs and conduct acceptance testing for these programs intensively. Thus, we could deliver the system on schedule. 
The following sections discuss why we devised this measure and its trial results.",2008,0, 3936,Towards a Method for Evaluating the Precision of Software Measures (Short Paper),"Software measurement currently plays a crucial role in software engineering given that the evaluation of software quality depends on the values of the measurements carried out. One important quality attribute is measurement precision. However, this attribute is frequently used indistinctly and confused with accuracy in software measurement. In this paper, we clarify the meaning of precision and propose a method for assessing the precision of software measures in accordance with ISO 5725. This method was used to assess a functional size measurement procedure. A pilot study was designed for the purpose of revealing any deficiencies in the design of our study.",2008,0, 3937,A Method for Measuring the Size of a Component-Based System Specification,"System-level size measures are particularly important in software project management, as tasks such as planning and estimating the cost and schedule of software development can be performed more effectively when a size estimate of the entire system is available. However, due to the black-box nature of components, traditional software measures are not adequate as system-level measures for component-based systems (CBS). Thus, if a system-level size is required, alternate measures should be used for sizing CBS. In this paper, we present a function-point-like approach, named component point, to measure the system-level size of a CBS using the CBS specification written in UML. The component point approach integrates two software measures and extends an existing size measure from the more mature object-oriented paradigm to the related and relatively young CBS discipline. We also suggest a customized set of general system characteristics so as to make our measure more relevant to CBS.",2008,0, 3938,Path and Context Sensitive Inter-procedural Memory Leak Detection,This paper presents a practical path and context sensitive inter-procedural analysis method for detecting memory leaks in C programs. A novel memory object model and function summary system are used. Preliminary experiments show that the method is effective. Several memory leaks have been found in real programs including which and wget.,2008,0, 3939,Achievements and Challenges in Cocomo-Based Software Resource Estimation,"This article summarizes major achievements and challenges of software resource estimation over the last 40 years, emphasizing the Cocomo suite of models. Critical issues that have enabled major achievements include the development of good model forms, criteria for evaluating models, methods for integrating expert judgment and statistical data analysis, and processes for developing new models that cover new software development approaches. The article also projects future trends in software development and evolution processes, along with their implications and challenges for future software resource estimation capabilities.",2008,0, 3940,An Expectation Trust Benefit Driven Algorithm for Resource Scheduling in Grid Computing,"In traditional grid resource allocation solutions, jobs requiring high quality may not be executed when they are allocated to nodes offering low quality, because of the separation of the trust mechanism and the job scheduling mechanism. Therefore, this paper combines the two mechanisms and proposes a grid job scheduling algorithm driven by the expectation trust benefit.
The value of the expectation trust benefit function is the predicted benefit of a given job executed in the grid. Simulation experiments show that the expectation trust benefit driven algorithm is better than the conventional min-min algorithm in benefit, and it also performs well in trust benefit.",2008,0, 3941,On the Trend of Remaining Software Defect Estimation,"Software defects play a key role in software reliability, and the number of remaining defects is one of the most important software reliability indexes. Observing the trend of the number of remaining defects during the testing process can provide very useful information on the software reliability. However, the number of remaining defects is not known and has to be estimated. Therefore, it is important to study the trend of the remaining software defect estimation (RSDE). In this paper, the concept of RSDE curves is proposed. An RSDE curve describes the dynamic behavior of RSDE as software testing proceeds. Generally, RSDE changes over time and displays two typical patterns: 1) single mode and 2) multiple modes. This behavior is due to the different characteristics of the testing process, i.e., testing under a single testing profile or multiple testing profiles with various change points. By studying the trend of the estimated number of remaining software defects, RSDE curves can provide further insights into the software testing process. In particular, in this paper, the Goel-Okumoto model is used to estimate this number on actual software failure data, and some properties of RSDE are derived. In addition, we discuss some theoretical and application issues of the RSDE curves. The concept of the proposed RSDE curves is independent of the selected model. The methods and development discussed in this paper can be applied to any valid estimation model to develop and study its corresponding RSDE curve. Finally, we discuss several possible areas for future research.",2008,0, 3942,A Novel Embedded Fingerprints Authentication System Based on Singularity Points,"In this paper a novel embedded fingerprints authentication system based on core and delta singularity points detection is proposed. Typical fingerprint recognition systems use core and delta singularity points for classification tasks. On the other hand, the available optical and photoelectric sensors give high quality fingerprint images with well defined core and delta points, if they are present. In the proposed system, fingerprint matching is based on singularity point position, orientation, and relative distance detection. As a result, fingerprint matching involves the comparison of a few features, leading to a very fast system with recognition rates comparable to the standard minutiae based recognition systems. The whole system has been prototyped on the Celoxica RC203E board, equipped with a Xilinx VirtexII FPGA. The prototype has been tested with the FVC databases. Hardware system performance shows high recognition rates and low execution time: FAR = 1.2% and FRR = 2.6%, while the processing of each fingerprint requires 34.82 ms. Considering both the FAR and FRR indexes and the high speed, the proposed system could be used for identification tasks in large fingerprint databases.
To the best of our knowledge, this is the first authentication system based on singularity points.",2008,0, 3943,Towards a Multidimensional Visualization of Environmental Soft Sensors Predictions,"Soft sensors have been successfully applied to simulate physical and chemical measurements in specific locations of a monument. They allowed monitoring the quality conditions of the monument surface for long periods of time. This is a non-invasive process that provides a huge set of multidimensional data. These data have been analyzed by Cultural Heritage experts to find critical physical or chemical conditions which could generate degradation processes. Here we propose several multidimensional visualization techniques to represent the predictions of environmental parameters, given by several soft sensors, in a comprehensive and compact way. Visualization tools using both shape variation (glyph) and color are developed to realize a homogeneous communication paradigm. Moreover, each tool uses a 3D navigable model of the monument as visual support.",2008,0, 3944,Fast motion estimation for H.264/AVC using image edge features,"The key to high performance in video coding lies in efficiently reducing temporal redundancies. For this purpose, the H.264/AVC coding standard has adopted variable block size motion estimation on multiple reference frames to improve the coding gain. However, the computational complexity of motion estimation is also increased in proportion to the product of the reference frame number and the inter mode number. Mathematical analysis reveals that the prediction errors mainly depend on the image edge gradient amplitude and the quantization parameter. Consequently, this paper proposes an image-content-based early termination algorithm, which outperforms the method adopted by the JVT reference software, especially at high and moderate bit rates. In light of rate-distortion theory, this paper also relates the homogeneity of the image to the quantization parameter. For a homogeneous block, the search computation for futile reference frames and inter modes can be efficiently discarded. The computation saving thus increases with the quantization parameter. These content based fast algorithms were integrated with the Unsymmetrical-cross Multihexagon-grid Search (UMHexagonS) algorithm. Compared to the original UMHexagonS fast matching algorithm, 26.14-54.97% of the search time can be saved with an average of 0.0369 dB coding quality degradation.",2008,0, 3945,Automatic pixel-shift detection and restoration in videos,"A common form of serious defect in video is pixel-shift. It is caused by consecutive pixel loss introduced by video transmission systems. Pixel-shift means that a large number of pixels shift one by one due to a small amount of image data loss. The damaged region in the affected frame is usually large, and thus the visual effect is often very disturbing. So far there is no method of automatically treating pixel-shift. This paper focuses on the difficult issue of locating pixel-shift in videos. We propose an original algorithm for automatically detecting and restoring pixel-shift. Detection of pixel-shift frames relies on spatio-temporal information and motion estimation. Accurate measurement of the pixel shift is best achieved through the analysis of temporal-frequency information. Restoration is accomplished by reversing the pixel shift and spatio-temporal interpolation.
Experimental results show that our completely automatic algorithm can achieve very good performance.",2008,0, 3946,Automatic detection of pronunciation errors in CAPT systems based on confidence measure,"Computer aided pronunciation training (CAPT) systems aim at listening to a learner's utterance, judging the overall pronunciation quality and pointing out any pronunciation errors in the utterance. However, the performance of current error detection techniques cannot satisfy users' expectations. In this paper, we introduce confidence measures and anti-models into the error detection algorithm. The basic theory and the roles of confidence measures and anti-models are discussed. We then propose a new algorithm, and evaluate it on an English learning system. Experiments are conducted based on the TIMIT database and an adaptive database, which involves 40 Chinese undergraduates. The results show that confidence measures can be utilized to effectively improve the performance of the CAPT system.",2008,0, 3947,Using the TSPi Defined Process and Improving the Project Management Process,"Team software process is an integrated framework that guides development teams in producing high-quality software-intensive systems. This paper analyzes the effects of TSPi training and the improvements achieved by 44 teams, comprising fourth-year students on the software engineering degree programme, corresponding to two academic years. This study shows the benefits of using defined models such as TSPi to improve the students' skills and knowledge. The metrics of size and effort estimation, defects, productivity and costs are collected to measure the improvements through two development cycles.",2008,0, 3948,Inaccessible Area and Effort Consumption Dynamics,"The estimation of project effort E and project duration D is based on several simple empirical rules (laws). The laws are almost independent although some dependencies must exist. An example is the existence of the so-called inaccessible area (IA). IA is the set of pairs (D, E) that cannot appear. IA is related to the changes of effort consumption dynamics (ECD) for tight project durations D. The widely used Rayleigh model RM of ECD does not imply the existence of IA or the behavior of project efforts for projects having (D, E) near their IAs. RM has further deficiencies. Let tmax be the time when the effort intensity E(t) is maximal. RM requires that the effort expended before tmax be near to 0.52E. Another drawback is that RM implies almost no effort consumption for t > 3.5tmax. It therefore wrongly implies that zero defect software could be a feasible goal, which is known to be untrue. We propose a new model M, inspired by physics, that has no such drawbacks. The model can be used to refine estimations of D and E after tmax is passed. The model implies several mutual dependencies of empirical laws of software development. M, for example, implies the behavior of effort E near the inaccessible area and implies a realistic estimation of project duration.",2008,0, 3949,A New Classifier to Deal with Incomplete Data,"Classification is a very important research topic in knowledge discovery and machine learning. Decision-tree is one of the well-known data mining methods that are used in classification problems. But sometimes the data set for classification contains vectors missing one or more of the feature values, and is called incomplete data. Generally, the existence of incomplete data will degrade the learning quality of classification models.
If incomplete data can be handled well, the classifier can be used in real-life applications. So handling incomplete data is important and necessary for building a high-quality classification model. In this paper a new decision tree is proposed to solve the incomplete data classification problem, and it has very good performance. At the same time, the new method solves two other important problems: the rule refinement problem and the importance preference problem, which ensures the outstanding advantages of the proposed approach. Significantly, this is the first classifier which can deal with all these problems at the same time.",2008,0, 3950,Privacy Preserving of Associative Classification and Heuristic Approach,"In the era of data explosion, privacy preservation has become necessary for any data mining task. Therefore, data transformation to ensure privacy preservation is needed. Meanwhile, the transformed data must have sufficient quality to be used in the intended data mining task, i.e. the impact on the data quality with regard to the data mining task must be minimized. However, the data transformation problem to preserve the data privacy while minimizing the impact has been proven to be NP-hard. Also, for classification mining, each classification approach may use a different approach to deliver knowledge. Therefore, the data quality metric for the classification task should be tailored to a specific type of classification. In this paper, we focus on maintaining the data quality in scenarios in which the transformed data will be used to build associative classification models. We propose a data quality metric for such associative classification. Also, we propose a heuristic approach to preserve the privacy and maintain the data quality. Subsequently, we validate our proposed approaches with experiments.",2008,0, 3951,A Crossover Game Routing Algorithm for Wireless Multimedia Sensor Networks,"The multi-constrained QoS-based routing problem of wireless multimedia sensor networks is an NP-hard problem. Genetic algorithms (GAs) have been used to handle these NP-hard problems in wireless networks. Because the crossover probability is a key factor in GAs' behavior and performance and affects the convergence of GAs, and because the selection of the crossover probability is very difficult, we propose a novel method - a crossover game - instead of probabilistic crossover. The crossover game in routing problems is based on the facts that each node has restricted energy and that each node tends to obtain maximal whole-network benefit at minimum cost. The players of the crossover game are individual routes. An individual performs the crossover operator if it constitutes a Nash equilibrium of the crossover game. The simulation results demonstrate that this method is effective and efficient.",2008,0, 3952,An integrated framework of the modeling of failure-detection and fault-correction processes in software reliability analysis,"Failure-detection and fault-correction are critical processes in attaining good software quality. In this paper, we propose several improvements on the conventional software reliability growth models (SRGMs) to describe the actual software development process by eliminating some unrealistic assumptions. Most of these models have focused on the failure detection process and not given equal priority to modeling the fault correction process. However, most latent software errors may remain uncorrected for a long time even after they are detected, which increases their impact.
The remaining software faults are often one of the main sources of unreliability in software. Therefore, we develop a general framework for modeling the failure detection and fault correction processes. Furthermore, we also analyze the effect of applying the delay-time non-homogeneous Poisson process (NHPP) models. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes.",2008,0, 3953,Graphical Representation as a Factor of 3D Software User Satisfaction: A Metric Based Approach,"During the last few years, an increase in the development and research activity on 3D applications, mainly motivated by the vigorous growth of the game industry, has been observed. This paper deals with assessing user satisfaction, i.e. a critical aspect of 3D software quality, by measuring technical characteristics of virtual worlds. Such metrics can be easily calculated in games and virtual environments of different themes and genres. In addition to that, the metric suite would provide an objective means of comparing 3D software. In this paper, metrics concerning the graphical representation of a virtual world are introduced and validated through a pilot experiment.",2008,0, 3954,A BBN Based Approach for Improving a Telecommunication Software Estimation Process,"This paper describes analytically a methodology for improving the estimation process of a small-medium telecommunication (TLC) company. All the steps required for the generation of estimates, such as data collection, data transformation, estimation model extraction and, finally, exploitation of the knowledge explored, are described and demonstrated in a case study involving a Greek TLC company. Based on this knowledge, certain interventions are suggested in the current process of the company under study in order to include formal estimation procedures in each development phase.",2008,0, 3955,Dynamic Multipath Allocation in Ad Hoc Networks,"Ad hoc networks are characterized by fast dynamic changes in the topology of the network. A known technique to improve QoS is to use multipath routing, where packets (voice/video/...) from a source to a destination travel in two or more maximally disjoint paths. We observe that the need to find a set of maximally disjoint paths can be relaxed by finding a set of paths S wherein only bottlenecked links are bypassed. In the proposed model we assume that only one edge along a path in S is a bottleneck and show that by selecting random paths in S the probability that bottlenecked edges get bypassed is high. We implemented this idea in the MRA system, which is a highly accurate visual ad hoc simulator currently supporting two routing protocols, AODV and MRA. We have extended the MRA protocol to use multipath routing by maintaining a set of random routing trees from which random paths can be easily selected. Random paths are allocated/released by threshold rules monitoring the session quality.
The experiments show that: (1) session QoS is significantly improved, (2) the fact that many sessions use multiple paths in parallel does not degrade overall performance, and (3) the overhead of maintaining multiple paths in the MRA algorithm is negligible.",2008,0, 3956,Development of Fault Detection System in Air Handling Unit,"Monitoring systems currently used to operate an air handling unit (AHU) optimally do not have a function that enables faults, such as plant malfunctions or performance degradation, to be detected properly, so they are unable to manage faults rapidly and operate optimally. In this paper, we have developed a classified rule-based fault detection system which can be used comprehensively in the AHU system of a building by installing the sensors of which the AHU system is composed, and which requires low cost compared to model-based fault detection systems that can be used only in a specific building or system. In order to test this algorithm, it was applied to an AHU system installed inside an environment chamber (EC), its practical effect was verified, and its applicability to the related field in the future was confirmed.",2008,0, 3957,A Scalable Method for Improving the Performance of Classifiers in Multiclass Applications by Pairwise Classifiers and GA,"In this paper, a new combinational method for improving the recognition rate of multiclass classifiers is proposed. The main idea behind this method is using pairwise classifiers to enhance the ensemble. Because of their higher accuracy, they can decrease the error rate in error-prone regions of the feature space. Firstly, a multiclass classifier is trained. Then, based on the confusion matrix and evaluation data, the class pairs that have the most errors are derived. After that, pairwise classifiers are trained and added to the ensemble of classifiers. Finally, a weighted majority vote is applied to combine the primary results. In this paper, a multilayer perceptron is used as the base classifier. Also, a GA determines the optimized weights in the final classifier. This method is evaluated on a handwritten Farsi digit dataset. Using the proposed method, the recognition rate of the simple multiclass classifier is improved from 97.83 to 98.89, which shows an adequate improvement.",2008,0, 3958,Software Dependability Evaluation Model Based on Fuzzy Theory,"Through the foundation of a universal dependability indicator system and making use of the fuzzy synthesized evaluation method, a dependability evaluation model is established to solve the problem of software dependability evaluation. The feasibility and effectiveness of this model are demonstrated by a representative example. It provides an effective method for software dependability evaluation.",2008,0, 3959,Performance models for Cluster-enabled OpenMP implementations,"A key issue for cluster-enabled OpenMP implementations based on software distributed shared memory (sDSM) systems is maintaining the consistency of the shared memory space. This forms the major source of overhead for these systems, and is driven by the detection and servicing of page faults. This paper investigates how application performance can be modelled based on the number of page faults. Two simple models are proposed, one based on the number of page faults along the critical path of the computation, and one based on the aggregated numbers of page faults. Two different sDSM systems are considered. The models are evaluated using the OpenMP NAS parallel benchmarks on an 8-node AMD-based Gigabit Ethernet cluster.
Both models gave estimates accurate to within 10% in most cases, with the critical path model showing slightly better accuracy; accuracy is lost if the underlying page faults cannot be overlapped, or if the application makes extensive use of the OpenMP flush directive.",2008,0, 3960,"Algorithm for PM2.5 Mapping over Penang Island, Malaysia, Using SPOT Satellite Data","Air pollution is an important issue being monitored and regulated in industrial and developing cities. In this study, we explored the relationship between particulate matter of size less than 2.5 microns (PM 2.5) and data derived from SPOT imagery using a regression technique. The aim of this study was to evaluate high spatial resolution satellite data for air quality mapping by using the FLAASH software. The corresponding PM 2.5 data were measured simultaneously with the acquired satellite scene and their locations were determined using a handheld Global Positioning System (GPS). Due to the fact that the current commercial aerosol retrieval method (FLAASH) was designed to work reasonably well over land, but is not accurate for water applications, adjustments had to be made to the software to optimize it for reflectance retrieval over water. In-situ reflectance spectra are plotted against spectra derived from the atmospherically corrected images. In this study, we ensure that all the atmospherically corrected spectra match reasonably with the in-situ spectra. We have developed a new algorithm that can effectively estimate the spatial distribution of atmospheric aerosols and retrieve surface reflectance from remotely sensed imagery under general atmospheric and surface conditions. The algorithm was developed based on the aerosol characteristics in the atmosphere. The efficiency of the developed algorithm, in comparison to other forms of algorithm, is investigated in this study. Results indicate that there is a good correlation between the satellite-derived PM 2.5 and the measured PM 2.5. This study shows the potential of using thermal infrared data for air quality mapping. The finding obtained by this study indicates that FLAASH can be used to retrieve air quality information from remotely sensed data.",2008,0, 3961,Improving the Efficiency of Misuse Detection by Means of the q-gram Distance,"Misuse detection-based intrusion detection systems (IDS) perform a search through a database of attack signatures in order to detect whether any of them are present in incoming traffic. For such testing, fault-tolerant distance measures are needed. One of the appropriate distance measures of this kind is constrained edit distance, but the time complexity of its computation is too high. We propose a two-phase indexless search procedure for application in misuse detection-based IDS that makes use of q-gram distance instead of the constrained edit distance. We study how well q-gram distance approximates edit distance with the special constraints needed in IDS applications. We compare the performances of the search procedure with the two distances applied in it. Experimental results show that the procedure with the q-gram distance implemented achieves, for higher values of q, almost the same accuracy as the one with the constrained edit distance implemented, but the efficiency of the procedure that implements the q-gram distance is much better.",2008,0, 3962,DAST: A QoS-Aware Routing Protocol for Wireless Sensor Networks,"In wireless sensor networks (WSNs), a challenging problem is how to advance network QoS.
Energy efficiency, network communication traffic and failure tolerance are important QoS factors that are closely related to the practical performance of WSNs. Hence, a QoS-aware routing protocol called directed alternative spanning tree (DAST) is proposed to balance these three QoS factors. A directed tree-based model is constructed to make data transmission more purposeful and efficient. Based on a Markov model, a communication state prediction mechanism is proposed to choose a reasonable parent, and packet transmission to two parents is performed with an alternative algorithm. To enhance network failure tolerance, routing reconstruction is studied. In simulations, the proposed protocol is evaluated in comparison with existing protocols, from energy efficiency to failure tolerance. DAST is verified to be efficient and available, and it is capable of satisfying the QoS requirements of WSNs.",2008,0, 3963,Design and Analysis of Embedded GPS/DR Vehicle Integrated Navigation System,"The Global Positioning System (GPS) is a positioning system with superior long-term error performance, while a Dead Reckoning (DR) system has good short-term positioning precision; by complementing each other's advantages, a GPS/DR integration provides highly reliable position data for a vehicle navigation system. This paper focuses on the design of an embedded GPS/DR vehicle integrated navigation system using the nonlinear Kalman filtering approach. Gross observation errors in the signals are detected and removed at different resolution levels based on the statistical 3-sigma rule, and navigation data are solved with an Extended Kalman filter in real time; the fault tolerance and precision of the vehicle integrated navigation system are thereby greatly improved.",2008,0, 3964,Tempest: Towards early identification of failure-prone binaries,"Early estimates of failure-proneness can be used to help inform decisions on testing, refactoring, design rework etc. Often such early estimates are based on code metrics like churn and complexity. But such estimates of software quality rarely make their way into a mainstream tool and find industrial deployment. In this paper we discuss the Tempest tool that uses statistical failure-proneness models based on code complexity and churn metrics across the Microsoft Windows code base to identify failure-prone binaries early in the development process. We also present the tool architecture and its usage to date at Microsoft.",2008,0, 3965,"An integrated approach to resource pool management: Policies, efficiency and quality metrics","The consolidation of multiple servers and their workloads aims to minimize the number of servers needed, thereby enabling the efficient use of server and power resources. At the same time, applications participating in consolidation scenarios often have specific quality of service requirements that need to be supported. To evaluate which workloads can be consolidated to which servers, we employ a trace-based approach that determines a near optimal workload placement that provides specific qualities of service. However, the chosen workload placement is based on past demands that may not perfectly predict future demands. To further improve efficiency and application quality of service, we apply the trace-based technique repeatedly, as a workload placement controller.
We integrate the workload placement controller with a reactive controller that observes current behavior to i) migrate workloads off of overloaded servers and ii) free and shut down lightly-loaded servers. To evaluate the effectiveness of the approach, we developed a new host load emulation environment that simulates different management policies in a time effective manner. A case study involving three months of data for 138 SAP applications compares our integrated controller approach with the use of each controller separately. The study considers trade-offs between i) required capacity and power usage, ii) resource access quality of service for CPU and memory resources, and iii) the number of migrations. We consider two typical enterprise environments: blade and server based resource pool infrastructures. The results show that the integrated controller approach outperforms the use of either controller separately for the enterprise application workloads in our study. We show the influence of the blade and server pool infrastructures on the effectiveness of the management policies.",2008,0, 3966,Hot-spot prediction and alleviation in distributed stream processing applications,"Many emerging distributed applications require the real-time processing of large amounts of data that are being updated continuously. Distributed stream processing systems offer a scalable and efficient means of in-network processing of such data streams. However, the large scale and the distributed nature of such systems, as well as the fluctuation of their load render it difficult to ensure that distributed stream processing applications meet their Quality of Service demands. We describe a decentralized framework for proactively predicting and alleviating hot-spots in distributed stream processing applications in real-time. We base our hot-spot prediction techniques on statistical forecasting methods, while for hot-spot alleviation we employ a non-disruptive component migration protocol. The experimental evaluation of our techniques, implemented in our Synergy distributed stream processing middleware over PlanetLab, using a real stream processing application operating on real streaming data, demonstrates high prediction accuracy and substantial performance benefits.",2008,0, 3967,A recurrence-relation-based reward model for performability evaluation of embedded systems,"Embedded systems for closed-loop applications often behave as discrete-time semi-Markov processes (DTSMPs). Performability measures most meaningful to iterative embedded systems, such as accumulated reward, are thus difficult to solve analytically in general. In this paper, we propose a recurrence-relation-based (RRB) reward model to evaluate such measures. A critical element in RRB reward models is the notion of state-entry probability. This notion enables us to utilize the embedded Markov chain in a DTSMP in a novel way. More specifically, we formulate state-entry probabilities, state-occupancy probabilities, and expressions concerning accumulated reward solely in terms of state-entry probability and its companion term, namely the expected accumulated reward at the point of state entry. As a result, recurrence relations abstract away all the intermediate points that lack the memoryless property, enabling a solvable model to be directly built upon the embedded Markov chain. 
To show the usefulness of RRB reward models, we evaluate an embedded system for which we leverage the proposed notion and methods to solve a variety of probabilistic measures analytically.",2008,0, 3968,A new technique for assessing the diversity of close-Pareto-optimal front,"The quality of an approximation set usually includes two aspects: approaching distance and spreading diversity. This paper introduces a new technique for assessing the diversity of an approximation to an exact Pareto-optimal front. This diversity is assessed by using an ""exposure degree"" of the exact Pareto-optimal front against the approximation set. This new technique has three advantages: Firstly, the ""exposure degree"" combines the uniformity and the width of the spread into a direct physical sense. Secondly, it makes the approaching distance independent from the spreading diversity as much as possible. Thirdly, the new technique works well for problems with any number of objectives, while the widely used diversity metric proposed by Deb works poorly in problems with 3 or more objectives. Experimental computational results show that the new technique assesses the diversity well.",2008,0, 3969,A Grammatical Swarm for protein classification,"We present a grammatical swarm (GS) for the optimization of an aggregation operator. This combines the results of several classifiers into a unique score, producing an optimal ranking of the individuals. We apply our method to the identification of new members of a protein family. Support vector machine and naive Bayes classifiers exploit complementary features to compute probability estimates. A great advantage of the GS is that it produces an understandable algorithm revealing the interest of the classifiers. Due to the large volume of candidate sequences, ranking quality is of crucial importance. Consequently, our fitness criterion is based on the area under the ROC curve rather than on classification error rate. We discuss the performance obtained for a particular family, the cytokines, and show that this technique is an efficient means of ranking the protein sequences.",2008,0, 3970,Software quality modeling: The impact of class noise on the random forest classifier,"This study investigates the impact of increasing levels of simulated class noise on software quality classification. Class noise was injected into seven software engineering measurement datasets, and the performance of three learners, random forests, C4.5, and Naive Bayes, was analyzed. The random forest classifier was utilized for this study because of its strong performance relative to well-known and commonly-used classifiers such as C4.5 and Naive Bayes. Further, relatively little prior research in software quality classification has considered the random forest classifier. The experimental factors considered in this study were the level of class noise and the percent of minority instances injected with noise. The empirical results demonstrate that the random forest obtained the best and most consistent classification performance in all experiments.",2008,0, 3971,Module documentation based testing using Grey-Box approach,"Testing plays an important role in assuring the quality of software. Testing is a process of detecting errors that can be highly effective if performed rigorously. The use of formal specifications provides significant opportunity to develop effective testing techniques.
The grey-box testing approach is usually based on knowledge obtained from the specification and the source code, while the design specification is seldom considered. In this paper, we propose an approach for testing a module with internal memory from its formal specification based on the grey-box approach. We use formal specifications that are documented using Parnas’s Module Documentation (MD) method. The MD provides us with information on the external and internal views of a module, which is useful in the grey-box testing approach.",2008,0, 3972,The sliced Gaussian mixture filter for efficient nonlinear estimation,"This paper addresses the efficient state estimation for mixed linear/nonlinear dynamic systems with noisy measurements. Based on a novel density representation - sliced Gaussian mixture density - the decomposition into a (conditionally) linear and nonlinear estimation problem is derived. The systematic approximation procedure minimizing a certain distance measure allows the derivation of (close to) optimal and deterministic estimation results. This leads to high-quality representations of the measurement-conditioned density of the states and, hence, to an overall more efficient estimation process. The performance of the proposed estimator is compared to state-of-the-art estimators, like the well-known marginalized particle filter.",2008,0, 3973,Software quality prediction using Affinity Propagation algorithm,"Software metrics are collected at various phases of the software development process. These metrics contain information about the software and can be used to predict software quality in the early stages of the software life cycle. Intelligent computing techniques such as data mining can be applied in the study of software quality by analyzing software metrics. Clustering analysis, which can be considered as one of the data mining techniques, is adopted to build the software quality prediction models in the early period of software testing. In this paper, a new clustering method called Affinity Propagation is investigated for the analysis of two software metric datasets extracted from real-world software projects. Meanwhile, the K-Means clustering method is also applied for comparison. The numerical experiment results show that the Affinity Propagation algorithm can be applied well in software quality prediction in the very early stage, and it is more effective at reducing Type II errors.",2008,0, 3974,A novel method for DC system grounding fault monitoring on-line and its realization,"On the basis of a comparison and analysis of present grounding fault monitoring methods, such as the AC injection method, the DC leakage method and so on, this paper points out their shortcomings in practical applications. A novel method, named the method of different frequency signals, for detecting grounding faults in DC systems is advanced; it can overcome the adverse influence of the distributed capacitance between ground and the branches. Finally, a new kind of detector based on the proposed method is introduced. The detector, with a C8051F041 as its kernel and adopting the method of different frequency signals, realizes accurate on-line grounding fault monitoring. The principles and the hardware and software design are introduced in detail.
The experimental results and practical operation show that the detector has the advantages of high precision, good anti-interference capability, a high degree of automation, low cost, etc.",2008,0, 3975,Type Highlighting: A Client-Driven Visual Approach for Class Hierarchies Reengineering,"Polymorphism and class hierarchies are key to increasing the extensibility of an object-oriented program but also raise challenges for program comprehension. Despite many advances in understanding and restructuring class hierarchies, there is no direct support to analyze and understand the design decisions that drive their polymorphic usage. In this paper we introduce a metric-based visual approach to capture the extent to which the clients of a hierarchy polymorphically manipulate that hierarchy. A visual pattern vocabulary is also presented in order to facilitate the communication between analysts. Initial evaluation shows that our techniques aid program comprehension by effectively visualizing large quantities of information, and can help detect several design problems.",2008,0, 3976,Performance Prediction Model for Service Oriented Applications,"Software architecture plays a significant role in determining the quality of a software system. It exposes important system properties for consideration and analysis. Performance related properties are frequently of interest in determining the acceptability of a given software design. This paper focuses mainly on developing an architectural model for applications that use service oriented architecture (SOA). This enables predicting the performance of the application even before it is completely developed. The performance characteristics of the components and connectors are modeled using a queuing network model. This approach facilitates the performance prediction of service oriented applications. Further, it also helps in the identification of various bottlenecks. A prototype service oriented application has been implemented and the actual performance is measured. This is compared against the predicted performance in order to analyze the accuracy of the prediction.",2008,0, 3977,"Networks, multicore, and systems evolution - facing the timing beast","Embedded systems rapidly grow in several dimensions. They grow in size, from local isolated networks with simple protocols to network hierarchies to large open heterogeneous networks with complex communication behaviour. They grow in performance, from simple microcontrollers to superscalar to multicore systems with many levels of memory hierarchy. And, they expand in the time dimension by moving from static system functions to open and evolutionary functions that change over time and require new design methods and autonomous system functions. All these developments contribute to an ever-increasing behavioural complexity with equally complex timing. Nevertheless, the fundamental requirements for reliability and performance predictability have remained and have even been strengthened. Embedded system technology has responded with new integration methods and software architectures supported by platform control methods using new service quality metrics, and with composable formal methods that scale with system size. The talk will give an overview of this exciting scientific field and will give practical examples.",2008,0, 3978,A Metrics Based Approach to Evaluate Design of Software Components,"The component based software development approach makes use of already existing software components to build new applications.
Software components may be available in-house or acquired from the global market. One of the most critical activities in this reuse based process is the selection of appropriate components. Component evaluation is the core of the component selection process. Component quality models have been proposed to decide upon a criterion against which candidate components can be evaluated and then compared, but none is complete enough to carry out the evaluation. It is advocated that component users need not bother about the internal details of the components. However, we believe that the complexity of the internal structure of a component can help in estimating the effort related to the evolution of the component. In our ongoing research, we are focusing on the quality of the internal design of a software component and its relationship to the external quality attributes of the component.",2008,0, 3979,Test-Suite Augmentation for Evolving Software,"One activity performed by developers during regression testing is test-suite augmentation, which consists of assessing the adequacy of a test suite after a program is modified and identifying new or modified behaviors that are not adequately exercised by the existing test suite and, thus, require additional test cases. In previous work, we proposed MATRIX, a technique for test-suite augmentation based on dependence analysis and partial symbolic execution. In this paper, we present the next step of our work, where we (1) improve the effectiveness of our technique by identifying all relevant change-propagation paths, (2) extend the technique to handle multiple and more complex changes, (3) introduce the first tool that fully implements the technique, and (4) present an empirical evaluation performed on real software. Our results show that our technique is practical and more effective than existing test-suite augmentation approaches in identifying test cases with high fault-detection capabilities.",2008,0, 3980,An emergency Earthquake warning system for land mobile vehicles using the earthquake early warning,"This paper describes a study of an emergency earthquake warning system for land mobile vehicles using the Earthquake early warning in Japan. First, the estimation accuracy of the seismic wave is evaluated in a local area when using both the position information and the soil information database in addition to the Earthquake early warning, and it is shown that using such additional information is effective for indicating the earthquake warning to each vehicle. Second, an emergency warning system for land mobile vehicles is proposed, and the distribution system for the warning messages in the proposed system is investigated. The results show that Ground digital broadcasting is effective for distributing the warning messages to vehicles because of its high quality and mobile usability, and that Dedicated short range communication can also provide reliable distribution. Last, experimental automotive software for the proposed system is developed, and the software is verified using warnings.",2008,0, 3981,Intelligent Java Analyzer,"This paper presents a software metric working prototype to evaluate Java programmers' profiles. In order to automatically detect source code patterns, a Multi Layer Perceptron neural network is applied. Features determined from such patterns constitute the basis for the system's programmer profiling.
Results presented here show that the proposed prototype is a reliable approach for supporting the software quality assurance process.",2008,0, 3982,Software development for determination of power quality parameters,"The Colombian Energy and Gas Regulating Commission (CREG), through resolution CREG 024 of 2005, requires monitoring of some power quality disturbances in substations of distribution systems; the main objective of this resolution is to determine the electrical power quality supplied by the utility operators to customers by means of measurement analysis. In this paper the authors propose a computational tool for the management and classification of the power quality measurements required by the regulatory entity. This tool performs power quality analysis in substations and shows information about power quality conditions for subsequent analysis.",2008,0, 3983,Differential protection of three-phase transformers using Wavelet Transforms,"This paper proposes a novel formulation for differential protection of three-phase transformers using Wavelet Transforms (WTs). The new proposed methodology implements the WTs to extract predominant transient signals originated by transformer internal faults and captured from the current transformers. The Wavelet Transform is an efficient signal processing tool used to study non-stationary signals with fast transitions (high-frequency components), mapping the signal into a time-frequency representation. The three-phase differential currents are the input signals used on-line to detect internal faults. The performance of this algorithm is demonstrated through simulation of different internal faults and switching conditions on a power transformer using ATP/EMTP software. The analyzed data are obtained from simulation of different normal and faulty operating conditions such as internal faults (phase-phase, phase-ground), magnetizing inrush and external faults. The case study shows that the new algorithm is highly accurate and effective.",2008,0, 3984,Automatic generation of supervision system based on bond graph tool,"The paper deals with an integrated approach for supervision system design based on the mechatronic properties of the bond graph tool. The methodology is implemented in specific software which automatically creates complex process dynamic models from a simple graphical interface, where system components can be dragged from a component database and interconnected so as to produce the overall system, following the Piping and Instrumentation Diagram. Once the model has been created, the software checks its consistency and performs its structural analysis in order to automatically determine the diagnosis algorithms which should be implemented, and their fault detectability and isolability performances. The friendly graphical user interface allows testing of several sensor configurations in order to optimise the diagnostic possibilities. The obtained model can also be used for the simulation of the process and its diagnosis algorithms in normal and faulty situations.",2008,0, 3985,Web-based evaluation of Parkinson's Disease subjects: Objective performance capacity measurements and subjective characterization profiles,"Parkinson's Disease (PD) is classified as a progressively degenerative movement disorder, affecting approximately 0.2% of the population and resulting in decreased performance in a wide variety of activities of daily living.
Motivated by needs associated with the conduct of multi-center clinical trials, early detection, and the optimization of routine management of individuals with PD, we have developed a three-tiered approach to evaluation of PD and other neurologic diseases/disorders. One tier is characterized as ""web-based evaluation"", consisting of objective performance capacity tests and subjective questionnaires that target history and symptom evaluation. Here, we present the initial evaluation of three representative, self-administered, objective, web-based performance capacity tests (simple visual-hand response speed, rapid alternating movement quality, and upper extremity neuromotor channel capacity). Twenty-one subjects (13 with PD, 8 without neurologic disease) were evaluated. Generally good agreement was obtained with lab-based tests executed with an experienced test administrator. We conclude that objective performance capacity testing is a feasible component of a web-based evaluation for PD, providing a sufficient level of fidelity to be useful.",2008,0, 3986,Ultrasound-based coordinate measuring system for estimating cervical dysfunction during functional movement,"Cervical range of motion (ROM) is a part of the dynamic component of spine evaluation and can be used as an indication of dysfunction in anatomical structures as well as a diagnostic aid in patients with neck pain. Studies indicate that movement coordination of axial segments, such as the head in a dynamic state, is disrupted in pathologic conditions. In recent years, a number of non-invasive instruments with varying degrees of accuracy and repeatability have been utilized to measure active or passive range of motion in asymptomatic adults. The aim of this investigation is to design and implement a new method, using an evidence-based approach, for estimating the level of defect in segment stability and the improvement after treatment by measuring the quality or quantity of movement among cervical segments. Transmitter sensors mounted on the body periodically send an ultrasonic burst signal, and from the delay time it takes for this burst to reach three other sensors arranged on a T-shaped mechanical base, the three-dimensional position of the transmitter can be calculated. After the 3D coordinate data are sent to a PC via a USB port, complex and elaborate Visual Basic software calculates the angular dispersion and acceleration for each segment separately. This software also calculates the stabilization parameters such as the anchoring index (AI) and the cross-correlation function (CCF) between head and trunk.",2008,0, 3987,Coal Management Module (CMM) for power plant,"Coal management in a power plant is highly significant and is one of the most critical areas in view of plant operation as well as cost involvement, so it forms an important part of the management process in a power plant. It deals with the management of commercial, operational and administrative functions pertaining to estimating coal requirements, selection of coal suppliers, coal quality checks, transportation and coal handling, payment for coal received, consumption and calculation of coal efficiency. The results are then used for cost-benefit analysis to suggest further plant improvement. At various levels, management information reports need to be extracted to communicate the required information across various levels of management.
The core processes of coal management involve a huge amount of paperwork and manual labour, which makes them tedious, time-consuming and prone to human error. Moreover, the time taken at each stage as well as the transparency of the relevant information has a direct bearing on the economics and efficient operation of the power plant. Both system performance and information transparency can be enhanced by the introduction of Information Technology in managing this area. This paper reports on the design & development of Coal Management Module (CMM) Software, which aims at systematic functioning of the Core Business Processes of Coal Management of a typical coal-fired power plant.",2008,0, 3988,An Early Reliability Assessment Model for Data-Flow Software Architectures,"Software architectures have emerged as a promising approach for managing, analyzing, building, integrating, reusing, and improving the quality of software systems. Specifically, early design decisions can be improved by the analysis of architectural models for different properties. This paper addresses the problem of estimating the reliability of data-flow architectures before the construction of the system. The proposed model uses an operational profile of the system and a set of component test profiles. A test profile is a set of test cases extended with information about the software intra-component execution. The analysis of the system is performed by composing the test points along the virtual execution among the components. This strategy avoids the determination of intermediate operational profiles. In addition, metrics to select the best match in the execution trace and to evaluate the selection error in such a match are described.",2008,0, 3989,A Study of Analogy Based Sampling for interval based cost estimation for software project management,"Software cost estimation is one of the most challenging activities in software project management. Since software cost estimation affects almost all activities of software project development, such as bidding, planning, and budgeting, accurate estimation is crucial to the success of software project management. However, due to the inherent uncertainties in the estimation process and other factors, accurate estimates are often obtained with great difficulty. Therefore, it is safer to generate interval based estimates with a certain probability over them. In the literature, many approaches have been proposed for interval estimation. In this study, we propose a novel method, namely Analogy Based Sampling (ABS), and compare ABS against the well-established Bootstrapped Analogy Based Estimation (BABE), which is the only existing variant of the analogy based method with the capability to generate interval predictions. The results and comparisons show that ABS could improve the performance of BABE with much higher efficiency and more accurate interval predictions.",2008,0, 3990,Software estimation tool based on three -layer model for software engineering metrics,"In today's competitive world of software, cost estimation of software plays a major role because it determines the effort, time, quality, etc. Lines of code and function points are traditional methods for estimating various software metrics. But in today's world of object-oriented programming, function point calculation for C++ and Java is a big challenge.
We have developed a tool based on function point analysis which has a three-layer architecture model to calculate various software metrics for a project, especially for Java programs. This tool has been used for estimating various projects developed by students. Finally, as a result, we have shown the attributes of the three-layer model, i.e. FP, LOC, productivity and time, etc.",2008,0, 3991,A study of unavailability of software systems,"System availability is a critical measure of a complex system during the operational phase. It is especially important for safety-critical systems. While in the last few decades many technologies have been developed for enhancing hardware reliability and availability and there is plenty of literature in this field, little has been done in the aspect of software availability. The objective of this paper is to investigate some simple models for the estimation of software unavailability and its impact on systems considering safety, risk and cost. In this paper, a stochastic framework to model software availability is proposed. We consider software availability as the percentage of time the software is operational, and both the operation and repair processes are studied under this framework. The unavailability is the percentage of downtime. The dynamic behaviours of software failure and maintenance are studied and the effects of scheduled service and break-down time on software unavailability are investigated. A numerical example is provided to show how software availability can be estimated.",2008,0, 3992,Effect of color pre-processing on color-based object detection,"Color-based object detection is receiving much interest recently for the potential applications in the traffic and security fields. While color information is precious to facilitate and accelerate the object detection process, the spectral sensitivity functions of the color camera sensor have a strong effect on the quality of image colors and their appearance in the Hue-Saturation (H-S) histogram of the object surface color. In this work we study the effect of color preprocessing by using a linear spectral sharpening transform on the quality of image colors and their detection quality. It is shown that color-based object detection is severely impaired due to the spectral overlap of camera filters. We show the performance of a new spectral sharpening method, running on real images. The image color preprocessing resulted in a significant reduction of color correlation and hence clearer image colors. The transformed colors allow for fine definition of color zones in the H-S diagram and consequently enhance the detection process.",2008,0, 3993,Statistical Analysis of Bayes Optimal Subset Ranking,"The ranking problem has become increasingly important in modern applications of statistical methods in automated decision making systems. In particular, we consider a formulation of the statistical ranking problem which we call subset ranking, and focus on the discounted cumulated gain (DCG) criterion that measures the quality of items near the top of the rank-list. Similar to error minimization for binary classification, direct optimization of natural ranking criteria such as DCG leads to nonconvex optimization problems that can be NP-hard. Therefore, a computationally more tractable approach is needed. We present bounds that relate the approximate optimization of DCG to the approximate minimization of certain regression errors. 
These bounds justify the use of convex learning formulations for solving the subset ranking problem. The resulting estimation methods are not conventional, in that we focus on the estimation quality in the top-portion of the rank-list. We further investigate the asymptotic statistical behavior of these formulations. Under appropriate conditions, the consistency of the estimation schemes with respect to the DCG metric can be derived.",2008,0, 3994,The π measure,"A novel software measure, called the pi measure, used for evaluating the fault-detection effectiveness of test sets, for measuring test-case independence and for measuring code complexity is proposed. The pi measure is interpreted as a degree of run-time control and data difference at the code level, resulting from executing a program on a set of test cases. Unlike other well-known static and dynamic complexity measures, the pi measure is an execution measure, computed using only run-time information. The Diversity Analyzer computes the pi measure for programs written in C, C++, C# and VB in .NET. The experimental data presented here show a correlation between the pi measure, test case independence and fault-detection rates.",2008,0, 3995,Software reliability allocation of digital relay for transmission line protection using a combined system hierarchy and fault tree approach,"Digital relay is a special purpose signal processing unit in which the samples of physical parameters such as current, voltage and other quantities are taken. With the proliferation of computer technology in terms of computational ability as well as reliability, computers are being used for such digital signal processing purposes. As far as computer hardware is concerned, it has been growing steadily in terms of power and reliability. Since power plant technology is now globally switching over to such computer-based relaying, software reliability naturally emerges as an area of prime importance. Recently, some computer-based digital relay algorithms have been proposed based on frequency-domain analysis using wavelet-neuro-fuzzy techniques for transmission line faults. A software reliability allocation scheme is devised for the performance evaluation of a multi-functional, multi-user digital relay that does detection, classification and location of transmission line faults.",2008,0, 3996,A Rough Set Model for Software Defect Prediction,High assurance software requires extensive and expensive assessment. Many software organizations frequently do not allocate enough resources for software quality. We research the defect detectors focusing on the data sets of software defect prediction. A rough set model is presented to deal with the attributes of data sets of software defect prediction in this paper. Applying this model to the most famous public domain data set created by the NASA's metrics data program shows its splendid performance.,2008,0, 3997,Research and Design of Multifunctional Intelligent Melted Iron Analyzer,"A multifunctional intelligent melted iron analyzer is researched and designed. This device combines the functions of thermal analysis for quality analysis of melted iron and ultrasonic measuring for nodularity. By thermal analysis, a linear regression equation was used and the percentage composition of carbon, silicon and phosphorus etc. were obtained. In addition, thermal analysis was used to predict gray iron inoculation result, structure and performance. 
Therefore, ultrasonic measurement was applied in this study to survey the nodularity of spheroidal graphite iron. In order to protect the analyzer from the boiling melted iron, wireless temperature collection was adopted. The system is composed of a PC104, a single-chip SPCE061 and other peripheral circuits. The temperature collection and ultrasonic measuring modules were designed, and the data analysis and management software is programmed in VB6.0 based on the PC104 and the Windows operating system. After running for more than one year, the analyzer performs very well in the foundry.",2008,0, 3998,Molecular imaging of the myoskeletal system through Diffusion Weighted and Diffusion Tensor Imaging with parallel imaging techniques,"Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are useful tools when used in combination with standard imaging methods that may offer a significant advantage in certain clinical applications such as orthopedics and myoskeletal tissue imaging. Incorporation of these tools in clinical practice is limited due to the considerable amount of user intervention that apparent diffusion coefficient (ADC) and anisotropy data require in terms of processing and quantification, and due to the importance of acquisition parameter optimization in image quality. In this work various acquisition parameters and their effects in DWI and DTI are investigated. To assess the quality of these techniques, a series of experiments were conducted using a phantom. The application of lipid suppression techniques and their compatibility with other parameters were also investigated. Artifacts were provoked to study the effects in imaging quality. All the data were processed with specialized software to analyze various aspects of the measurements and quantify various parameters such as signal to noise ratio (SNR), contrast to noise ratio (CNR), and the accuracy of ADC and fractional anisotropy values. The experience acquired from the experiments was applied in acquisition parameter optimization and improvement of clinical applications for the rapid screening and differential diagnosis of myoskeletal pathologies.",2008,0, 3999,Assessment and optimization of TEA-PRESS sequences in 1H MRS and MRSI of the breast,"Magnetic Resonance Spectroscopy (MRS) and Magnetic Resonance Spectroscopic Imaging (MRSI) are useful tools when used in combination with standard imaging methods that may offer a significant advantage in certain clinical applications such as cancer localization and staging. Incorporation of these tools in clinical practice is, however, limited due to the considerable amount of user intervention that spectrum processing and quantification requires and due to the importance of acquisition parameter optimization in spectrum quality. In this work various acquisition parameters and their effects in spectrum quality are investigated. In order to assess the quality of various spectroscopic techniques, a series of experiments were conducted using a standard solution. The application of water and fat suppression techniques and their compatibility with other parameters were also investigated. A number of artifacts were provoked to study the effects in spectrum quality. The stability of the equipment, the appearance of errors and artifacts and the reproducibility of the results were also examined to obtain useful conclusions for the interaction of acquisition parameters. 
All the data were processed with specialized computer software (jMRUI 2.2, FUNCTOOL) to analyze various aspects of the measurements and quantify various parameters such as signal to noise ratio (SNR), full width at half maximum (FWHM), peak height and j-modulation. The experience acquired from the conducted experiments was successfully applied in acquisition parameter optimization and improvement of clinical applications for the biochemical analysis of breast lesions by significantly improving the spectrum quality, SNR and spatial resolution.",2008,0, 4000,Fault-tolerant event region detection in wireless sensor networks using statistical hypothesis test,"Most existing algorithms for fault-tolerant event region detection only assume that events are spatially correlated, but we argue that events are usually both spatially and temporally correlated. By examining the temporal correlation of sensor measurements, we propose a detection algorithm by applying statistical hypothesis test (SHT). SHT-based algorithm is more accurate in detecting event regions, and is more energy efficient since it avoids measurement exchanges. To improve the capability of fault recognition, we extend SHT-based algorithm by examining both spatial and temporal correlations of sensor measurements. The extended SHT-based algorithm can recognize almost all faults when sensor network is densely deployed.",2008,0, 4001,Multiresolution sensor fusion approach to PCB fault detection and isolation,"This paper describes a novel approach to printed circuit board (PCB) testing that fuses the products of individual, non-traditional sensors to draw conclusions regarding overall PCB health and performance. This approach supplements existing parametric test capabilities with the inclusion of sensors for electromagnetic emissions, laser Doppler vibrometry, off-gassing and material parameters, and X-ray and Terahertz spectral images of the PCB. This approach lends itself to the detection and prediction of entire classes of anomalies, degraded performance, and failures that are not detectable using current automatic test equipment (ATE) or other test devices performing end-to-end diagnostic testing of individual signal parameters. This greater performance comes with a smaller price tag in terms of non-recurring development and recurring maintenance costs over currently existing test program sets. The complexities of interfacing diverse and unique sensor technologies with the PCB are discussed from both the hardware and software perspective. Issues pertaining to creating a whole-PCB interface, not just at the card-edge connectors, are addressed. In addition, we discuss methods of integrating and interpreting the unique software inputs obtained from the various sensors to determine the existence of anomalies that may be indicative of existing or pending failures within the PCB. Indications of how these new sensor technologies may comprise future test systems, as well as their retrofit into existing test systems, will also be provided.",2008,0, 4002,Regression analysis of automated measurment systems,"Automated measurement systems are dependent upon successfully application of multiple integrated systems to perform measurement analysis on various units-under-tests (UUT)s. Proper testing, fault isolation and detection of a UUT are contingent upon accurate measurements of the automated measurement system. 
This paper extends a previous presentation from 2007 AUTOTESTCON on the applicability of measurement system analysis for automated measurement systems. The motivation for this research was to reduce the risk of transportability issues from legacy measurement systems to emerging systems. Improving regression testing by utilizing parametric metadata for large-scale automated measurement systems, over existing regression testing techniques, provides engineers, developers and management increased confidence that mission performance is not compromised. The utilization of existing statistical software tools such as Minitab provides the necessary statistical techniques to evaluate the measurement capability of automated measurement systems. Measurement system analysis is applied to assess the measurement variability between the US Navy's two prime automated test systems: the Consolidated Automated Support System (CASS) and the Reconfigurable-Transportable Consolidated Automated Support System (RTCASS). Measurement system analysis shall include capability analysis between one selected CASS and RTCASS instrument to validate measurement process capability; a general linear model to assess variability between stations; multivariate analysis to analyze measurement variability of UUTs between measurement systems; and gage repeatability and reproducibility analysis to isolate sources of variability at the UUT testing level.",2008,0, 4003,Flexible Advance Reservation Impact on Backfilling Scheduling Strategies,"The advance reservation mechanism was introduced in grid environments to provide time quality of service requirements for time critical applications. There are also applications that need resource coordination, namely co-allocation and workflow, which benefit from this capability. The contemporary trend in grid computing is toward flexibility, because of its advantages in advance reservation such as increasing the accept rate. This mechanism should be supported by the local scheduler, which is responsible for normal jobs. Using backfilling methods with FCFS priority for scheduling normal jobs is the dominant approach. Therefore a thorough investigation of the performance impact of flexible Advance Reservation on backfilling scheduling is essential. In this paper the incorporation of flexible Advance Reservation in major backfilling policies is investigated, regarding the impact of workload parameters on well known performance metrics. The impact of increasing inaccuracy of user estimated job runtime is also studied. Our experimental results indicate that the aggressive backfilling algorithm is biased toward AR-related performance metrics but the conservative method has better job performance. Although more flexibility in advance reservation would improve AR performance, its impact on the degradation of job performance metrics is noteworthy. Both parameter types' rates are less in the conservative method. Moreover, increasing inaccuracy interestingly has an opposing effect on backfilling methods.",2008,0, 4004,ADOM: An Adaptive Objective Migration Strategy for Grid Platform,"Object migration is the movement of objects from one machine to another during execution. It can be used to enhance the efficiency and the reliability of grid systems, such as to balance load distribution, to enable fault resilience, to improve system administration, and to minimize communication overhead. Most existing schemes apply fixed object migration strategies, which are unadaptable to changing requirements of applications. 
In this paper, we address the issue of object migration for large-scale grid systems with multiple object levels. First, we devise a probabilistic object tree model and formulate the object migration problem as an optimization problem. Then we propose an adaptive object migration algorithm called ADOM to solve the problem. The ADOM algorithm applies the breadth-first search scheme to traverse the object tree and migrates objects adaptively according to their access probability. Finally we evaluate the performance of different object migration algorithms in our grid platform, which shows that the ADOM algorithm outperforms other algorithms under large object tree sizes.",2008,0, 4005,A Dynamic Fault Tolerant Algorithm Based on Active Replication,"For wide-area-network-oriented distributed computing such as Web services, a slow service is equivalent to an unavailable service. This places demands on replication algorithms to effectively improve performance without damaging availability. Aiming at improving the performance of the active replication algorithm, we propose a new replication algorithm named AAR (adaptive active replication). Its basic idea is: all replicas receive requests, but only the fastest one returns the response. Its main advantages are: (1) The response is returned by the fastest replica; (2) The algorithm is based on the active replication algorithm, but it avoids the redundant nested invocation problem. We prove the advantages by analysis and experiments.",2008,0, 4006,Bridging Security and Fault Management within Distributed Workflow Management Systems,"As opposed to centralized workflow management systems, the distributed execution of workflows can not rely on a trusted centralized point of coordination. As a result, basic security features including compliance of the overall sequence of workflow operations with the pre-defined workflow execution plan or traceability become critical issues that are yet to be addressed. Besides, the detection of security inconsistencies during the execution of a workflow usually implies the complete failure of the workflow although it may be possible in some situations to recover from the latter. In this paper, we present security solutions supporting the secure execution of distributed workflows. These mechanisms capitalize on onion encryption techniques and security policy models to assure the integrity of the distributed execution of workflows, to prevent business partners from being involved in a workflow instance forged by a malicious peer and to provide business partners identity traceability for sensitive workflow instances. Moreover, we specify how these security mechanisms can be combined with a transactional coordination framework to recover from faults that may be caught during their execution. The defined solutions can easily be integrated into distributed workflow management systems as our design is strongly coupled with the runtime specification of decentralized workflows.",2008,0, 4007,Distributed and scalable control plane for next generation routers: A case study of OSPF,"The growing traffic on the core Internet entails new requirements related to scalability and resiliency of the routers. One of the promising trends of router evolution is to build next generation routers with enhanced memory capacity and computing resources, distributed across a very high speed switching fabric. 
The main limitation of the current routing and signaling software modules, traditionally designed in a centralized manner, is that they do not scale in order to fully exploit such an advanced distributed hardware architecture. This paper discusses an implementation for an OSPF architecture for next generation routers, aiming at increasing the scalability and resiliency. The proposed architecture distributes the OSPF processing functions on router cards, i.e., on both control and line cards. Therefore, it reduces the bottlenecks and improves both the overall performance and the resiliency in the presence of faults. Scalability is estimated with respect to the CPU utilization and memory requirements.",2008,0, 4008,Accurate real-time monitoring of bottlenecks and performance of packet trace collection,Collection of packet traces for future analysis is a very meticulous work that must guarantee accurate traces in order for these traces to be valuable for analysis. Current platforms do not provide a means to measure this accuracy. This paper describes a real-time monitoring method to measure the quality of a collected trace. The method takes a system architecture approach monitoring different points of the system to account for all potential drops of the packet journey. A set of metadata is stored in metatraces to be analyzed together with the trace after the capturing. The primary information is taken from standard Ethernet counters which are available in all commodity hardware and therefore performs very well without expensive specific hardware. The paper presents the evaluation of the real-time monitoring method concluding that the processing overhead does not produce significant performance degradation and that it improves packet loss detection up to orders of magnitude depending on different scenarios.,2008,0, 4009,Perceptual feature based music classification - A DSP perspective for a new type of application,"Today, more and more computational power is available not only in desktop computers but also in portable devices such as smart phones or PDAs. At the same time the availability of huge non-volatile storage capacities (flash memory etc.) suggests to maintain huge music databases even in mobile devices. Automated music classification promises to allow keeping a much better overview on huge data bases for the user. Such a classification enables the user to sort the available huge music archives according to different genres which can be either predefined or user defined. It is typically based on a set of perceptual features which are extracted from the music data. Feature extraction and subsequent music classification are very computational intensive tasks. Today, a variety of music features and possible classification algorithms optimized for various application scenarios and achieving different classification qualities are under discussion. In this paper results concerning the computational needs and the achievable classification rates on different processor architectures are presented. The inspected processors include a general purpose P IV dual core processor, heterogeneous digital signal processor architectures like a Nomadik STn8810 (featuring a smart audio accelerator, SAA) as well as an OMAP2420. In order to increase classification performance, different forms of feature selection strategies (heuristic selection, full search and Mann-Whitney-Test) are applied. 
Furthermore, the potential of a hardware-based acceleration for this class of application is inspected by performing a fine as well as a coarse grain instruction tree analysis. Instruction trees are identified, which could be attractively implemented as custom instructions speeding up this class of applications.",2008,0, 4010,Region based image appeal metric for consumer photos,"Image appeal may be defined as the interest that a photograph generates when viewed by human observers, incorporating subjective factors on top of the traditional objective quality measures. User studies were conducted in order to identify the right features to use in an image appeal measure; these studies also revealed that a photograph may be appealing even if only a region/area of the photograph is actually appealing. Extensive experimentation helped identify a good set of low level features, which were combined into a set of image appeal metrics. The appealing region detection and the image appeal metrics were validated with experiments conducted on 2000 ground truth images, each graded by three human observers. Two main image appeal metrics are presented: one that ranks the images based on image appeal with a high correlation with the human observations; and a second one which performs better at retrieving highly appealing images from an image collection.",2008,0, 4011,Design of a Pareto-optimization environment and its application to motion estimation,"The characteristics of modern video signal processing algorithms are significantly influenced by a multitude of different configuration parameters. Hence, the selection of an optimum parameter set becomes a difficult task, since often various quality metrics have to be regarded, leading to a so-called multi-objective optimization problem. Furthermore, the high computational effort typically restricts the maximum possible number of simulated parameter configurations. Therefore, a flexible environment for multi-objective optimization of configuration parameters has been elaborated which uses evolutionary algorithms to efficiently explore the design-space. This environment is applied here for a detailed analysis and Pareto-optimization of a complex state-of-the-art motion estimation algorithm being used for frame rate conversion.",2008,0, 4012,Novelty Detection in Time Series Through Self-Organizing Networks: An Empirical Evaluation of Two Different Paradigms,"This paper addresses the issue of novelty or anomaly detection in time series data. The problem may be interpreted as a spatio-temporal classification procedure where current time series observation is labeled as normal or novel/abnormal according to a decision rule. In this work, the construction of the decision rules is formulated by means of two different self-organizing neural network (SONN) paradigms: one builds decision thresholds from quantization errors and the other one from prediction errors. Simulations with synthetic and real-world data show the feasibility of the two approaches.",2008,0, 4013,Pattern Recognition by DTW and Series Data Mining in 3D Stratum Modelling and 3D Visualization,"Three-dimensional (3D) modeling and visualization of strata plays an important role in seismic active fault detection, and in geoinformation science in general. Well-logging data of strata are taken as time series. A similarity measure for subsequence search is proposed based on dynamic time warping (DTW), realizing time series matching between series of different lengths. 
The frequent pattern mining experiment is carried out on stratum survey data by multivariable combination analysis. Using these frequent patterns, we supply the stratum geophysics attributes by the depth (time) record sequence at locations where no drill-hole survey data exist; combining the structural frame with mathematical geology interpolation technology, we establish a 3D geology model of the target area, and develop 3D visualization software for underground geologic bodies based on Visual Studio .NET and OpenGL graphics packages to realize a 3D visualization system.",2008,0, 4014,Asynchronous Distributed Parallel Gene Expression Programming Based on Estimation of Distribution Algorithm,"In order to reduce the computation time and improve the quality of solutions of Gene Expression Programming (GEP), a new asynchronous distributed parallel gene expression programming based on Estimation of Distribution Algorithm (EDA) is proposed. The idea of introducing EDA into GEP is to accelerate the convergence speed. Moreover, the improved GEP is implemented by an asynchronous distributed parallel method based on the island parallel model on a message passing interface (MPI) environment. Some experiments are done on a distributed network connected by twenty computers. The best results of the sequential and parallel algorithms are compared, and the speedup and the performance influence of some important parallel control parameters on this parallel algorithm are discussed. The experimental results show that it may approach linear speedup and has better ability to find optimal solutions and higher stability than the sequential algorithm.",2008,0, 4015,DRESREM 2: An Analysis System for Multi-document Software Review Using Reviewers' Eye Movements,"To build high-reliability software in software development, software review is essential. Typically, software review requires documents from multiple phases such as requirements specification, design document and source code to reveal the inconsistencies among them and to ensure the traceability of deliverables. However, most previous studies on software review (reading) techniques focus on finding defects in a single document in their experiments. In this paper, we propose a multi-document review evaluation system, DRESREM2. This system records reviewers' eye movements and mouse/keyboard operations for analysis. We conducted eye gaze analysis of reviewers in design document review with multiple documents (including requirements specification, design document, etc.) to confirm the usefulness of the system. For the performance analysis, we recorded defect detection ratio, detection time per defect, and fixation ratio of eye movements on each document. As a result, reviewers who concentrated their eye movements on requirements specification found more defects in the design document. We believe this result is good evidence to encourage developers to read high-level documents when reviewing low-level documents.",2008,0, 4016,A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data,"There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of models' assumptions and complexity of models, the practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. 
The motivation of using a GP approach is its ability to evolve a model based entirely on prior data without the need of making underlying assumptions. The results show the strengths of using GP for predicting fault count data.",2008,0, 4017,Hardware PSO for sensor network applications,"This paper addresses the problem of emission source localization in an environment monitored by a distributed wireless sensor network. Typical application scenarios of interest include emergency response and military surveillance. A nonlinear least squares method is employed to model the problem of estimation of the emission source location and the intensity at the source. A particle swarm optimization (PSO) approach to solve this problem produces solution qualities that compete well with other best known traditional approaches. Moreover, the PSO solution achieves the best runtime performance compared to the other methods investigated. However, when it is targeted onto low-capacity embedded processors, PSO itself suffers from poor execution performance. To address this problem, a direct, flexible and efficient hardware implementation of the PSO algorithm is developed, resulting in tremendous speedup over software solutions on embedded processors.",2008,0, 4018,Automatic evaluation of flickering sensitivity of fluorescent lamps caused by interharmonic voltages,"Recent studies have shown that fluorescent lamps are also prone to light flickering due to the increasing level of inter-harmonics in power systems. This possibility and the levels of problematic inter-harmonics were confirmed through laboratory tests. These tests were manually conducted, and therefore were laborious and time consuming. It would be convenient to automate the testing by using the advanced features of a data acquisition system to save time and manpower. This paper describes the development of an automated flicker measurement and testing system. It consists of a lighting booth, a programmable AC source, a photo-sensing circuit, a data acquisition device and automated flicker measurement and testing software. It is developed in accordance with the IEC 61000-4-15 standard. Tests were later conducted on various types of compact fluorescent lamps, confirming their sensitivity to interharmonic voltages.",2008,0, 4019,Real time “system-in-the-loop” simulation of tactical networks,"Military mission success probability is closely related to careful planning of communication infrastructures for Command and Control Information Systems (C2IS). Over recent years, the designers of tactical networks have realized more and more need for using simulation tools in the process of designing networks with optimal performances, in regard to terrain conditions. One of the most demanding simulation problems is the modeling of protocols and devices, especially on the application layer, because the credibility of simulation results mainly depends on the quality of modeling. A new branch of communications network simulations has appeared for resolving these kinds of problems - simulations with real communication devices in the simulation loop. Such simulations introduce real time into the simulation process. The results of our research are aimed at simulation methodology. Using this system, military command personnel can perform realistic training on the real C2IS equipment connected to the simulation tool, by modeled wireless links over a virtual terrain. 
In our research work, we used the OPNET Modeler simulation tool, with additional modules.",2008,0, 4020,Usage of Weibull and other models for software faults prediction in AXE,There are several families of software quality prediction techniques in development projects. All of them can be classified in several subfamilies. Each of these techniques has its own distinctive feature and it may not give correct prediction of quality for a scenario different from the one for which the technique was designed. All these techniques for software quality prediction are dispersed. One of them is the statistical and probabilistic technique. The paper deals with software quality prediction techniques in development projects. Four different models based on the statistical and probabilistic approach are presented and evaluated for the prediction of software faults in very large development projects.,2008,0, 4021,Predicting Fault Proneness of Classes Trough a Multiobjective Particle Swarm Optimization Algorithm,"Software testing is a fundamental software engineering activity for quality assurance that is also traditionally very expensive. To reduce the efforts of testing strategies, some design metrics have been used to predict the fault-proneness of a software class or module. Recent works have explored the use of machine learning (ML) techniques for fault prediction. However, most used ML techniques cannot deal with unbalanced data and their results usually have a difficult interpretation. Because of this, this paper introduces a multi-objective particle swarm optimization (MOPSO) algorithm for fault prediction. It allows the creation of classifiers composed of rules with specific properties by exploring Pareto dominance concepts. These rules are more intuitive and easier to understand because they can be interpreted independently of each other. Furthermore, an experiment using the approach is presented and the results are compared to the other techniques explored in the area.",2008,0, 4022,Automatically Determining Compatibility of Evolving Services,"A major advantage of Service-Oriented Architectures (SOA) is composition and coordination of loosely coupled services. Because the development lifecycles of services and clients are decoupled, multiple service versions have to be maintained to continue supporting older clients. Typically versions are managed within the SOA by updating service descriptions using conventions on version numbers and namespaces. In all cases, the compatibility among service descriptions must be evaluated, which can be hard, error-prone and costly if performed manually, particularly for complex descriptions. In this paper, we describe a method to automatically determine when two service descriptions are backward compatible. We then describe a case study to illustrate how we leveraged version compatibility information in a SOA environment and present initial performance overheads of doing so. 
By automatically exploring compatibility information, a) service developers can assess the impact of proposed changes; b) proper versioning requirements can be put in client implementations guaranteeing that incompatibilities will not occur during run-time; and c) messages exchanged in the SOA can be validated to ensure that only expected messages or compatible ones are exchanged.",2008,0, 4023,Invited Talk: The Role of Empiricism in Improving the Reliability of Future Software,"This talk first bemoans the general absence of empiricism in the evolution of software system building and then goes on to show the results of some experiments in attempting to understand how defects appear in software, what factors affect their appearance and their relationship to testing generally. It challenges a few cherished beliefs along the way and demonstrates in no particular order at least the following: 1) the equilibrium state of a software system appears to conserve defects; 2) there is strong evidence in quasi-equilibrated systems for x log x growth in defects, where x is a measure of the lines of code; 3) component sizes in OO and non-OO software systems appear to be scale-free (this is intimately related to the first two bullet points); 4) software measurements (also known rather inaccurately as metrics) are effectively useless in determining the defect behaviour of a software system; 5) most such measurements (including the ubiquitous cyclomatic complexity) are almost as highly correlated with lines of code as the relationship between temperature in degrees Fahrenheit and degrees Centigrade measured with a slightly noisy thermometer. In other words, lines of code are just about as good as anything else when estimating defects; 6) 'gotos considered irrelevant'. The goto statement has no obvious relationship with defects even when studied over very long periods. It probably never did; 7) checklists in code inspections appear to make no significant difference to the efficiency of the inspection; and 8) when you find a defect, there is an increasing probability of finding another in the same component. This strategy is effective up to a surprisingly large number of defects in youthful systems but not at all in elderly systems.",2008,0, 4024,Exploring the Relationship of a File's History and Its Fault-Proneness: An Empirical Study,"Knowing which particular characteristics of software are indicators for defects is very valuable for testers in order to allocate testing resources appropriately. In this paper, we present the results of an empirical study exploring the relationship between history characteristics of files and their defect count. We analyzed nine open source Java projects across different versions in order to answer the following questions: 1) Do past defects correlate with a file's current defect count? 2) Do late changes correlate with a file's defect count? 3) Is the file's age a good indicator for its defect count? The results are partly surprising. Only 4 of 9 programs show moderate correlation between a file's defects in previous and in current releases in more than half of the analysed releases. In contrast to our expectations, the oldest files represent the most fault-prone files. Additionally, late changes influence a file's defect count only partly.",2008,0, 4025,A Bayesian approach for software quality prediction,"Many statistical algorithms have been proposed for software quality prediction of fault-prone and non fault-prone program modules. 
The main goal of these algorithms is the improvement of software development processes. In this paper, we introduce a new software prediction algorithm. Our approach is purely Bayesian and is based on finite Dirichlet mixture models. The implementation of the Bayesian approach is done through the use of the Gibbs sampler. Experimental results are presented using simulated data, and a real application for software modules classification is also included.",2008,0, 4026,An intelligent FMEA system implemented with a hierarchy of back-propagation neural networks,"This paper has used a series of back-propagation neural networks (BPNs) to form a hierarchical framework adequate for the implementation of an intelligent FMEA (failure modes and effects analysis) system. Its aim is to apply this novel system as a tool to assist the reliability design required for preventing failures that occur in the operating periods of a system. The hierarchical structure upgrades the classical statistical off-line FMEA performance. From the simulated experiments of the proposed BPN-based FMEA system (N-FMEA), it has been found that the accuracy of the failure modes classification and the reliability calculation are knowledgeable and potential for performing pragmatic preventive maintenance activities. As a result, this paper conducts an effective FMEA process and contributes to helping FMEA working teams to reduce their workload, shorten design time and ensure system operating success.",2008,0, 4027,Research on rotary dump health monitoring expert system based on causality diagram theory,"Causality diagram theory is a kind of uncertainty reasoning theory based on the belief network. It expresses the knowledge and causality relationship by diagrammatic form and direct causality intensity. Furthermore, it resolves the shortcomings of the belief network, and realizes a hybrid model which can process discrete and continuous variations. The theory of causality diagram model and the steps of causality diagram reasoning methodology are studied in this paper, and a model of rotary dump health monitoring expert system is proposed. In addition, this paper establishes the causality diagram of rotary dump and converts it to the causality tree. According to the causality tree of rotary dump, the causality diagram reasoning methodology composed of four steps is described. Finally, an application of rotary dump health monitoring expert system is shown, and the system performance analysis is discussed.",2008,0, 4028,Design of a DSP-based CMOS imaging system for embedded computer vision,"With the advance of image processing techniques and the fast reduction of image sensor costs, embedded computer vision is becoming a reality. This paper presents the design and implementation of an integrated CMOS imaging system based on DM642. The system can acquire VGA-resolution images at 100 frames/s. The on-board 10 Mbps Ethernet connection and the UART as well as digital I/O ports provide convenience for direct interface to other intelligent devices. The on-board software provides image acquisition functions, network-based control, network-based data transfer, JPEG compression, image pre-processing functions, and an edge detection function. Automatic exposure control and automatic white balance are realized in the system. 
Experiments show that the CMOS imaging system has a good performance in terms of imaging quality, with much potential for being used as an intelligent vision system.",2008,0, 4029,Testing content addressable memories using instructions and march-like algorithms,"CAM is widely used in microprocessors and SOC TLB modules. It gives a great advantage for software development, and TLB operations become a bottleneck of microprocessor performance. The test cost of a normal BIST approach for the CAM cannot be ignored. The paper analyses the fault models of CAM and proposes an instruction-suitable march-like algorithm. The algorithm requires 14N+2L operations, where N is the number of words of the CAM and L is the width of a word. The algorithm covers 100% of the targeted faults. Instruction-level test using the algorithm incurs no test cost in area or performance. Moreover, the algorithm can be used in BIST approaches with less performance loss for microprocessors. The paper instantiates the algorithm in a MIPS-compatible microprocessor with good results.",2008,0, 4030,Predicting the SEU error rate through fault injection for a complex microprocessor,"This paper deals with the prediction of SEU error rate for an application running on a complex processor. Both radiation ground testing and fault injection were performed while the selected processor, a Power PC 7448, executed software issued from a real space application. The predicted error rate shows that generally used strategies, based on static cross-section, significantly overestimate the application error rate.",2008,0, 4031,Event Driven RFID Based Exhaust Gas Detection Services Oriented System Research,"The vehicle exhaust gas emission is directly related to the quality of air. This paper describes the research and development of an event driven RFID based exhaust gas detection system applying the service oriented architecture for the purpose of environmental protection. An edge engine supporting the EPCglobal ALE specification, a complex event processing engine dealing with the complex streaming RFID events and a service bus, which are involved in the architecture of the system, are discussed both separately and in a whole picture. The edge engine is composed of four modules, which are the kernel module, device management module, events filter module, and configuration module. It provides features to encapsulate the applications from device interfaces; to process the raw observations captured by the readers and sensors; and to provide an application-level interface for managing readers and querying RFID observations. The event processing engine consists of a network of event processing agents running in parallel that interact using a dedicated event processing infrastructure, which provides an abstract communication mechanism and allows dynamic reconfiguration of the communication topology between agents at run-time. Web Services and asynchronous APIs are pivotal to the system implementation for flexible component reuse, dynamic configuration and optimized performance. The service bus, through its service integration and management capabilities, is the backbone of the system. Lastly, the system is proven to be effective with high performance in the benchmark.",2008,0, 4032,"Reliable system design: Models, metrics and design techniques","Design of reliable systems meeting stringent quality, reliability, and availability requirements is becoming increasingly difficult in advanced technologies. 
The current design paradigm, which assumes that no gate or interconnect will ever operate incorrectly within the lifetime of a product, must change to cope with this situation. Future systems must be designed with built-in mechanisms for failure tolerance, prediction, detection and recovery during normal system operation. This tutorial will focus on models and metrics for designing reliable systems, algorithms and tools for modeling and evaluating such systems, will discuss a broad spectrum of techniques for building such systems with support for concurrent error detection, failure prediction, error correction, recovery, and self-repair. Complex interplay between power, performance and reliability requirements in future systems, and associated constraints will also be discussed.",2008,0, 4033,Graphene nanoribbon FETs: Technology exploration and CAD,"Graphene nanoribbon FETs (GNRFETs) have emerged as a promising candidate for nanoelectronics applications. This paper summarizes (i) current understanding and prospects for GNRFETs as ultimately scaled, ideal ballistic transistors, (ii) physics-based modeling of GNRFETs to support circuit design and CAD, and (iii) variability and defects in GNRs and their impact on GNRFET circuit performance and reliability.",2008,0, 4034,Heterogeneous Architecture-Based Software Reliability Estimation: Case Study,"With growing system complication, multiple structures are constructed in software, instead of single architecture style. As a result, the traditional architecture-based models that have considered only homogeneous software behaviors are not suitable to the complex system. In this paper a heterogeneous architecture-based model is presented. In order to verify the practicability, we apply the model to estimate the system reliability of the astronautics payload communication and monitoring system (APCMS). Multiple structures are constructed in the system, i.e., sequence, parallel and fault-tolerance. The study results demonstrate that the model is accurate when compared to the actual reliability. Theoretical research must be applied on actual systems to understand their applicability and accuracy.",2008,0, 4035,Human-Intention Driven Self Adaptive Software Evolvability in Distributed Service Environments,"Evolvability is essential to adapting to the dynamic and changing requirements in response to the feedback from context awareness systems. However, most of current context models have limited capability in exploring human intentions that often drive system evolution. To support service requirements analysis of real-world applications in distributed service environments, this paper focuses on human-intention driven software evolvability. In our approach, requirements analysis via an evolution cycle provides the means of speculating requirement changes, predicting possible new generations of system behaviors, and assessing the corresponding quality impacts. Furthermore, we also discuss evolvability metrics by observing intentions from user contexts.",2008,0, 4036,Assessing the Quality of Software Requirements Specifications,"Software requirements specifications (SRS) are hard to compare due to the uniqueness of the projects they were created in. In practice this means that it is not possible to objectively determine if a projects SRS fails to reach a certain quality threshold. Therefore, a commonly agreed-on quality model is needed. 
Additionally, a large set of empirical data is needed to establish a correlation between project success and quality levels. As there is no such quality model, we had to define our own based on the goal-question-metric (GQM) method. Based on this we analyzed more than 40 software projects (student projects in undergraduate software engineering classes), in order to contribute to the empirical part. This paper contributes in three areas: Firstly, we outline our GQM plan and our set of metrics. They were derived from widespread literature, and hence could lead to a discussion of how to measure requirements quality. Practitioners and researchers can profit from our experience, when measuring the quality of their requirements. Secondly, we present our findings. We hope that others find these valuable when comparing them to their own results. Finally,we show that the results of our quality assessment correlate to project success. Thus, we give an empirical indication for the correlation of requirements engineering and project success.",2008,0, 4037,Towards supporting evolution of service-oriented architectures through quality impact prediction,"The difficulty in evolving service-oriented architectures with extra-functional requirements seriously hinders the spread of this paradigm in critical application domains. This work tries to offset this disadvantage by introducing a design-time quality impact prediction and trade-off analysis method, which allows software engineers to predict the extra-functional consequences of alternative design decisions and select the optimal architecture without costly prototyping.",2008,0, 4038,Architecting for evolvability by means of traceability and features,"The frequent changes during the development and usage of large software systems often lead to a loss of architectural quality which hampers the implementation of further changes and thus the systemspsila evolution. To maintain the evolvability of such software systems, their architecture has to fulfil particular quality criteria. Available metrics and rigour approaches do not provide sufficient means to evaluate architectures regarding these criteria, and reviews require a high effort. This paper presents an approach for an evaluation of architectural models during design decisions, for early feedback and as part of architectural assessments. As the quality criteria for evolvability, model relations in terms of traceability links between feature model, design and implementation are evaluated. Indicators are introduced to assess these model relations, similar to metrics, but accompanied by problem resolution actions. The indicators are defined formally to enable a tool-based evaluation. The approach has been developed within a large software project for an IT infrastructure.",2008,0, 4039,Aspect Oriented Modeling of Component Architectures Using AADL,"Dependable embedded software system design is fastidious because designers have to understand and handle multiple, interdependent, pervasive dependability concerns such as fault tolerance, timeliness, performance, security. Because these concerns tend to crosscut application architecture, understanding and changing their descriptions can be difficult. Separating theses concerns at architectural level allow the designers to locate them, to understand them and thus to preserve the required properties when making the change in order to keep the architecture consistent. 
That separation of concerns leads to better understanding, reuse, analysis and evolution of these concerns during design. The Architecture Analysis and Design Language (AADL) is a standard architecture description language in use by a number of organizations around the world to design, analyze embedded software architectures and generate application code. In this paper we explain how aspect oriented modeling (AOM) techniques and AADL can be used to model dependability aspects of component architecture separately from other aspects. The AOM architectural model used to illustrate the approach in this paper consists of a component primary view describing the base architecture and a component template aspect model describing a fault tolerance concern that provides error detection and recovery services.",2008,0, 4040,A General QoS Error Detection and Diagnosis Framework for Accountable SOA,"Accountability is a composite measure for different but related quality aspects. To be able to ensure accountability in practice, it is required to define specific quality attributes of accountability, and metrics for each quality attribute. In this paper, we propose a quality detection and diagnosis framework for the service accountability. We first identify types of quality attributes which are essential to manage QoS in accountability framework. We then present a detection and diagnosis model for problematic situations in services system. In this model, we design situation link representing dependencies among quality attributes, and provide information to detect and diagnose problems and their root causes. Based on the model, we propose an integrated model-based and case-based diagnosis method using the situation link.",2008,0, 4041,A Novel Protocol in Intermittent Connected Mobile Networks,"Intermittent connected mobile sensor networks (ICMSN) have emerged in recent studies, e.g. underwater sensor networks, wildlife tracking and human-oriented flu virus monitoring. In these networks, there are no end to end connections from sensor nodes to sink nodes due to low node density and node mobility. Therefore, conventional routing protocols in WSN are not practical for ICMSN since packets will be dropped when no complete route to the sink is available. In this paper, we propose an efficient routing protocol for ICMSN, which is composed of buffer management and message forwarding. Buffer management consists of two related components: queuing mechanism and purging mechanism. The former determines priority of messages to transmit or discard based on the importance factor, which indicates the QoS requirement of the messages. The latter utilizes a death vector generated by the sink to purge useless messages that have been delivered from the network. Message forwarding makes decision on which sensors are qualified for next hop based on node delivery probability, which synthesizes the meeting predictability to the sink, current buffer state and residual energy and increases the likelihood that the sensor can deliver the message to the sink. Our experimental results show that the proposed routing protocol not only supports the QoS requirement of different content but also achieves the good performance tradeoff between delivery ratio, delay and resource consumption.",2008,0, 4042,A Redundancy Mechanism under Single Chip Multiprocessor Architecture,"Chip multiprocessor (CMP) will become more popular in embedded systems. 
As clock frequency increases, dependency among core blocks rises, which therefore tends to reduce system performance and stability. Redundancy is a way to improve the dependability of the system. But the traditional methods cannot make full use of the computing capacity of CMP. In this paper, we propose a new redundancy mechanism. The whole system is reorganized using lightweight virtualization. A group of processors and a block of memory are treated as a single logical computing unit, each of which corresponds to a processor separately. In this way, only a distributor and an arbiter need to be added into the operating system, with minimal effort. The distributor dispatches one task to all the logical computing units. After execution, the arbiter gathers all results to analyze whether they are dependable. Because the distributor and the arbiter are very simple, the total overhead of the system is very low. Experimental results show the proposed method is practical with average overhead less than 8%.",2008,0, 4043,Gumshoe: Diagnosing Performance Problems in Replicated File-Systems,"Replicated file-systems can experience degraded performance that might not be adequately handled by the underlying fault-tolerant protocols. We describe the design and implementation of Gumshoe, a system that aims to diagnose performance problems in replicated file-systems. Gumshoe periodically gathers OS and protocol metrics and then analyzes these metrics to automatically localize the performance problem to the culprit node(s). We describe our results and experiences with problem diagnosis in two replicated file-systems (replicated-CoreFS and BFS) using two file-system benchmarks (Postmark and IOzone).",2008,0, 4044,The model calibration protocol for parameter estimation of activated sludge model,"In this paper, a standardized model calibration method for the optimal parameter estimation of the ASM is proposed. We developed a calibration protocol for the ASM 1 model based on parameter selection, design of experiments and parameter optimization using multiple response surface methodology. In this research, two software packages, WEST® and MINITAP, are used to model a waste water treatment process and optimize the model parameters and design of experiments. First, the most sensitive parameter set is determined by a new sensitivity analysis considering the effluent quality index. Second, a multiple response surface methodology (MRS) is conducted for optimizing parameter estimation of the ASM1 model. Because the proposed method is a multi-response model, which is a suitable method to estimate the model parameters in the ASM, it can simultaneously optimize the key parameters in terms of input-output model performance. The result of the model calibration protocol shows that it can select the key parameters of the ASM model and minimize the model error of the ASM model by a systematic sequence, which can save much time in modeling a process, improve the modeling performance of the ASM model and the prediction result of ASM. Since the proposed method is a kind of calibration methodology, that is, a protocol, it can be easily applied to other ASM models, i.e., ASM2, 2d, 3 and also a full-scale plant.",2008,0, 4045,A study on position detection of Rolling Stock,"We developed a measurement system for on-line test and performance evaluation of rolling stock. The measurement system is composed of a software part and a hardware part. Perfect interface between multi-users is possible.
Currently, position data is obtained from the pulse signal of the wheel, so accurate position measurement is limited by slip and slide of the vehicle. To make up for these weak points, a position detection system using GPS has been developed. By using the system, the rolling stock is capable of accurate fault position detection.",2008,0, 4046,Development of prototype Reference Station and Integrity Monitors (RSIM) for maritime application,IMO (International Maritime Organization) and IALA (International Association of Marine Aids to Navigation and Lighthouse Authorities) recommend establishing maritime DGPS (Differential GPS) for the safety of ships located in coastal and narrow-channel areas using GNSS (Global Navigation Satellite System). This paper analyses the RSIM (Reference Stations and Integrity Monitors) prototype fault detection system and verifies the RSIM prototype performance.,2008,0, 4047,Design of intelligent testing device for airplane navigation radar,"Modern airborne radar systems are very complex electronic equipment systems; high reliability is demanded, and fine automatic detection functions are needed as a guarantee. In this dissertation, we have studied the detection method of a new kind of airborne radar system. With embedded machine control as its core and adopting a unit building-block type of construction, it examines the faults of the radar one by one by exciting the fault models' inputs and checking the responses, in order to localize the faults. Programmed in Visual Basic 6.0, the software can be extended and upgraded, and it provides the users with an intelligent and automatic testing environment and a friendly interface. Under the guidance of the testing interface, testers can complete the fault localization of the radar circuit automatically. Testing shows that this intelligent, integrated detection system has well-rounded functions, advanced techniques and outstanding capabilities. It can perform an all-around performance test of the airplane navigation radar. So it has important significance for ensuring flight safety and increasing combat effectiveness.",2008,0, 4048,Evaluation of an efficient control-oriented coverage metric,"Dynamic verification, the use of simulation to determine design correctness, is widely used due to its tractability for large designs. A serious limitation of dynamic techniques is the difficulty in determining whether or not a test sequence is sufficient to detect all likely design errors. Coverage metrics are used to address this problem by providing a set of goals to be achieved during the simulation process; if all coverage goals are satisfied then the test sequence is assumed to be complete. Many coverage metrics have been proposed but no effort has been made to identify a correlation between existing metrics and design quality. In this paper we present a technique to evaluate a coverage metric by examining its ability to ensure the detection of real design errors. We apply our evaluation technique to our control-oriented coverage metric to verify its ability to reveal design errors.",2008,0, 4049,Bottom up approach to enhance top level SoC verification,SoCs today rely heavily on behavioral models of analog circuits for Top Level Verification. The minimum modeling requirement is to model the functional behavior of the circuit. A lot of ongoing work is also focused on modeling analog circuits to predict the system performance of the SoC.
This paper presents a methodology to enhance the quality of SoC verification by using a bottom up approach to verify the equivalence of building blocks and then work at higher levels to increase coverage. It is shown that this methodology can be used to verify functional and performance equivalence of behavioral models.,2008,0, 4050,Analyzing Performance of Web-Based Metrics for Evaluating Reliability and Maintainability of Hypermedia Applications,"This paper has been designed to identify the Web metrics for evaluating the reliability and maintainability of hypermedia applications. In the age of information and communication technology (ICT), the Web and the Internet have brought significant changes in information technology (IT) and their related scenarios. Therefore in this paper an attempt has been made to trace out the Web-based measurements towards the creation of efficient Web centric applications. The dramatic increase in Web site development and their relative usage has led to the need of Web-based metrics. These metrics will accurately assess the efforts in the Web-based applications. Here we promote the simple, but elegant approaches to estimate the efforts needed for designing Web-based applications with the help of user behavior model graph (UBMG), Web page replacement algorithms, and RS Web Application Effort Assessment (RSWAEA) method. Effort assessment of hyperdocuments is crucial for Web-based systems, where outages can result in loss of revenue and dissatisfied customers. Here we advocate a simple, but elegant approach for effort estimation for Web applications from an empirical point of view. The proposed methods and models have been designed after carrying out an empirical study with the students of an advanced university class and Web designers that used various client-server based Web technologies. Our first aim was to compare the relative importance of each Web-based metric and method. Second, we also evaluated the quality of the designs obtained by constructing the User Behavior Model Graphs (UBMGs) to capture the reliability of Web-based applications. Thirdly, we use Web page replacement algorithms for increasing the Web site usability index, maintainability, reliability, and ranking. The results obtained from the above Web-based metrics can help us to analytically identify the effort assessment and failure points in Web-based systems and makes the evaluation of the reliability of these systems simple.",2008,0, 4051,Active Throughput Estimation Using RTT of Differing ICMP Packet Sizes,"Network measurements always play a vital role for good network management purposes and are essential for quality of service (QoS) requirements. Packet delay derivations are critical measurement parameters in diagnosing simple network performance measures. The traditional ping tool is popular for target host connectivity testing and average delay measurements. However, ping is a poor indicator of round trip time (RTT) measurements at the network layer. In this paper, an active round trip time (RTT) packet delay measurement using Internet control message protocol (ICMP) data packets of varying sizes is proposed. It was observed that in the proposed RTT measurement technique, peak throughput values were obtained compared to the ping measurements.
The proposed approach eliminates the need for special purpose cooperation and/or software applications at the destination in the measured network path when using the ICMP Echo Request/ Reply technique.",2008,0, 4052,Rough Set Granularity in Mobile Web Pre-caching,"Mobile Web pre-caching (Web prefetching and caching) is an explication of performance enhancement and storage limitation of mobile devices. In this paper, we present the granularity of rough sets (RS) and RS based inductive learning in reducing the size of rules to be stored in mobile devices. The conditional attributes such as 'timestamp', 'size document' and 'object retrieval time' are presented and the provided decision is granted. Decision rules are obtained using RS based inductive learning to grant the ultimate judgment either to cache or not to cache the conditional attributes and objects. However, in mobile clients, these rules need to be specified so as the specific sessions can be kept in their mobile storage as well as the proxy caching administrations. Consequently, these rules will notify the objects in Web application request and priority level to the clients accordingly. The results represent that the granularity of RS in mobile Web pre-caching is vital to improve the latest Web caching technology by providing virtual client and administrator feedback; hence making Web pre-caching technologies practical, efficient and powerful.",2008,0, 4053,Efficient implementation of biomedical hardware using open source descriptions and behavioral synthesis,"Medical diagnostics are changing rapidly, aided by a new generation of portable equipment and handheld devices that can be carried to the patientpsilas bedside. Processing solutions for such equipment must offer high performance, low power consumption and also, minimize board space and component counts. Such a multi-objective optimization can be performed with behavioral hardware synthesis, offering design quality with significantly reduced design time. In this paper, open source code of a QRS detection algorithm is implemented in hardware using an advanced behavioral synthesis framework. Experimental results show that with this approach performance improvements are introduced with a fraction of design time, reducing dramatically time-to-market for modern diagnostic devices.",2008,0, 4054,Improving the performance of speech recognition systems using fault-tolerant techniques,"In this paper, using of fault tolerant techniques are studied and experimented in speech recognition systems to make these systems robust to noise. Recognizer redundancy is implemented to utilize the strengths of several recognition methods that each one has acceptable performance in a specific condition. Duplication-with-comparison and NMR methods are experimented with majority and plurality voting on a telephony Persian speech-enabled IVR system. Results of evaluations present two promising outcomes, first, it improves the performance considerably; second, it enables us to detect the outputs with low confidence.",2008,0, 4055,A Theoretical Model of the Effects of Losses and Delays on the Performance of SIP,The session initiation protocol (SIP) is widely used for VoIP communication. Losses caused by network or server overload would cause retransmissions and delays in the session establishment and would hence reduce the perceived service quality of the users. 
In order to be able to take counter measures network and service planers require detailed models that would allow them to predict such effects in advance. This paper presents a theoretical model of SIP that can be used for determining various parameters such as the delay and the number of messages required for establishing a SIP session when taking losses and delays into account. The model is restricted to the case when SIP is transported over UDP. The theoretical results are then verified using measurements.,2008,0, 4056,"The IV2 Multimodal Biometric Database (Including Iris, 2D, 3D, Stereoscopic, and Talking Face Data), and the IV2-2007 Evaluation Campaign","Face recognition finds its place in a large number of applications. They occur in different contexts related to security, entertainment or Internet applications. Reliable face recognition is still a great challenge to computer vision and pattern recognition researchers, and new algorithms need to be evaluated on relevant databases. The publicly available IV2 database allows monomodal and multimodal experiments using face data. Known variabilities, that are critical for the performance of the biometric systems (such as pose, expression, illumination and quality) are present. The face and subface data that are acquired in this database are: 2D audio-video talking-face sequences, 2D stereoscopic data acquired with two pairs of synchronized cameras, 3D facial data acquired with a laser scanner, and iris images acquired with a portable infrared camera. The IV2 database is designed for monomodal and multimodal experiments. The quality of the acquired data is of great importance. Therefore as a first step, and in order to better evaluate the quality of the data, a first internal evaluation was conducted. Only a small amount of the total acquired data was used for this evaluation: 2D still images, 3D scans and iris images. First results show the interest of this database. In parallel to the research algorithms, open-source reference systems were also run for baseline comparisons.",2008,0, 4057,Visibility Classification of Rocks in Piles,"Size measurement of rocks is usually performed by manual sampling and sieving techniques. Automatic on-line analysis of rock size based on image analysis techniques would allow non-invasive, frequent and consistent measurement. In practical measurement systems based on image analysis techniques, the surface of rock piles will be sampled and therefore contain overlapping rock fragments. It is critical to identify partially visible rock fragments for accurate size measurements. In this research, statistical classification methods are used to discriminate rocks on the surface of a pile between entirely visible and partially visible rocks. The feature visibility ratio is combined with commonly used 2D shape features to evaluate whether 2D shape features can improve classification accuracies to minimize overlapped particle error.",2008,0, 4058,Software Reliability - 40 Years of Avoiding the Question,"Summary form only given. During the past years, there has been an explosion of technical complexity of both hardware and software. Hardware, characterized by physics and empirical data, provides the reliability and systems engineers with a capability to estimate the expected reliability. Software has however managed to avoid this type of model development in part because the factors affecting reliability are not measurable by physical data. 
Software reliability is characterized by data gathered during systems integration and test. This data has attributes and parameters such as defect density, capability of programming of the software engineering team, the experience of the engineering team, the understanding of the application by the designers, the language used and more. Software reliability is more than the processes advocated by CMMI (Capability Maturity Model® Integration) and is susceptible to esoteric and infinitely harder parameters to measure. The author discusses some of the elements that affect software reliability and compares some of the differences when trying to estimate reliability of today's systems.",2008,0, 4059,Predicting Defect Content and Quality Assurance Effectiveness by Combining Expert Judgment and Defect Data - A Case Study,"Planning quality assurance (QA) activities in a systematic way and controlling their execution are challenging tasks for companies that develop software or software-intensive systems. Both require estimation capabilities regarding the effectiveness of the applied QA techniques and the defect content of the checked artifacts. Existing approaches for these purposes need extensive measurement data from historical projects. Due to the fact that many companies do not collect enough data for applying these approaches (especially for the early project lifecycle), they typically base their QA planning and controlling solely on expert opinion. This article presents a hybrid method that combines commonly available measurement data and context-specific expert knowledge. To evaluate the method's applicability and usefulness, we conducted a case study in the context of independent verification and validation activities for critical software in the space domain. A hybrid defect content and effectiveness model was developed for the software requirements analysis phase and evaluated with available legacy data. One major result is that the hybrid model provides improved estimation accuracy when compared to applicable models based solely on data. The mean magnitude of relative error (MMRE) determined by cross-validation is 29.6% compared to 76.5% obtained by the most accurate data-based model.",2008,0, 4060,Trace Normalization,"Identifying truly distinct traces is crucial for the performance of many dynamic analysis activities. For example, given a set of traces associated with a program failure, identifying a subset of unique traces can reduce the debugging effort by producing a smaller set of candidate fault locations. The process of identifying unique traces, however, is subject to the presence of irrelevant variations in the sequence of trace events, which can make a trace appear unique when it is not. In this paper we present an approach to reduce inconsequential and potentially detrimental trace variations. The approach decomposes traces into segments on which irrelevant variations caused by event ordering or repetition can be identified, and then used to normalize the traces in the pool. The approach is investigated on two well-known client dynamic analyses by replicating the conditions under which they were originally assessed, revealing that the clients can deliver more precise results with the normalized traces.",2008,0, 4061,Cost Curve Evaluation of Fault Prediction Models,"Prediction of fault prone software components is one of the most researched problems in software engineering.
Many statistical techniques have been proposed but there is no consensus on the methodology to select the ""best model"" for the specific project. In this paper, we introduce and discuss the merits of cost curve analysis of fault prediction models. Cost curves allow software quality engineers to introduce project-specific cost of module misclassification into model evaluation. Classifying a software module as fault-prone implies the application of some verification activities, thus adding to the development cost. Misclassifying a module as fault free carries the risk of system failure, also associated with cost implications. Through the analysis of sixteen projects from public repositories, we observe that software quality does not necessarily benefit from the prediction of fault prone components. The inclusion of misclassification cost in model evaluation may indicate that even the ""best"" models achieve performance no better than trivial classification. Our results support a recommendation to adopt cost curves as one of the standard methods for software quality model performance evaluation.",2008,0, 4062,Using Statistical Models to Predict Software Regressions,"Incorrect changes made to the stable parts of a software system can cause failures - software regressions. Early detection of faulty code changes can be beneficial for the quality of a software system when these errors can be fixed before the system is released. In this paper, a statistical model for predicting software regressions is proposed. The model predicts risk of regression for a code change by using software metrics: type and size of the change, number of affected components, dependency metrics, developer's experience and code metrics of the affected components. Prediction results could be used to prioritize testing of changes: the higher is the risk of regression for the change, the more thorough testing it should receive.",2008,0, 4063,An Effective Software Reliability Analysis Framework for Weapon System Development in Defense Domain,"Software reliability has been regarded as one of the most important quality attributes for software intensive systems, especially in defense domain. As most of a weapon system's complicated functionalities and controls are implemented by software which is embedded in hardware systems, it became more critical to assure high reliability for software itself. However, many software development organizations in Korea defense domain have had problems in performing reliability engineered processes for developing mission-critical and/or safety-critical weapon systems. In this paper, we propose an effective framework with which software organizations can identify and select metrics associated with software reliability, analyze the collected data, appraise software reliability, and develop software reliability prediction/estimation model based on the result of data analyses.",2008,0, 4064,The Effect of the Number of Defects on Estimates Produced by Capture-Recapture Models,Project managers use inspection data as input to capture-recapture (CR) models to estimate the total number of faults present in a software artifact. The CR models use the number of faults found during an inspection and the overlap of faults among inspectors to calculate the estimate. A common belief is that CR models underestimate the number of faults but their performance can be improved with more input data.
This paper investigates the minimum number of faults that has to be present in an artifact before the CR method can be used. The result shows that the minimum number of faults varies from ten faults to twenty-three faults for different CR estimators.,2008,0, 4065,Software Reliability Modeling with Logistic Test Coverage Function,"Test coverage is a good indicator for testing completeness and effectiveness. This paper utilizes the logistic function to describe the test coverage growth behavior. Based on the logistic test coverage function, a model that relates test coverage to fault detection is presented and fitted to one actual data set. The experimental results show that, compared with three existing models, the evaluation performance of this new model is the best at least with the experimental data. Finally, the logistic test coverage function is applied to the NHPP software reliability modeling for further research.",2008,0, 4066,On Reliability Analysis of Open Source Software - FEDORA,"Reliability analyses of software systems often focus only on the number of faults reported against the software. Using a broader set of metrics, such as problem resolution times and field software usage levels, can provide a more comprehensive view of the product. Some of these metrics are more readily available for open source products. We analyzed a suite of FEDORA releases and obtained some interesting findings. For example, we show that traditional reliability models may be used to predict problem rates across releases. We also show that security related reports tend to have a different profile than non-security related problem reporting and repair.",2008,0, 4067,Requirement Model-Based Mutation Testing for Web Service,"Web services present a new promising software technology. However, some new issues and challenges in testing of them come out due to their characteristics of distribution, source code invisibility etc. This paper discusses the traditional mutation testing and then a new methodology of OWL-S requirement model-based web service mutation testing is brought forward. The traits of this methodology are as follows. Firstly, requirements are used effectively to reduce the magnitude of mutants. Secondly, mutants are generated by AOP technology conveniently and promptly. Thirdly, to reducing testing cost, using business logic implied in OWL-S requirement model as assistant of the process of killing the mutants. Fourthly, two sufficient measurement criteria are employed to evaluate the testing process. Finally, our empirical results have shown the usefulness of this testing method.",2008,0, 4068,A Practical Monitoring Framework for ESB-Based Services,"Services in service-oriented computing (SOC) are often black-box since they are typically developed by 3rd party developers and deployed only with their interface specifications. Therefore, internal details of services and implementation details of service components are not readily available. Also, services in SOC are highly evolvable since new services can be registered into repositories, and existing services maybe modified for their logic and interfaces, or they may suddenly disappear. As the first and most essential step for service management, service monitoring is to acquire useful data and information about the services and to assess various quality of service (QoS). Being able to monitor services is a strong prerequisite to effective service management. 
In this paper, we present a service monitoring framework for ESB-based services, as a sub-system of our Open Service Management Framework (OSMaF). We firstly define the criteria for designing QoS monitoring framework, and present the architecture and key components of the framework. Then, we illustrate the key techniques used to efficiently monitor services and compute QoS metrics. Finally, an implementation of the framework is presented to show its applicability.",2008,0, 4069,Using static analysis to improve communications infrastructure,"Static analysis is a promising technique for improving the safety and reliability of software used in avionics infrastructure. Source code analyzers are effective at locating a significant class of defects that are not detected by compilers during standard builds and often go undetected during runtime testing as well. Related to bug finders are a number of other static code improvement tasks, including automated unit test generation, programmer and software metrics tracking, and coding standards enforcement. However, adoption of these tools for everyday avionics software developer has been low. This paper will discuss the major barriers to adoption of these important tools and provide advice regarding how they can be effectively promulgated across the enterprise. Case studies of popular open source applications will be provided for illustration.",2008,0, 4070,HyperMIP: Hypervisor Controlled Mobile IP for Virtual Machine Live Migration across Networks,"Live migration provides transparent load-balancing and fault-tolerant mechanism for applications. When a Virtual Machine migrates among hosts residing in two networks, the network attachment point of the Virtual Machine is also changed, thus the Virtual Machine will suffer from IP mobility problem after migration. This paper proposes an approach called Hypervisor controlled Mobile IP to support live migration of Virtual Machine across networks, which enables virtual machine live migration over distributed computing resources. Since Hypervisor is capable of predicting exact time and destination host of Virtual Machine migration, our approach not only can improve migration performance but also reduce the network restoration latency. Some comprehensive experiments have been conducted and the results show that the HyperMIP brings negligible overhead to network performance of Virtual Machines. The network restoration time of HyperMIP supported migration is about only 3 second. HyperMIP is a promising essential component to provide reliability and fault tolerant for network application running in Virtual Machine.",2008,0, 4071,Formal Support for Quantitative Analysis of Residual Risks in Safety-Critical Systems,"With the increasing complexity in software and electronics in safety-critical systems new challenges to lower the costs and decrease time-to-market, while preserving high assurance have emerged. During the safety assessment process, the goal is to minimize the risk and particular, the impact of probable faults on system level safety. Every potential fault must be identified and analysed in order to determine which faults that are most important to focus on. In this paper, we extend our earlier work on formal qualitative analysis with a quantitative analysis of fault tolerance. Our analysis is based on design models of the system under construction. 
It further builds on formal models of faults that have been extended with estimated occurrence probabilities, allowing the system-level failure probability to be analysed. This is done with the help of the probabilistic model checker PRISM. The extension provides an improvement in the costly process of certification in which all foreseen faults have to be evaluated with respect to their impact on safety and reliability. We demonstrate our approach using an application from the avionic industry: an Altitude Meter System.",2008,0, 4072,Detection and Diagnosis of Recurrent Faults in Software Systems by Invariant Analysis,"A correctly functioning enterprise-software system exhibits long-term, stable correlations between many of its monitoring metrics. Some of these correlations no longer hold when there is an error in the system, potentially enabling error detection and fault diagnosis. However, existing approaches are inefficient, requiring a large number of metrics to be monitored and ignoring the relative discriminative properties of different metric correlations. In enterprise-software systems, similar faults tend to reoccur. It is therefore possible to significantly improve existing correlation-analysis approaches by learning the effects of common recurrent faults on correlations. We present methods to determine the most significant correlations to track for efficient error detection, and the correlations that contribute the most to diagnosis accuracy. We apply machine learning to identify the relevant correlations, removing the need for manually configured correlation thresholds, as used in the prior approaches. We validate our work on a multi-tier enterprise-software system. We are able to detect and correctly diagnose 8 of 10 injected faults to within three possible causes, and to within two in 7 out of 8 cases. This compares favourably with the existing approaches whose diagnosis accuracy is 3 out of 10 to within 3 possible causes. We achieve a precision of at least 95%.",2008,0, 4073,Partheno-Genetic Algorithm for Test Instruction Generation,"Test case generation is the classic method in finding software defects, and test instruction generation is one of its typical applications in embedded chipset systems. In this paper, the optimized partheno-genetic algorithm (PGA) is proposed after a 0-1 integer programming model is set up for the instruction-set test case generation problem. Based on simulation, the proposed model and algorithm achieve convincing computational performance; in most cases (50%~70%), instruction-set test cases with better error-detecting ability obtained using this algorithm could save up to 3 seconds of execution time. Besides, it also avoids the problem of using complicated crossover and mutation operations that traditional genetic algorithms have.",2008,0, 4074,An Experience-Based Approach for Test Execution Effort Estimation,"Software testing is becoming more and more important as it is a widely used activity to ensure software quality. Testing is now an essential phase in software development life cycle. Test execution becomes an activity in the critical path of project development. In this case, a good test execution effort estimation approach can benefit both tester managers and software projects. This paper proposes an experience-based approach for test execution effort estimation. In the approach, we characterize a test suite as a 3-dimensional vector which combines test case number, test execution complexity and its tester together.
Based on the test suite execution vector model, we set up an experience database, and then a machine learning algorithm is applied to estimate efforts for given test suite vectors. We evaluate the approach through an empirical study on projects from a financial software company.",2008,0, 4075,Adaptive block-based approach to image stabilization,"The objective of image stabilization is to prevent image blurring caused by the relative motion between the camera and the scene during the image integration time. In this paper we propose a software approach to image stabilization based on capturing and fusing multiple short exposed image frames of the same scene. Due to their short exposure, the individual frames are noisy, but they are less corrupted by motion blur than it would be a single long exposed frame. The proposed fusion method is designed such that to compensate for the misalignment between the individual frames, and to prevent the blur caused by object motion in front of the camera during the multi-frame image acquisition. Various natural images acquired with camera phones have been used to evaluate the proposed image stabilization system. The results reveal the ability of the system to improve the image quality by simulating longer exposure times. In addition the system has the ability to reduce the effect of noise and outliers present in the individual short exposed frames.",2008,0, 4076,Determination of a failure probability prognosis based on PD - diagnostics in gis,"In complex high voltage components local insulation defects can cause partial discharges (PD). Especially in highly stressed gas insulated switchgear (GIS) these PD affected defects can lead to major blackouts. Today each PD activity on important insulation components causes an intervention of an expert, who has to identify and analyze the PD source and has to decide: Are the modules concerned to be switched off or can they stay in service? To reduce these cost and time intensive expert interventions, this contribution specifies a proposal which combines an automated PD defect identification procedure with a quantifiable diagnosis confidence. A risk assessment procedure is described, which is based on measurements of phase resolved PD pulse sequence data and a subsequent PD source identification. A defect specific risk is determined and then integrated within a failure probability software using the Farmer diagram. The risks of failure are classified into three levels. The uncertainty of the PD diagnosis is assessed by applying different PD sources and comparisons with other evaluation concepts as well as considering system theoretical investigations. It is shown that the PD defect specific risk is the key aspect of this approach which depends on the so called criticality range and the main PD impact aspects PD location, time dependency, defect property and (over) voltage dependency.",2008,0, 4077,Internet traffic management based on AMCC network processor,"The remarkable and continuous increase in link speed and application variety of Internet calls for new QoS provision technology that is both efficient and flexible. Being a kind of application-specific processor optimized for network computation, network processor (NP) combines the high performance of hardware and the flexibility of software, which exactly meets the demands of nowadays Internet traffic management and QoS provision. This paper originally presents an efficient traffic management mechanism based on AMCC network processor. 
The data-plane software architecture, the manipulation process, especially some key components such as accurate traffic classification, flexible access control and three-step scheduling are discussed in detail. We implement the traffic management mechanism based on NP3450 network processor and test it using Spirent AX/4000 broadband test system. The test results show that our proposal can efficiently provide QoS guarantee for various traffic categories on OC-48 links.",2008,0, 4078,Integrated tactile system for dynamic 3D contact force mapping,"We present the first integrated tactile system that is based on dynamic, spatially distributed, three-axial contact force data. Compared to general pressure mapping systems, our devices measure not only one, but all three components of contact forces (normal and shear) with up to 64 independent micromachined force sensing elements integrated on a single chip. The spatially distributed shear force sensing adds new dimensions and directions to tactile data analysis, including pre-slip detection, enhanced robotic grasping or high quality tactile texture classification. In this paper we briefly describe the components of the novel system: the sensor arrays, the data acquisition methods and the data analysis software. We also present two example applications that exploit the advantages of real time three-dimensional contact force mapping.",2008,0, 4079,Comparative study of cognitive complexity measures,"Complexity metrics are used to predict critical information about reliability and maintainability of software systems. The cognitive complexity measure, based on cognitive informatics, plays an important role in understanding the fundamental characteristics of software, and therefore directly affects the understandability and maintainability of software systems. In this paper, we compared available cognitive complexity measures and evaluated the cognitive weight complexity measure in terms of Weyuker's properties.",2008,0, 4080,Learning software engineering principles using open source software,"Traditional lectures espousing software engineering principles hardly engage students' attention due to the fact that students often view software engineering principles as mere academic concepts without a clear understanding of how they can be used in practice. Some of the issues that contribute to this perception include lack of experience in writing and understanding large programs, and lack of opportunities for inspecting and maintaining code written by others. To address these issues, we have worked on a project whose overarching goal is to teach students a subset of basic software engineering principles using source code exploration as the primary mechanism. We attempted to espouse the following software engineering principles and concepts: role of coding conventions and coding style, programming by intention to develop readable and maintainable code, assessing code quality using software metrics, refactoring, and reverse engineering to recover design elements. Student teams have examined the following open source Java code bases: ImageJ, Apache Derby, Apache Lucene, Hibernate, and JUnit.
We have used Eclipse IDE and relevant plug-ins in this project.",2008,0, 4081,Research and Assessment of the Reliability of a Fault Tolerant Model Using AADL,"In order to solve the problem of the assessment of the reliability of the fault tolerant system, the work in this paper is devoted to analyze a subsystem of ATC (air traffic control system), and use AADL (architecture analysis and design language) to build its model. After describing the various software and hardware error states and as well as error propagation from hardware to software, the work builds the AADL error model and convert it to GSPN (general stochastic Petri net). Using current Petri Net technology to assess the reliability of the fault tolerant system which is based on ATC as the background, this paper receives good result of the experiment.",2008,0, 4082,The Software Failure Prediction Based on Fractal,"Reliability is one of the most important qualities of software, and failure analysis is an important part of the research of software reliability. Fractals are mathematical or natural objects that are made of parts similar to the whole in certain ways. A fractal has a self-similar structure that occurs at different scales. In this paper the failure data of software are analyzed, the fractals are discovered in the data, and the method of software failure prediction based on fractals is proposed. Analyzing the empirical failure data (three data sets including two of Musa's) validates the validity of the model. It should be noticed that the analyses and research methods in this paper are differ from the conventional methods in the past, and a new idea for the research of the software failure mechanism is presented.",2008,0, 4083,Towards High-Level Parallel Programming Models for Multicore Systems,"Parallel programming represents the next turning point in how software engineers write software. Multicore processors can be found today in the heart of supercomputers, desktop computers and laptops. Consequently, applications will increasingly need to be parallelized to fully exploit multicore processors throughput gains now becoming available. Unfortunately, writing parallel code is more complex than writing serial code. This is where the threading building blocks (TBB) approach enters the parallel computing picture. TBB helps developers create multithreaded applications more easily by using high-level abstractions to hide much of the complexity of parallel programming, We study the programmability and performance of TBB by evaluating several practical applications. The results show very promising performance but parallel programming with TBB is still tedious and error-prone.",2008,0, 4084,The Implementation of Analysis System of Vehicle Impact Based on Dynamic Image Sequence,"This paper presents the way to realize the analysis system of vehicle impact based on dynamic image sequence, and discusses some key technologies: import the method of image recovering based on atmospheric physical model to reduce the effect on image quality made by atmosphere; present the amalgamation method of background subtraction and temporal difference to realize fast detection to moving object (vehicle); establish a complete camera model based on perspective projection and distortion correction, and fulfill the 3D spatial coordinate measurement based on 2-step method. 
The qualitative and quantitative analysis of the impact process can be achieved using this system, and the results confirm its efficiency.",2008,0, 4085,Probability-Based Binary Particle Swarm Optimization Algorithm and Its Application to WFGD Control,"Sulfur dioxide is an air pollutant and an acid rain precursor. Coal-fired power generating plants are major sources of sulfur dioxide, so the limestone-gypsum wet flue gas desulphurization (WFGD) technology has been widely used in thermal power plants in China nowadays to reduce the emission of sulfur dioxide and protect the environment. The absorber slurry pH value control is very important for the limestone-gypsum WFGD technique since it directly determines the desulphurization performance and the quality of the product. However, it is hard to achieve satisfactory adjustment performance with the traditional PID controller because of the complexity of the absorber slurry pH control. To tackle this problem, a novel probability-based binary particle swarm optimization (PBPSO) is proposed to tune the PID parameters. The simulation results show PBPSO is valid and outperforms the traditional binary PSO algorithm in terms of easy implementation and global optimal search ability. It also shows that the proposed PBPSO algorithm can find the optimal PID parameters and achieve the expected control performance with the PID controller by constructing a proper fitness function for PID tuning.",2008,0, 4086,An Effective Method for Support Vectors Selection in Kernel Space,"In Kernel Space, Support Vectors selection is an important issue for Support Vector Machines (SVMs). But, at present most sample selection methods have a common disadvantage that the candidate set for Support Vectors is the whole sample space, so, it may select interior samples or ""outliers"" that have little or even bad effect on the classifying quality. To tackle it, two improved methods based on effective candidate set are proposed in the paper. By using these two methods, the effective candidate set is firstly identified through ""removing center"" and eliminating ""outliers"", and then Support Vectors are selected in this effective candidate set. Experimental results show that the methods reserved effective candidate samples undoubtedly, and also improved the performance of the SVMs classifier in kernel space.",2008,0, 4087,An Effective Algorithm of Image Splicing Detection,"To implement image splicing detection, a blind, passive and effective splicing detection scheme was proposed in this paper. The model was based on moment features extracted from the multi-size block discrete cosine transform (MBDCT) and some image quality metrics (IQMs) extracted from the given test image, which are sensitive to spliced images. This model can measure statistical differences between original image and spliced image. Experimental results demonstrate that this new splicing detection algorithm is effective and reliable; indicating that the proposed approach has a broad application prospect.",2008,0, 4088,An Implementation Scheme of Power Saving Mechanism for IEEE 802.11e,"U-APSD is the power save procedure defined in IEEE802.11e to improve the QoS performance of multimedia applications. U-APSD is specially suited for bi-directional traffic streams. But in some scenario without synchronized uplink and downlink traffic, defects in U-APSD procedure will result in some problems.
In order to address this issue, this paper develops a low power implementation scheme of U-APSD by exploiting the downlink traffic rate estimation algorithm, and using clock gating technique as circuit-level approach to save dynamic power. Our experiments show that the scheme brings significant reduction in power consumption of NIC with little impact to the performance.",2008,0, 4089,Mining Change Patterns in AspectJ Software Evolution,"Understanding software change patterns during evolution is important for researchers concerned with alleviating change impacts. It can provide insight to understand the software evolution, predict future changes, and develop new refactoring algorithms. However, most of the current research focus on the procedural programs like C, or object-oriented programs like Java; seldom effort has been made for aspect-oriented software. In this paper, we propose an approach for mining change patterns in AspectJ software evolution. Our approach first decomposes the software changes into a set of atomic change representations, then employs the apriori data mining algorithm to generate the most frequent itemsets. The patterns we found reveal multiple properties of software changes, including their kind, frequency, and correlation with other changes. In our empirical evaluation on several non-trivial AspectJ benchmarks, we demonstrate that those change patterns can be used as measurement aid and fault predication for AspectJ software evolution analysis.",2008,0, 4090,An Architectural Quality Assessment for Domain-Specific Software,"With growing scale and complexity of software-intensive systems, the requirements to evaluating the effects of components on software quality will dramatically increase. Existing quality metrics for software architecture demand users to perform a tedious and high-cost process so that they cannot meet the needs for a rapid, simple and preliminary evaluation. In this paper, a description technique for software architecture is introduced. Through separating the control flow from component, the description technique makes it easy to analyze and evaluate software architecture. Based on the description technique, a quality metric is proposed, which is exploited to preliminarily evaluate the effects of component on the quality of software architecture. Finally, an experiment is given to show the simplicity, usability and effectiveness of the metric.",2008,0, 4091,Some Metrics for Accessing Quality of Product Line Architecture,"Product line architecture is the most important core asset of software product line. vADL, a product line architecture description languages, can be used for specifying product line architecture, and also provide enough information for measuring quality of product line architecture. In this paper, some new metrics are provided to assess similarity, variability, reusability, and complexity of product line architecture. The main feature of our approach is to assess the quality of product line architecture by analyzing its formal vADL specification, and therefore the process of metric computation can be automated completely.",2008,0, 4092,Defect Tracing System Based on Orthogonal Defect Classification,"With the increase of the software complexity, the defect measurement becomes a task of high priority. We give a new software defect analytical methodology based on orthogonal classification. This method has two folds. 
Then a set of orthogonal defect classification (ODC) reference model is given which includes activity, trigger, severity, origin, content and type of defect. In the end, it gives a support tool and concrete workflow of defect tracing. In contrast with the traditional method, this method not only has the advantages of popularity, haleness and low cost, but also improves the accuracy of identifying defects notably. Thus it offers a strong support for the prevention of defects in software products.",2008,0, 4093,Cost Model Based on Software-Process and Process Oriented Cost System,"The paper attempted to establish a software process oriented cost system depend upon cost model based on software process. It considers that it is impossible to realize the cost oriented process control and process improvement in view of software artifact is a complete unit. Firstly, cost model based on software process is introduced, different from the cost model based on product unit; the paper defines the reusable software process as the object of cost-metric. And then the paper provides ontology of software process and cost-metric in order to give a semantic background of cost-metric object and metric data. The research of this paper emphasizes on establishing cost management system oriented to software process, realizing the software process related cost measure, data record, and thereafter realizing cost control during the project execution and cost estimation in the phase of software process modeling. At last the paper expounds the method of integrating cost systems into PSEEs.",2008,0, 4094,Software Productivity Analysis with CSBSG Data Set,"The aim is to report on an initial exploratory statistical analysis of the relationship between software productivity and its influencing factors in China. We analyzed of a data set containing project measurement data which were collected from lots of IT companies in China by China Software Benchmarking Standards Group (CSBSG), established by China Software Process Improvement Network (CSPIN), China.Productivity with its influencing factors was investigated using a exploratory statistical analysis. Only 133 closed projects with high ratings could be used in this investigation. Furthermore, confidentiality and data quality, data filling ratio and sizing method were considered as major issues during our current work and also for future investigation. Overall, we observed that influencing factors - project size, project type and business area - do affect productivity with different level statistical significance. However, no clear evidence can be observed that factors, like maximum team size and programming languages affect productivity. Ongoing analysis of productivity and its influencing factors. Based on current productivity analysis results, comparative analysis between counties and estimation modeling will be investigated in future.",2008,0, 4095,Reliability Growth of Open Source Software Using Defect Analysis,We examine two active and popular open source products to observe whether or not open source software has a different defect arrival rate than software developed in-house. The evaluation used two common models of reliability growth models; concave and S-shaped and this analysis shows that open source has a different profile of defect arrival. 
Further investigation indicated that low level design instability is a possible explanation of the different defect growth profile.",2008,0, 4096,Fault Tree Based Prediction of Software Systems Safety,"In our modern world, software controls much of the hardware (equipment, electronics, and instruments) around us. Sometimes hardware failure can lead to a loss of human life. When software controls, operates, or interacts with such hardware, software safety becomes a vital concern. To assure the safety of a software-controlled system, prediction of software safety should be done at the beginning of the system's design. The paper focuses on safety prediction using the key node property of fault trees. This metric uses a parameter ""s"" related to the fault tree to predict the safety of software control systems. This metric allows designers to measure the safety of software systems early in the design process. An applied example is shown in the paper.",2008,0, 4097,Research on a New Fabric Defect Identification Method,"A new method of recognizing fabric defect features is proposed to alleviate the difficulty of extracting complicated fabric defect features. First, fast Fourier transform and self-adaptive power spectrum decomposition are performed. The sector-regional energy of the spectrum is extracted, and its mean and standard deviation are calculated as fabric features. Then, the spectral energy distribution is projected onto the Y direction, and local peaks are extracted as defect recognition features after projection. Fabric defect recognition using the proposed method shows high performance in on-line detection.",2008,0, 4098,An Improved Three-Step Search Algorithm with Zero Detection and Vector Filter for Motion Estimation,"In this paper, a new fast block matching algorithm is developed for motion estimation. The motion estimation algorithm uses an improved three-step block matching (ITSS) algorithm that produces better quality performance and less computational time compared with the three-step search (TSS) algorithm, and corrects the motion vector by using zero detection and a vector filter. From the experimental results, the proposed algorithm is superior to TSS in both quality performance and computational complexity.",2008,0, 4099,A Constrained QoS Multicast Routing Algorithm in Ad Hoc,This paper introduces a constrained QoS multicast routing algorithm in Ad hoc (CQMRA). The key idea of the proposed algorithm is to construct the new metric-entropy and select the stable path with the help of the entropy metric, to reduce the number of route reconstructions so as to provide QoS guarantee in Adhoc. The simulation results show that the proposed approach and parameters provide an accurate and efficient method of estimating and evaluating the route stability in Adhoc.,2008,0, 4100,QoS-Satisfied Pathover Scheme in FMIPv6 Environment,"Using the application of bulk data transfer, we investigate the performance of QoS-satisfied pathover for a transport layer mobility scheme such as mSCTP in an FMIPv6 environment. We find that the existing scheme has some defects in terms of pathover and throughput. Based on this, we make a potential change to mSCTP by adding a QoS-Measurement-Chunk, which is used to take into account information about the wireless link condition in the reselection/handover process of the FMIPv6 network, and we propose a scheme with an algorithm named congestion-oriented pathover (COPO) to detect congestion of the primary path using back-to-back RTTs and adapt to changes of the wireless link parameters.
A simulation-based demonstration is provided showing how the proposed scheme provides better pathover performance and throughput.",2008,0, 4101,Reliable and Efficient Adaptive Streaming Mechanism for Multi-user SVC VoD System over GPRS/EDGE Network,"In this paper we propose and then implement a novel H.264/AVC Scalable Video Coding multi-user video-on-demand system for hand-held devices over a GPRS/EDGE network. We design a reliable and efficient adaptive streaming mechanism for QoS support. The major contributions are threefold: firstly, an accumulation-based congestion control model is used for bandwidth estimation to ensure fairness among all the users. Secondly, for the purpose of handling time-varying bitrate, we propose a PLS algorithm which performs rate adaptation as well as optimizes the video quality in an R-D sense under the bandwidth constraint. Finally, to minimize adaptation delay, IDR frames are inserted online in the pre-encoded stream at the server by using transcoding. Furthermore, to improve the overall performance for system users, transmission of packets is optimally scheduled by exploiting the multi-user diversity. More importantly, a practical system was implemented and further tested over the existing GPRS/EDGE network deployed in China. The results demonstrate that the proposed mechanism achieves outstanding performance in terms of higher perceived video quality, smoother video playback, higher utilization of the wireless link and fairer share of resources, compared to the RTP Usage Model defined in 3GPP Release 7.",2008,0, 4102,Subjective Evaluation of Sound Quality for Mobile Spatial Digital Audio,"In the past decades, technical developments have enabled the delivery of sophisticated mobile spatial audio signals to consumers, over links that range very widely in quality, requiring decisions to be made about the tradeoffs between different aspects of audio quality. It is therefore important to determine the most important spatial quality attributes of reproduced sound fields and to find ways of predicting perceived sound quality on the basis of objective measurements. This paper first briefly reviews several subjective quality measures developed for streaming realtime audio over mobile network communications and in digital audio broadcasts. Then an experimental design on the application of the subjective listening test for mobile spatial audio is described. Finally, the conclusion is analysed and some future research directions are identified.",2008,0, 4103,Space-Time Correlation Based Fault Correction of Wireless Sensor Networks,"The nodes within a wireless sensor network (WSN) have low reliability, and are prone to acting abnormally and producing erroneous data. To address the problem, we propose a distributed fault correction algorithm, which makes use of the correlation among data of adjacent nodes and the correlation between current data and historical data on a single node. The algorithm can correct measurement errors every time nodes take measurements. Simulation results show that the algorithm can help correct many errors and only introduce very few errors, while still remaining effective for nodes near the event region border where many existing algorithms fail.",2008,0, 4104,Dynamic Service Replica on Distributed Data Mining Grid,"Data mining is an important technique for summarization and prediction in different industries. With the help of grid computing, the data storage capability and the efficiency of the data mining process can be greatly increased. 
Nowadays, most studies focus on data replication mechanisms and replica selection methods, but ignore the equally important service replica mechanism. In this paper, we propose a dynamic service replica process (DSRP) system based on service-oriented architecture (SOA). DSRP has a mechanism to create and delete service replicas automatically to achieve better load-balancing and performance. In the process, each activity in the data mining process is viewed as a Web service and placed on a different computer on the data mining grid. Users can define different data mining tasks by selecting proper services, and the replication mechanism is driven by the resulting usage. The Web services provide data extraction, data preprocessing, data mining algorithms and analysis.",2008,0, 4105,The Portable PCB Fault Detector Based on ARM and Magnetic Image,"Traditional fault detection techniques can hardly keep up with modern electronic technology. This paper introduces how to build a portable fault detector for military electronic equipment based on magnetic imaging, which can quickly detect electronic equipment in the field. The detector uses ARM and uC/OS-II as the development platform, with MiniGUI as the graphical interface.",2008,0, 4106,Damaged Mechanism Research of RS232 Interface under Electromagnetic Pulse,"The RS232 interface plays the role of transport and communication, and has become an important interface between the MCU of an embedded system and peripheral equipment. Because RS232 mainly works at the bottom of the communication protocol stack, it is important to protect the RS232 infrastructure. In the test, a double electromagnetic pulse is injected into the RS232 data transmission lines by a coupling clamp to simulate a differential-mode pulse voltage. Based on the test data, the damage mechanism of the RS232 interface is derived, a logistic model relating the injected voltage pulse to the probability of damage to the port interface is presented, the performance of the RS232 interface is evaluated, and the upper and lower bounds of the voltage pulse range at the port in the normal and damaged states are estimated.",2008,0, 4107,Design of a New Integrated Real Time Device to Monitor Power Harmonic Pollution,"A new integrated device is put forward to monitor harmonic pollution. This system takes an 8-bit microcontroller MPC860 chip, a PowerPC TM real time system controller produced by Motorola, as the nucleus, and is composed of a multi-channel synchronous data acquisition system in the Capacitor Voltage Transformer (CVT), which can suppress ferromagnetic resonance, satisfy long-range monitoring and show a good frequency response. The harmonic analysis algorithm adopts the Fast Fourier Transform (FFT). The measured quantities are analysed and compared with international standards. This new compact system resolves the problems of traditional models with inefficient performance and complicated programs. It can also communicate with the monitoring center through telephone line, serial port, wireless, etc. This new device is developed in order to identify the sources of harmonic generation, estimate the level of distortion and monitor electric power signal variations very efficiently. 
Simulation results show that the new device helps to facilitate the supervision of electric power signals, and many kinds of electric power quality disturbances and harmonic pollution are tested thoroughly.",2008,0, 4108,Data Mining from Simulation of Six Sigma in Manufacturing Company,"With the rapid development of information systems and the application of six sigma in manufacturing companies, six sigma faces the problem of how to analyze vast amounts of data effectively. Data mining from simulation outputs is performed in this paper; it focuses on techniques for extracting knowledge from simulation outputs for a production process and optimizing devices and labor with respect to a certain target. This paper first gives a brief definition of six sigma. Then we set up a simulation model for the production process and construct an optimization objective. Then we set up a data mining model based on WITNESS Miner. It then explains six sigma project modeling with WITNESS, incorporating the calculation of sigma ratings for processes and the export of key statistics. The WITNESS optimizer six sigma algorithm is explained and an example project illustrating the use of WITNESS Miner (data mining) is included. The mining results show that the model is able to find important information affecting the target, help managers diagnose the bottlenecks of the beer production process, and help managers make decisions rapidly under uncertainty.",2008,0, 4109,A Novel Dynamic Rate Based System for Matching and Retrieval in Images and Video Sequences,In this paper we describe a system for improved object matching and retrieval in real world video sequences. We propose a dynamic frame rate model for selecting the best frames from a video based on the frame quality. A novel blur metric based approach is used for frame selection. A method for calculating this no-reference blur parameter by selecting only the features contributing to the real blur and discarding unwanted ones is proposed. Precision and recall figures are provided for trademark matching in sports videos. Results show that the proposed technique is an efficient and simple way of improving object matching and retrieval in video sequences.,2008,0, 4110,A Novel Hybrid Approach of KPCA and SVM for Crop Quality Classification,"Quality evaluation and classification is very important for crop market price determination. Many methods have been applied in the field of quality classification, including principal component analysis (PCA) and artificial neural networks (ANN). The use of ANN has been shown to be a cost-effective technique. But their training has some drawbacks such as the small sample effect, the black box effect and proneness to overfitting. This paper proposes a novel hybrid approach of kernel principal component analysis (KPCA) with support vector machine (SVM) for improving the accuracy of quality classification. The tobacco quality data is evaluated in the experiment. Traditional PCA-SVM, SVM and ANN are investigated as the comparison basis. The experimental results show that the proposed approach can achieve better performance in crop quality classification.",2008,0, 4111,Image Blind Forensics Using Artificial Neural Network,"With the advent of digital technology, digital images have gradually taken the place of the original analog photograph, and the forgery of digital images has become increasingly easy and hard to discover. To implement blind detection of image splicing, this paper proposes a new splicing detection model. 
Image splicing detection can be treated as a two-class pattern recognition problem, which builds the model using moment features and some image quality metrics (IQMs) extracted from the given test image. An artificial neural network (ANN) is chosen as the classifier to train and test on the given images. Experimental results demonstrate that the proposed approach has a high accuracy rate and that the selected network works properly, proving that the ANN is effective and suitable for this model.",2008,0, 4112,Widest K-Shortest Paths Q-Routing: A New QoS Routing Algorithm in Telecommunication Networks,"Currently, various kinds of sources (such as voice, video or data) with diverse traffic characteristics and quality of service (QoS) requirements, which are multiplexed at very high rates, lead to significant traffic problems such as packet losses, transmission delays, delay variations, etc., caused mainly by congestion in the networks. The prediction of these problems in real time is quite difficult, making the effectiveness of ""traditional"" methodologies based on analytical models questionable. This article proposes and evaluates a QoS routing policy for packet communication networks with irregular topology and traffic, called widest K-shortest paths Q-routing. The technique used for the evaluation of reinforcement signals is Q-learning. Compared to standard Q-routing, the exploration of paths is limited to the K best loop-free paths in terms of hop count (number of routers in a path), leading to a substantial reduction of convergence time. In this work we are concerned with a routing proposal which improves the delay factor and is based on reinforcement learning. We use Q-learning as the reinforcement learning technique and introduce the K-shortest paths idea into the learning process. The proposed algorithm is applied to two different topologies. OPNET is used to evaluate the performance of the proposed algorithm. The algorithm evaluation is done for two traffic conditions, namely low load and high load.",2008,0, 4113,Adaptive Spectrum Selection for Cognitive Radio Networks,"While essentially all the frequency spectrum is allocated to different wireless applications, observations provide evidence that usage of the spectrum is actually quite limited. Fortunately, cognitive radio technology offers an attractive solution to the reuse of under-utilized licensed spectrum. Therefore, a proper channel selection mechanism and channel quality metric are essential to exploit the benefits of spectrum sharing. We propose a free time ratio (FTR) metric that captures channel quality. In addition, we also propose a distributed collaborative sensing scheme to ensure the robustness of the whole network. Simulation results show that the proposed scheme significantly outperforms the existing channel selection methods.",2008,0, 4114,Bearing Fault Diagnosis Based on Feature Weighted FCM Cluster Analysis,"A new method of fault diagnosis based on feature weighted FCM is presented. The feature-weight assigned to a feature indicates the importance of the feature. This paper shows that an appropriate assignment of feature-weights can improve the performance of fuzzy c-means clustering. Feature evaluation based on a class separability criterion is discussed in this paper. Experiments show that the algorithm is able to reliably recognize not only different fault categories but also fault severities. 
Therefore, it is a promising approach to fault diagnosis of rotating machinery.",2008,0, 4115,Harmonic Detection in Electric Power System Based on Wavelet Multi-resolution Analysis,"Because of the widespread use of nonlinear loads, the electric power system network is injected with much harmonic current, which does great harm to consumer equipment. In order to prevent harmonic current from affecting the safety of the system's operation, we should know how much distorted harmonic content is present and take corresponding measures to control or compensate for it. In this paper, based on a multi-resolution wavelet method, we use seven levels of reconstruction with the Daubechies wavelet (db24) to decompose the electric current signal into the fundamental wave and higher harmonics. By experimental analysis with the MATLAB simulation software, the separated fundamental wave error is less than 1%, which realizes effective harmonic tracking and gives accurate compensation by making use of the total distorted component.",2008,0, 4116,Color Reproduction Quality Metric on Printing Images Based on the S-CIELAB Model,"The just-perceived color difference in a pair of printing images, the reproduction and its original, has been confirmed, subjectively by a paired-comparison psychological experiment and objectively by the S-CIELAB color quality metric. For one color image, a total of 53 pairs of test images, simulating a number of variations in C, M, Y, and K ink amounts, were produced, and their corresponding color difference recognizing probabilities were determined by visual paired-comparison. Also, the image color difference Delta Es values, presented in the S-CIELAB model, for each pair of images were calculated and correlated with their color difference recognizing probability. The results showed that the just-perceived image color difference Delta Es, when a 0.9 color difference recognizing probability is considered as the just-perceived level, was about 1.4 Delta Eab units for the experiment image, this being the image color fidelity threshold parameter.",2008,0, 4117,A Novel Optimum Data Duplication Approach for Soft Error Detection,"Soft errors are a growing concern for computer reliability. To mitigate the effects of soft errors, a variety of software-based fault tolerance methodologies have been proposed because of their low cost. Data duplication techniques have the advantage of flexible and general implementation with a strong capacity for error detection. However, the trade-off between reliability, performance and memory overhead should be carefully considered before employing data duplication techniques. In this paper, we first introduce an analytical model, named PRASE (Program Reliability Analysis with Soft Errors), which is able to assess the impact of soft errors on the reliability of a program. Furthermore, the analytical result of PRASE points out a factor named data reliability weight, which measures the criticality of data for the overall reliability of the target program. Based on PRASE, we propose a novel data duplication approach, called ODD, which can provide the optimum error coverage under system performance constraints. 
To illustrate the effectiveness of our method, we perform several fault injection experiments and performance evaluations on a set of simple benchmark programs using the SimpleScalar tool set.",2008,0, 4118,Mining Individual Performance Indicators in Collaborative Development Using Software Repositories,"A better understanding of individual developers' performance has been shown to result in benefits such as improved project estimation accuracy and enhanced software quality assurance. However, new challenges of distinguishing the individual activities involved in software evolution arise when considering collaborative development environments. Since software repositories such as version control systems (VCS) and bug tracking systems (BTS) are available for most software projects and hold a detailed and rich record of the historical development information, this paper presents our experiences mining individual performance indicators in collaborative development environments by using these repositories. Our key idea is to identify the complexity metrics (in the code base) and field defects (from the bug tracking system) at the individual level by incorporating the historical data from the version control system. We also remotely measure and analyze these indicators mined from a libre project, jEdit, which involves around one hundred developers. The results show that these indicators are feasible and instructive in the understanding of individual performance.",2008,0, 4119,A Design Quality Model for Service-Oriented Architecture,"Service-Oriented Architecture (SOA) is emerging as an effective solution to deal with rapid changes in the business environment. To handle fast-paced changes, organizations need to be able to assess the quality of their products prior to implementation. However, literature and industry have yet to explore the techniques for evaluating the design quality of SOA artifacts. To address this need, this paper presents a hierarchical quality assessment model for early assessment of SOA system quality. By defining desirable quality attributes and tracing the necessary metrics required to measure them, the approach establishes an assessment model for identification of metrics at different abstraction levels. Using the model, design problems can be detected and resolved before they make their way into the implemented system, where they are more difficult to resolve. The model is validated against an empirical study on an existing SOA system to evaluate the quality impact from explicit and implicit changes to its requirements.",2008,0, 4120,Test Case Prioritization Based on Analysis of Program Structure,"Test case prioritization techniques have been empirically proved to be effective in improving the rate of fault detection in regression testing. However, most previous techniques assume that all faults have equal severity, which does not match practice. In addition, because most of the existing techniques rely on information gained from previous execution of test cases or source code changes, few of them can be directly applied to non-regression testing. In this paper, aiming to improve the rate of severe fault detection for both regression testing and non-regression testing, we propose a novel test case prioritization approach based on the analysis of program structure. The key idea of our approach is the evaluation of the testing-importance of each module (e.g., method) covered by test cases. 
As a proof of concept, we implement $Apros$, a test case prioritization tool, and perform an empirical study on two real, non-trivial Java programs. The experimental results indicate that our approach could be a promising solution to improve the rate of severe fault detection.",2008,0, 4121,Early Estimate the Size of Test Suites from Use Cases,"Software quality is becoming an increasingly important factor in software marketing. It is well known that software testing is an important activity to ensure software quality. Despite the important role that software testing plays, little is known about the prediction of test suite size. Estimation of testing size is a crucial activity among the tasks of testing management. Work plans and subsequent estimations of the effort required are made based on the estimation of test suite size. The earlier we estimate test suite size, the more benefit we will get in the testing process. This paper presents an experience-based approach for test suite size estimation. The main findings are: (1) a model of use case verification points; (2) a linear relationship between use case verification points and test case number. The test case number prediction model is deduced from the data of real projects in a financial software company.",2008,0, 4122,Theoretical Maximum Prediction Accuracy for Analogy-Based Software Cost Estimation,"Software cost estimation is an important area of research in software engineering. Various cost estimation model evaluation criteria (such as MMRE, MdMRE etc.) have been developed for comparing prediction accuracy among cost estimation models. All of these metrics capture the residual difference between the predicted value and the actual value in the dataset, but ignore the importance of dataset quality. What is more, they implicitly assume the prediction model to be able to predict with up to 100% accuracy at its maximum for a given dataset. Given that these prediction models only provide an estimate based on observed historical data, absolute accuracy cannot possibly be achieved. It is therefore important to realize the theoretical maximum prediction accuracy (TMPA) for the given model with a given dataset. In this paper, we first discuss the practical importance of this notion, and propose a novel method for the determination of TMPA in the application of analogy-based software cost estimation. Specifically, we determine the TMPA of analogy using a unique dynamic K-NN approach to simulate and optimize the prediction system. The results of an empirical experiment show that our method is practical and important for researchers seeking to develop improved prediction models, because it offers an alternative for practical comparison between different prediction models.",2008,0, 4123,Experimental Study of Discriminant Method with Application to Fault-Prone Module Detection,"Some techniques have been applied to improving software quality by classifying software modules into fault-prone or non fault-prone categories. This can help developers focus on high risk fault-prone modules. In this paper, a distribution-based Bayesian quadratic discriminant analysis (D-BQDA) technique is experimentally investigated to identify software fault-prone modules. 
Experiments with software metrics data from two real projects indicate that this technique can classify software modules into a proper class with a lower misclassification rate and a higher efficiency.",2008,0, 4124,Simultaneously Removing Noise and Selecting Relevant Features for High Dimensional Noisy Data,"The classification of noisy training data in high dimensions suffers from the concurrent negative effects of noise and irrelevant/redundant features. Noise disrupts the training data and irrelevant/redundant features prevent the classifier from picking relevant features in building the model. Therefore they may reduce classification accuracy. This paper introduces a novel approach to improve the quality of training data sets with a noisy dependent variable and high dimensionality by simultaneously removing noisy instances and selecting relevant features for classification. Our approach relies on two genetic algorithms, one for noise detection and the other for feature selection, and allows them to exchange their results periodically at certain generation intervals. Prototype selection is used to improve the performance along with the genetic algorithm in the noise detection method. This paper shows that our approach enhances the quality of noisy training data sets with high dimension and substantially increases the classification accuracy.",2008,0, 4125,Force Feature Spaces for Visualization and Classification,"Distance-preserving dimension reduction techniques can fail to separate elements of different classes when the neighborhood structure does not carry sufficient class information. We introduce a new visual technique, K-epsilon diagrams, to analyze dataset topological structure and to assess whether intra-class and inter-class neighborhoods can be distinguished. We propose a force feature space data transform that emphasizes similarities between same-class points and enhances class separability. We show that the force feature space transform combined with distance-preserving dimension reduction produces better visualizations than dimension reduction alone. When used for classification, force feature spaces improve performance of K-nearest neighbor classifiers. Furthermore, the quality of force feature space transformations can be assessed using K-epsilon diagrams.",2008,0, 4126,Evaluating Interpolation-Based Power Management,"Power management for WSNs can take many forms, from adaptively tuning the power consumption of some of the components of a node to hibernating it completely. In the latter case, the competence of the WSN must not be compromised. In general, the competence of a WSN is its ability to perform its function in an accurate and timely fashion. These two, related, Quality of Service (QoS) metrics are primarily affected by the density and latency of data from the environment, respectively. Without adequate density, interesting events may not be adequately observed or may be missed completely by the application, while stale data could result in event detection occurring too late. In opposition to this is the fact that the energy consumed by the network is related to the number of active nodes in the deployment. Therefore, given that the nodes have finite power resources, a trade-off exists between the longevity and QoS provided by the network and it is crucial that both aspects are considered when evaluating a power management protocol. 
In this paper, we present an evaluation of a novel node hibernation technique based on interpolated sensor readings according to these four metrics: energy consumption, density, message latency and the accuracy of an application utilising the data from the WSN. A comparison with a standard WSN that does not engage in power management is also presented, in order to show the overhead in the protocol's operation.",2008,0, 4127,Orderly Random Testing for Both Hardware and Software,"Based on random testing, this paper introduces a new concept of orderly random testing for both hardware and software systems. Random testing, having been employed for years, seems to be inefficient because of its random selection of test patterns. Therefore, a new concept of pre-determined distance among test vectors is proposed in the paper to make it more effective in testing. The idea is based on the fact that the larger the distance between two adjacent test vectors in a test sequence, the more faults will be detected by the test vectors. The procedure of constructing such a testing sequence is presented in detail. The new approach has shown its remarkable advantage of fitting in with both hardware and software testing. Experimental results and mathematical analysis are also given to evaluate the performance of the novel method.",2008,0, 4128,Bayesian Inference Approach for Probabilistic Analogy Based Software Maintenance Effort Estimation,"Software maintenance effort estimation is essential for the success of the software maintenance process. In the past decades, many methods have been proposed for maintenance effort estimation. However, most existing estimation methods only produce point predictions. Due to the inherent uncertainties and complexities in the maintenance process, accurate point estimates are often obtained with great difficulty. Therefore some prior studies have been focusing on probabilistic predictions. Analogy Based Estimation (ABE) is one popular point estimation technique. This method is widely accepted due to its conceptual simplicity and empirical competitiveness. However, there is still a lack of a probabilistic framework for the ABE model. In this study, we first propose a probabilistic framework of ABE (PABE). The predictive PABE is obtained by integrating over its parameter k, the number of nearest neighbors, via Bayesian inference. In addition, PABE is validated on four maintenance datasets with comparisons against other established effort estimation techniques. The promising results show that PABE could largely improve the point estimations of ABE and achieve quality probabilistic predictions.",2008,0, 4129,A New Paradigm for Software Reliability Modeling – From NHPP to NHGP,"Non-homogeneous gamma process (NHGP) models with typical reliability growth patterns are developed for software reliability assessment in order to overcome a weak point of the usual non-homogeneous Poisson process (NHPP) models. Though the analytical treatment of NHGPs as stochastic point processes is not so easy in general, they have the advantage of involving the NHPPs as well as a gamma renewal process as special cases, and are rather tractable for parameter estimation by means of the method of maximum likelihood. We perform the goodness-of-fit test for several NHGP-based software reliability models (SRMs) and compare them with the existing NHPP-based ones. 
Through a numerical example with real software fault data, it is shown that the NHGP-based SRMs can provide better goodness-of-fit performance in earlier testing phases than the NHPP-based ones, but approach them gradually as the testing time goes on. This implies that our new, flexible software reliability modeling framework can better describe the software-fault detection phenomenon when less information on software fault data is available.",2008,0, 4130,Hyper-Erlang Software Reliability Model,"This paper proposes a hyper-Erlang software reliability model (HErSRM) in the framework of non-homogeneous Poisson process (NHPP) modeling. The proposed HErSRM is a generalized model which contains some existing NHPP-based SRMs like the Goel-Okumoto SRM and the Delayed S-shaped SRM, and can represent a variety of software fault-detection patterns. Such characteristics are useful to solve the model selection problem arising in the practical use of NHPP-based SRMs. More precisely, we discuss the statistical inference of HErSRM based on the EM (expectation-maximization) algorithm. In numerical experiments, we show that the HErSRM outperforms conventional NHPP-based SRMs with respect to fitting ability.",2008,0, 4131,PRASE: An Approach for Program Reliability Analysis with Soft Errors,"Soft errors are emerging as a new challenge in computer applications. Current studies about soft errors mainly focus on the circuit and architecture level. Few works discuss the impact of soft errors on programs. This paper presents a novel approach named PRASE, which can analyze the reliability of a program under the effect of soft errors. Based on simple probability theory and the corresponding assembly code of a program, we propose two models for analyzing the probabilities of error generation and error propagation. The analytical performance is increased significantly with the help of basic block analysis. The program's reliability is determined according to its actual execution paths. We propose a factor named PVF (program vulnerability factor), which represents the characteristic of a program's vulnerability in the presence of soft errors. The experimental results show that the reliability of a program has a connection with its structure. Compared with traditional fault injection techniques, PRASE has the advantage of faster speed and lower cost with more general results.",2008,0, 4132,Training Security Assurance Teams Using Vulnerability Injection,"Writing secure Web applications is a complex task. In fact, a vast majority of Web applications are likely to have security vulnerabilities that can be exploited using simple tools like a common Web browser. This represents a great danger as the attacks may have disastrous consequences for organizations, harming their assets and reputation. To mitigate these vulnerabilities, security code inspections and penetration tests must be conducted by well-trained teams during the development of the application. However, effective code inspections and testing take time and cost a lot of money, even before any business revenue. Furthermore, software quality assurance teams typically lack the knowledge required to effectively detect security problems. In this paper we propose an approach to quickly and effectively train security assurance teams in the context of web application development. 
The approach combines a novel vulnerability injection technique with relevant guidance information about the most common security vulnerabilities to provide a realistic training scenario. Our experimental results show that a short training period is sufficient to clearly improve the ability of security assurance teams to detect vulnerabilities during both code inspections and penetration tests.",2008,0, 4133,Panel: SOA and Quality Assurance,This panel session discusses which qualities are of particular importance for SOA and it explores the possible ways to guarantee them. Presentations in the session will explore the use of different techniques and approaches in order to foster the adoption of quality modeling techniques by industrial software systems.,2008,0, 4134,Software Defect Prediction Using Call Graph Based Ranking (CGBR) Framework,"Recent research on static code attribute (SCA) based defect prediction suggests that a performance ceiling has been achieved and this barrier can be exceeded by increasing the information content in data. In this research we propose a static call graph based ranking (CGBR) framework, which can be applied to any defect prediction model based on SCA. In this framework, we model both intra-module properties and inter-module relations. Our results show that defect predictors using the CGBR framework can detect the same number of defective modules, while yielding significantly lower false alarm rates. On industrial public data, we also show that using the CGBR framework can improve testing efforts by 23%.",2008,0, 4135,IFPUG-COSMIC Statistical Conversion,"One of the main issues faced within the Functional Size Measurement (FSM) community is the convertibility issue between FSM methods. Particular attention in recent years has been devoted to finding a mathematical function for converting IFPUG functional size units to the newer COSMIC ones. Starting from the data sets and experiences described in previous studies, some points of attention about cost and quality in the data gathering process emerge. This paper analyzes the data gathering process issue and proposes a solution for overcoming such difficulties. From an application of a repeatable and verifiable procedure, performed in a university course on Software Engineering with the support of an experienced measurer, two new data sets were derived. Finally, an analysis of all datasets was done, presenting a possible interval for the conversion between IFPUG and COSMIC fsu.",2008,0, 4136,AODE for Source Code Metrics for Improved Software Maintainability,"Software metrics are collected at various phases of the whole software development process, in order to assist in monitoring and controlling software quality. However, software quality control is complicated, because of the complex relationship between these metrics and the attributes of a software development process. To solve this problem, many excellent techniques have been introduced into the software maintainability domain. In this paper, we propose a novel classification method--Aggregating One-Dependence Estimators (AODE)--to support and enhance our understanding of software metrics and their relationship to software quality. Experiments show that the performance of AODE is much better than that of eight traditional classification methods and it is a promising method for software quality prediction. 
Furthermore, we present a Symmetrical Uncertainty (SU) based feature selection method to reduce the number of source code metrics taking part in classification, making these classifiers more efficient while keeping their performance from being undermined. Our empirical study shows the promising capability of SU for selecting relevant metrics and preserving the original performance of the classifiers.",2008,0, 4137,An Improving Fault Detection Mechanism in Service-Oriented Applications Based on Queuing Theory,"SOA has become more and more popular, but fault tolerance is not fully supported in most existing SOA frameworks and solutions provided by various major software companies. For SOA implementations with a large number of users, services, or traffic, maintaining the necessary performance levels of applications integrated using an ESB presents a substantial challenge, both to the architects who design the infrastructure as well as to the IT professionals who are responsible for administration. In this paper, we improve the performance model for analyzing and detecting faults based on queuing theory. The performance of services of SOA applications is measured in two categories (individual services and composite services). We improve the model of the individual services and add performance measurement for composite services.",2008,0, 4138,Learning to rank with voted multiple hyperplanes for documents retrieval,"The central problem for many applications in information retrieval is ranking. Learning to rank has been considered as a promising approach for addressing the issue. In this paper, we focus on applying learning to rank to document retrieval, particularly the approach of using multiple hyperplanes to perform the task. Ranking SVM (RSVM) is a typical method of learning to rank. We point out that although RSVM is advantageous, it still has shortcomings. RSVM employs a single hyperplane in the feature space as the model for ranking, which is too simple to tackle complex ranking problems. In this paper, we look at an alternative approach to RSVM, which we call the ""multiple vote ranker"" (MVR), and make comparisons between the two approaches. MVR employs several base rankers and uses the vote strategy for final ranking. We study the performance of the two methods with respect to several evaluation criteria, and the experimental results on the OHSUMED dataset show that MVR outperforms RSVM, both in terms of quality of results and in terms of efficiency.",2008,0, 4139,Development of a New Optimization Algorithm Based on Artificial Immune System and Its Application,"In recent years, research has taken an interest in the design of approximation algorithms due to the need for these algorithms in solving many problems of science and engineering like system modeling, identification of plants, controller design, fault detection, computer security, prediction of data sets, etc. The area of Artificial Immune System (AIS) is emerging as an active and attractive field involving models, techniques and applications of great diversity. In this paper a new optimization algorithm based on AIS is developed. The proposed algorithm has been suitably applied to develop practical applications like the design of a new model for efficient approximation of nonlinear functions and identification of nonlinear systems in noisy environments. 
Simulation studies of a few benchmark function approximation and system identification problems are carried out to show the superior performance of the proposed model over the standard methods in terms of response matching, accuracy of identification and convergence speed achieved.",2008,0, 4140,A Multipath Forwarding Algorithm Based-On Predictive Path Loss Rate for Real-Time Application,"Delivery of real time streaming applications, such as voice over IP (VoIP), in packet switched networks is based on dividing the stream into packets and transferring each of the packets on an individual basis to the destination. To study the effect of packet dispersion on the quality of VoIP applications, we focus on the effect of the packet loss rate on the applications, and present a model in which packets of a certain session are dispersed over multiple paths based on the concept of the predictive path loss rate (PPLR). The proposed PPLR strategy requires the dynamic prediction of loss levels over all available network paths. This predictive information is then used as an input to adjust the allocation proportion for the next prediction period. Through simulations, we show that the proposed PPLR forwarding strategy provides significantly better performance than ECMP.",2008,0, 4141,Research on Organizational-Level Software Process Improvement Model and Its Implementation,"The development of software products is a complex activity with a large number of factors involved in defining success. Measurement is a necessary prerequisite for software process improvement. However, few guidelines exist for systematic planning of measurement programs within software projects and for multi-project selection. This paper discusses the major problems in software process measurement and presents an organizational-level software process improvement model (O-SPIM) to support software process improvement. Based on O-SPIM, software organizations can select suitable programs among potential projects, establish an adaptive measurement process, and execute only the measurements closely related to the process goals, focusing on their particular business environment. Subsequently, some methods for a comprehensive assessment of quality estimation to support higher-level quantitative management are suggested. The implementation of O-SPIM and the related methods is introduced in the end. These works were integrated in a toolkit called SQMSP which was used in some medium-sized enterprises in China.",2008,0, 4142,A LD-aCELP Speech Coding Algorithm Based on Modified SOFM Vector Quantizer,"According to the characteristics of codebook size and codeword dimension in low delay speech coding algorithms, a codebook design algorithm based on a modified self-organizing feature map (SOFM) neural network is put forward. The input vectors and weight vectors are normalized. In order to reduce the computational complexity and improve the codebook performance, the adaptive adjusting process of the network weights is decomposed into two stages of ordering and convergence. The modified algorithm is used to generate the vector quantization codebook in the low delay speech coding algorithm. Experiment results show that the modified algorithm improves the performance of the SOFM network significantly. 
Compared with the basic LBG algorithm, the synthesized speech using the codebook from the modified SOFM neural network is greatly improved in terms of subjective and objective quality.",2008,0, 4143,Test Case Prioritization for Multiple Processing Queues,"Test case prioritization is an effective technique that helps to increase the rate of fault detection or code coverage in regression testing. However, all existing methods can only prioritize test cases into a single queue. Once there are two or more machines that participate in testing, existing techniques are not applicable any more. To extend the prioritization methods to a parallel scenario, this paper defines the prioritization problem in such a scenario and applies the task scheduling method to prioritization algorithms to help partition a test suite into multiple prioritized subsets. Besides, this paper also discusses the limitations of previous metrics and proposes a new measure of the effectiveness of prioritization methods in a parallel scenario. Finally, a case study is performed to illustrate the algorithms and metrics presented in this article.",2008,0, 4144,Research on Fuzzy-Grey Comprehensive Evaluation of Software Process Modeling Methods,"A large body of practice shows that software process modeling techniques can be effectively used to describe and analyze software processes and offer opportunities to discover process improvements. There are many kinds of software process modeling methods, so it is important to make a reasonable evaluation of software process modeling methods, which can help developers choose the most appropriate modeling method according to the specific modeling environment and requirements for achieving the best modeling effect. On the basis of an evaluation system of software process modeling methods, a comprehensive evaluation method which combines fuzzy evaluation and grey theory is applied to evaluate some modeling methods. It can make full use of the fuzziness and greyness of the evaluation information given by experts, which makes the evaluation more objective and accurate. Further, an example shows that the method can evaluate modeling methods soundly and has feasibility and engineering value for software development project practice.",2008,0, 4145,Full-Reference Quality Assessment for Video Summary,"As video summarization techniques have attracted more and more attention for efficient multimedia data management, quality assessment of video summaries is required. To address the lack of automatic evaluation techniques, this paper proposes a novel framework including several new algorithms to assess the quality of a video summary against a given reference. First, we partition the reference video summary and the candidate video summary into sequences of summary units (SU). Then, we utilize an alignment based algorithm to match the SUs in the candidate summary with the SUs in the corresponding reference summary. Third, we propose a novel similarity based 4C-assessment algorithm to evaluate the candidate video summary from the perspectives of coverage, conciseness, coherence, and context, respectively. Finally, the individual assessment results are integrated according to the user's requirements by a learning based weight adaptation method. 
The proposed framework and techniques are evaluated on a standard dataset from TRECVID 2007 and show good performance in automatic video summary assessment.",2008,0, 4146,APD-based measurement technique to estimate the impact of noise emission from electrical appliances on digital communication systems,"This paper describes a technique to measure noise emissions from electrical appliances and study their impact on digital communication systems by using the amplitude probability distribution (APD) methodology. The APD has been proposed within CISPR for the measurement of electromagnetic noise emission. We present a measurement approach which utilizes a programmable digitizer with analysis software to evaluate the noise APD pattern and probability density function (PDF). A unique noise APD pattern is obtained from each measurement of noise emission from different appliances. The noise PDF is useful for noise modeling and simulation, from which we can estimate the degradation of digital communication performance in terms of bit error probability (BEP). This technique provides a simple platform to examine the effect of electrical appliances' noise emission on specific digital communication services.",2008,0, 4147,Evaluating Quality of Software Systems by Design Patterns Detection,"The software industry demands the development of products at a very fast rate with the most effective solutions. For this reason, most components are intended to be reused again and again. The use of design patterns can speed up the process of product development by providing pre-tested footsteps to follow. Design patterns show their effectiveness from the design phase to maintenance. Design patterns can be analyzed for their quality, which is the most necessary aspect of any software structure. This is a metric based study of design patterns that proposes metrics suitable for all the patterns. The higher the quality of the design patterns used, the higher the quality of the product, with more flexibility towards change and maintenance. The work can be further extended by determining the most suitable design patterns for a given problem, providing highly reusable solutions.",2008,0, 4148,Application of Random Forest in Predicting Fault-Prone Classes,"There are available metrics for predicting fault prone classes, which may help software organizations in planning and performing testing activities. This may be possible due to proper allocation of resources on fault prone parts of the design and code of the software. Hence, the importance and usefulness of such metrics is understandable, but empirical validation of these metrics is always a great challenge. The random forest (RF) algorithm has been successfully applied for solving regression and classification problems in many applications. This paper evaluates the capability of the RF algorithm in predicting fault prone software classes using open source software. The results indicate that the prediction performance of random forest is good. However, similar types of studies are required to be carried out in order to establish the acceptability of the RF model.",2008,0, 4149,Class Ordering Tool – A Tool for Class Ordering in Integration Testing,"The presence of cyclic dependency calls in integration testing is a major problem in determining an order of classes to be tested. We propose an approach to solve this problem. 
Hence, this paper proposes a class ordering tool (CO Tool) for integration testing based on our approach for finding a test order using an object-oriented slicing technique. Our approach breaks cycles by slicing classes for partial testing, whereas many researchers delete relationships to break cycles and create stubs to represent testing interactions between classes. Therefore, the benefit of our approach is to decrease the cost of implementing test stubs.",2008,0, 4150,Analysis of software quality cost modeling’s industrial applicability with focus on defect estimation,The majority of software quality cost models is by design capable of describing costs retrospectively but relies on defect estimation in order to provide a cost forecast. We identify two major approaches to defect estimation and evaluate them in a large scale industrial software development project with special focus on applicability in quality cost models. Our studies show that neither static models based on code metrics nor dynamic software reliability growth models are suitable for an industrial application.,2008,0, 4151,Optimization of feature weights and number of neighbors for Analogy based cost Estimation in software project management,"Software cost estimation affects almost all activities of software project development such as: bidding, planning, and budgeting, thus it is very crucial to the success of software project management. In past decades, many methods have been proposed for cost estimation. Analogy based cost estimation (ABE) is among the most popular techniques due to its conceptual simplicity and empirical competitiveness. In order to improve the ABE model, many previous studies have focused on optimizing the feature weights in the similarity function. However, according to some prior studies, the K parameter for the K-nearest neighbor is also essential to the performance of ABE. Nevertheless, few studies attempt to optimize the K number of neighbors and most of them are based on a trial-and-error scheme. In this study, we propose a genetic algorithm to simultaneously optimize the K parameter and the feature weights for ABE (OKFWSABE). The proposed OKFWABE method is validated on three real-world software engineering data sets. The experiment results show that our method could significantly improve the prediction accuracy of conventional ABE and has the potential to become an effective method for software cost estimation.",2008,0, 4152,Simulation-based FDP & FCP analysis with queueing models,"Continuous efforts have been devoted to software reliability modeling evolution over the past decades in order to adapt to practical, complex software testing environments. Many models have been proposed to describe the software fault related process. These software reliability growth models (SRGMs) have evolved from describing one fault detection process (FDP) into incorporating the fault correction process (FCP) as well, in order to provide higher accuracy with more information. To provide mathematical tractability, models need to have a closed form, with restrictive assumptions in a narrow sense. This in turn confines their capability for general applications. Alternatively, in this paper a general simulation based queueing modeling framework is proposed to describe FDP and FCP, with resource factors from practical software testing incorporated. Good simulation performance is observed with a numerical example. Furthermore, release time and debugger staffing issues are investigated with a revised cost model. 
The analysis is conducted through a simulation optimization approach.",2008,0, 4153,Correlation analysis between maturity factors and performance indexes in software project,"Software development has trended toward scaled, standardized and industrialized production with the development of information technology and the software industry. Different kinds of project maturity models for enhancing software development performance have been broadly exploited and used in practice. Based on a literature review and enterprise research, this paper puts forward 30 main factors which cover the three aspects of technology, organization and personnel. At the same time, the author classified the software development performance estimation indexes into five categories: quality, satisfaction, knowledge, control, and efficiency. Drawing on large-scale questionnaire research, the author also applied SPSS statistical instruments consisting of factor analysis, correlation analysis and regression analysis to analyze the relationship between maturity factors and performance indexes. The results indicated that there are close connections between project performance and the maturity factors, and that different maturity factors affect project performance to a certain extent.",2008,0, 4154,Detecting Defects in Golden Surfaces of Flexible Printed Circuits Using Optimal Gabor Filters,"This paper studies the application of advanced computer image processing techniques for solving the problem of automated defect detection for golden surfaces of flexible printed circuits (FPC). A special defect detection scheme based on a semi-supervised mechanism is proposed, which consists of an optimal Gabor filter and a smoothing filter. The aim is to automatically discriminate between ""known"" non-defective background textures and ""unknown"" defective textures of golden surfaces of FPC. In developing the scheme, the parameters of the optimal Gabor filter are searched with the help of a genetic algorithm based on constrained minimization of a Fisher cost function. The performance of the proposed defect detection scheme is evaluated off-line by using a set of golden images acquired from a CCD. The results exhibit accurate defect detection with low false alarms, thus showing the effectiveness and robustness of the proposed scheme.",2008,0, 4155,Deformable Block Motion Estimation and Compensation in the Redundant Domain,"Because the redundant discrete wavelet transform (RDWT) is shift invariant, motion estimation algorithms in the redundant wavelet domain have good performance. The deformable block matching algorithm (DBMA) not only can reduce the 'block effect' introduced by the traditional block matching algorithm but also can effectively predict non-translational motion. So, this paper mainly focuses on deformable block motion estimation and compensation in the redundant discrete wavelet transform domain (RDWT-DBMA). The motion estimation is done in the biorthogonal 9/7 redundant wavelet domain, and a four-node-based model is used in the nodal-based deformable matching algorithm. Experimental results show that RDWT-DBMA has better subjective and objective image quality than the full search algorithm (FS), which has the best precision among block matching algorithms (BMA), especially for sequences with fast motion, and the average PSNR of the image can be improved by 3.17dB.",2008,0, 4156,Resource Planning Heuristics for Service-Oriented Workflows,"Resource allocation and resource planning, especially in a SOA and grid environment, become crucial. 
Particularly, in an environment with a huge number of workflow consumers requesting a decentralized cross-organizational workflow, performance evaluation and execution management of service-oriented workflows gain in importance. The need for effective and efficient workflow management forces enterprises to use intelligent optimization models and heuristics to compose workflows out of several services under real-time conditions. This paper introduces the required architecture, the workflow performance extension WPX.KOM, for resource planning and workload prediction purposes. Furthermore, optimization approaches and a high-performance heuristic solving the addressed resource planning problem with low computational overhead are presented.",2008,0, 4157,Error Correcting Output Coding-Based Conditional Random Fields for Web Page Prediction,"Web page prefetching has been used efficiently to reduce the access latency problem of the Internet; its success mainly relies on the accuracy of Web page prediction. As powerful sequential learning models, conditional random fields (CRFs) have been used successfully to improve Web page prediction accuracy when the total number of unique Web pages is small. However, because the training complexity of CRFs is quadratic in the number of labels, when applied to a Web site with a large number of unique pages, the training of CRFs may become very slow and even intractable. In this paper, we decrease the training time and computational resource requirements of CRFs training by integrating the error correcting output coding (ECOC) method. Moreover, since the performance of ECOC-based methods crucially depends on the ECOC code matrix in use, we employ a coding method, search coding, to design a code matrix of good quality.",2008,0, 4158,An Attack Modeling Based on Hierarchical Colored Petri Nets,"Today's serious attacks are complex, multi-stage scenarios and can involve bypassing multiple security mechanisms and the use of numerous computer systems. A host which has been controlled by an attacker can become a stepping stone for further intrusion and destruction. Providing attack graphs is one of the most direct and effective ways to analyze interactions among network components and sequences of vulnerabilities. However, the findings obtained from an attack graph highly depend on the quality of the modeling. In this paper, attack modeling based on Petri nets is extended and an approach based on hierarchical Colored Petri nets is provided. We use Colored Petri nets to describe attacks at two levels, a general one and a specific one. These treatments can further facilitate the understanding of network vulnerabilities and enhance effective protection measures.",2008,0, 4159,Estimate Test Execution Effort at an Early Stage: An Empirical Study,"Software testing is becoming more and more important as it is a widely used activity to ensure software quality. Test execution becomes an activity in the critical path of a project. In this case, early estimation of test execution effort can benefit both test managers and software projects. This paper reports an empirical study on early test execution effort estimation. In the study, we propose an approach which mainly consists of two parts: test case number prediction from use cases and test effort estimation based on the test suite execution vector model, which combines the test case number, test execution complexity and its tester together.
For each part, we evaluate it with data from real projects of a financial software company.",2008,0, 4160,Adaptive partition size temporal error concealment for H.264,"Existing temporal error concealment methods for H.264 often decide the partition size of the lost macroblock (MB) before recovering the motion information, without an actual quality comparison between different partition modes. In this paper, we propose to select the best partition mode by minimizing the Weighted Double-Sided External Boundary Matching Error (WDS-EBME), which jointly measures the inter-MB boundary discontinuity, inter-partition boundary discontinuity and intra-partition block artifacts in the recovered MB. The proposed method estimates the best motion vectors for each of the candidate partition modes, calculates the overall WDS-EBME values for them, and selects the partition mode with the smallest overall WDS-EBME to recover the lost MB. We also propose a progressive concealment order for the 4×4 partition mode. Test results show that the adaptive partition size method always outperforms the fixed partition size methods. Both the adaptive and fixed partition size methods are much superior to the temporal error concealment (TEC) method in the H.264 reference software.",2008,0, 4161,A cross-layer quality driven approach in Web service selection,"In order to make Web services operate at optimal performance, it is necessary to make an effective decision on selecting the most suitable service provider among a set of Web services that provide identical functions. We argue that the network performance between the service container and the service consumer can have a significant influence on the performance of the Web service that the consumer actually receives, while current research places limited emphasis on this issue. In this paper, we propose a cross-layer approach for Web service selection which takes the network performance issue into consideration during the service selection process. A discrete representation of cross-layer performance correlation is proposed, based on which a qualitative reasoning method is introduced to predict the performance at the service user side. The integration of the quality driven Web service selection method into service oriented architecture is also considered. A simulation is designed and the experiment results suggest that the new approach significantly improves the accuracy of Web service selection and delivers a performance improvement for Web services.",2008,0, 4162,Image quality assessment software of security screening system,"Security screening systems based on imaging technology, like X-ray and gamma-ray based screening systems, are widely used in the field of aviation security. The image quality of these systems reflects detection performance directly. In this paper a set of image quality assessment software based on human visual sensitivity (HVS) has been developed. The software integrates the HVS method into the objective method and overcomes the shortcomings of both objective and subjective image processing methods. Experiments show that the assessment results are consistent with the perception of operators.
With the help of the image quality assessment software, the image quality assessment of the security screening system can be performed fairly and efficiently, which contributes to the safety of aviation transportation.",2008,0, 4163,Multiphysic modeling and design of carbon nanotubes based variable capacitors for microwave applications,"This paper describes the multiphysic methodology developed to design carbon nanotube (CNT) based variable capacitors (varactors). Instead of using classical RF-MEMS design methodologies, we take into account the real shape of the CNT, its nanoscale dimensions and its real capacitance to ground. A capacitance-based numerical algorithm has then been developed in order to predict the pull-in voltage and the RF capacitance of the CNT-based varactor. This software, which has been validated by measurements on various devices, has been used to design a varactor device for which an actuation voltage of 20 V has been predicted. We finally extend the numerical modeling to describe the electromagnetic behavior of the devices. The RF performance has also been efficiently predicted, and the varactor (in parallel configuration) exhibits predicted losses of 0.3 dB at 5 GHz and a quality factor of 22 at 5 GHz, which is relevant for high quality reconfigurable circuit requirements, whereas the expected sub-microsecond switching time range opens the door to real-time tunability.",2008,0, 4164,Broadband One-port Material Characterization Method of Porous and Fluidic Materials,"Magnetic and electrical properties of materials are investigated by the use of a novel method inspired by the known distance-to-fault (DTF) measurement. The principles of the one-port method are detailed and it is verified by presenting measurement results. The complex relative permeability and permittivity of the samples are derived simultaneously by means of a network analyzer and control software. The coaxial sample holder designed for this measurement allows the investigation of liquid and porous materials. As a special feature, the method can be easily implemented into various applications, since it is based on using a scalar network analyzer. Prior knowledge of the sample length is not necessary, which provides a high level of flexibility when using the presented method.",2008,0, 4165,Dynamically reconfigurable soft output MIMO detector,"MIMO systems (with multiple transmit and receive antennas) are becoming increasingly popular, and many next-generation systems such as WiMAX, 3GPP LTE and IEEE 802.11n wireless LANs rely on the increased throughput of MIMO systems with up to four antennas at receiver and transmitter. High throughput implementation of the detection unit for MIMO systems is a significant challenge. This challenge becomes still harder because the above mentioned standards demand support for multiple modulation and coding schemes. This implies that the MIMO detector must be dynamically reconfigurable. Also, to achieve the required bit error rate (BER) or frame error rate (FER) performance, the detector has to provide soft values to advanced forward error correction (FEC) schemes like turbo codes. This paper presents an ASIC implementation of a novel MIMO detector architecture that is able to reconfigure on the fly and provides soft values as output. The design is implemented in a 45 nm predictive technology library, and has a parallelism factor of four.
The detector has many qualities of a systolic architecture and achieves a continuous throughput of 1 Gbps for QPSK, 500 Mbps for 16-QAM, and 187.5 Mbps for 64-QAM. The total area is estimated to be approximately 70 KGates equivalent, and power consumption is estimated to be 114 mW.",2008,0, 4166,Reversi: Post-silicon validation system for modern microprocessors,"Verification remains an integral and crucial phase of today's microprocessor design and manufacturing process. Unfortunately, with soaring design complexities and decreasing time-to-market windows, today's verification approaches are incapable of fully validating a microprocessor before its release to the public. Increasingly, post-silicon validation is deployed to detect complex functional bugs in addition to exposing electrical and manufacturing defects. This is due to the significantly higher execution performance offered by post-silicon methods, compared to pre-silicon approaches. Validation in the post-silicon domain is predominantly carried out by executing constrained-random test instruction sequences directly on a hardware prototype. However, to identify errors, the state obtained from executing tests directly in hardware must be compared to the one produced by an architectural simulation of the design's golden model. Therefore, the speed of validation is severely limited by the necessity of a costly simulation step. In this work we address this bottleneck in the traditional flow and present a novel solution for post-silicon validation that exposes its native high performance. Our framework, called Reversi, generates random programs in such a way that their correct final state is known at generation time, eliminating the need for architectural simulations. Our experiments show that Reversi generates tests exposing more bugs faster, and can speed up post-silicon validation by 20x compared to traditional flows.",2008,0, 4167,New approach to the modeling of Command and Control Information Systems,"The probability of success of military activities depends to a great degree on the quality of their tactical planning processes. A very important part of tactical planning is the planning of a command and control information system's (C2IS) communication infrastructure, that is, of tactical communication networks. There are several known software tools that assist the process of tactical planning. However, they are often useless because different armies use different information and communication technologies. This is the main reason we started to develop a simulation system that helps in the process of planning tactical communication networks. Our solution is based on the well-known OPNET modeler simulation environment, which is also used for some other solutions in this area. In addition to the simulation and modeling methodologies we have also developed helper software tools. TPGEN is a tool which enables the user-friendly entering and editing of tactical network models. It performs mapping from C2IS and tactical communication network descriptions to an OPNET simulation model's parameters. Because the simulation results obtained by the OPNET modeler are user-unfriendly and need expert knowledge when analyzing them, we have developed an expert system for the automatic analysis of simulation results. One of the outputs of this expert system is a user-readable evaluation of tactical networks' performance with guidance on how to improve the network. Another output is formatted for use in our tactical player.
This tactical player is an application which helps to visualize simulation and expert system results. It is also designed to control an OPNET history player in combination with 3DNV visualization of a virtual terrain. This developed solution helps, in a user-friendly way, in the process of designing and optimizing tactical networks.",2008,0, 4168,Improving software reliability and security with automated analysis,"Static-analysis tools that identify defects and security vulnerabilities in source and executables have advanced significantly over the last few years. A brief description of how these tools work is given. Their strengths and weaknesses in terms of the kinds of flaws they can and cannot detect are discussed. Methods for quantifying the accuracy of the analysis are described, including sources of ambiguity for such metrics. Recommendations for deployment of tools in a production setting are given.",2008,0, 4169,Dual mode radio: A new transceiver architecture for UWB and 60 GHz-applications,"The paper contains a new concept for the installation of radio transmission links for digital signals with a minimum of hardware and no software effort. As opposed to known architectures, the difference in information is evaluated between two free-space modes. A phase modulation is set on the differences between two transmission signals. Both transmitter and receiver work without a crystal oscillator. Due to this procedure, neither an oscillator nor time recovery is required at the receiver. Each signal can for example be demodulated by a multiplier (mixer) in the receiver. Consequently, the receiver can be easily implemented. Owing to the fact that only the difference between two transmission signals influences transmission quality, and no longer the absolute phase noise of the transmitter oscillators, the best transmission systems can be realized even with the worst phase noise of high frequency or microwave oscillators. This is the reason why this concept opens up the possibility of direct radio transmission of UWB and double-digit Gbit rates in the microwave range. The architecture, the theory, and the very good measurement results of a 2.45 GHz demonstrator in SMD technology are presented.",2008,0, 4170,ORTEGA: An Efficient and Flexible Online Fault Tolerance Architecture for Real-Time Control Systems,"Fault tolerance is an important aspect in real-time computing. In real-time control systems, tasks could be faulty due to various reasons. Faulty tasks may compromise the performance and safety of the whole system and even cause disastrous consequences. In this paper, we describe the On-demand real-time guard (ORTEGA), a new software fault tolerance architecture for real-time control systems. ORTEGA has high fault coverage and reliability. Compared with existing real-time fault tolerance architectures, such as Simplex, ORTEGA allows more efficient resource utilization and enhances flexibility. These advantages are achieved through the on-demand detection and recovery of faulty tasks. ORTEGA is applicable to most industrial control applications where both efficient resource usage and high fault coverage are desired.",2008,0, 4171,Influence of Routing Protocol on VoIP Quality Performance in Wireless Mesh Backbone,"In this paper we focus on the following question: how can routing protocols, working under different channel conditions, influence the human-perceived quality of VoIP calls in a wireless mesh backbone?
In order to answer this question, we propose a case study where we analyze two routing approaches in an 802.11 wireless mesh backbone, namely reactive AODV and proactive OLSR. We calculated the voice speech quality by making use of a revised version of the ITU-T E-model proposed in previous work. Results obtained from a highly credible stochastic simulation environment, also described in this paper, showed that AODV performs better in very hostile mediums while OLSR presents better results in friendlier mediums.",2008,0, 4172,A Distributed Intrusion Detection System Based on Agents,"Due to the rapid growth of network applications, new kinds of network attacks are emerging endlessly, so it is critical to protect networks from attackers, and intrusion detection technology has become popular. On the basis of analyzing the defects of a typical modern distributed intrusion detection system, this article proposes a distributed intrusion detection system model based on agents. This system combines static agents and mobile agents, and host-based intrusion detection systems (IDS) with network-based intrusion detection systems. The function of each module in the system is described in detail. The system uses mobile agents for decentralized data collection, data analysis and response, and has a certain dynamic learning capability. The self-adaptive ability of the system is strong and can solve the main problems of modern systems. Finally, the preliminary implementation of modules in this system, such as the agent, is given in detail and the system's performance evaluation is presented.",2008,0, 4173,An Efficient Local Bandwidth Management System for Supporting Video Streaming,"In order to guarantee continuous delivery of video streaming over a best-effort (BE) forwarding network, some quality-of-service (QoS) strategies such as RSVP and DiffServ must be used to improve the transmission performance. However, these methods are too difficult to employ in practical applications due to their technical complexity. In this paper, we design and implement an efficient local bandwidth management system to tackle this problem in an IPv6 environment. The system monitors the local access network and provides assured forwarding (AF) service by controlling the video streaming requests based on the available network bandwidth. To assess the benefit of this system, we perform tests to compare its performance with that of the conventional BE service. Our test results indicate convincingly that AF offers substantially better performance than BE.",2008,0, 4174,A new clustering approach based on graph partitioning for navigation patterns mining,We present a study of Web based user navigation pattern mining and propose a novel approach for clustering of user navigation patterns. The approach is based on graph partitioning for modeling user navigation patterns. For the clustering of user navigation patterns we create an undirected graph based on connectivity between each pair of Web pages and we propose a novel formula for assigning weights to edges in such a graph. The experimental results show that the approach can improve the quality of clustering for user navigation patterns in Web usage mining systems.
These results can be used for predicting a user's next request in large Web sites.",2008,0, 4175,Cost-efficient Automated Visual Inspection system for small manufacturing industries based on SIFT,"This paper presents a cost efficient automated visual inspection (AVI) system for small industries' quality control systems. The complex hardware and software make current AVI systems too expensive for small-size manufacturing industries. The proposed approach to AVI systems is based on an ordinary PC with a medium resolution camera without any other extra hardware. The scale invariant feature transform (SIFT) is used to achieve good accuracy and make the system applicable to different situations with different sample sizes, positions, and illuminations. The proposed method can detect three different defect types as well as locate and measure the defect percentage for more specialized utilization. To evaluate the performance of this system, different samples with different sizes, shapes, and complexities are used, and the results show that the proposed system is highly applicable to different applications and is invariant to noise, illumination changes, rotation, and transformation.",2008,0, 4176,A new adaptive hybrid neural network and fuzzy logic based fault classification approach for transmission lines protection,"In this paper, an adaptive hybrid neural network and fuzzy logic based algorithm is proposed to classify fault types in transmission lines. The proposed method is able to identify all ten shunt faults in transmission lines with a high level of robustness against variable conditions such as measured amplitudes and fault resistance. In this approach, a two-end unsynchronized measurement of the signals is used. For real-time estimation of the unknown synchronization angle and three-phase phasors, a two-layer adaptive linear neural (ADALINE) network is used. The estimated parameters are fed to a fuzzy logic system to classify fault types. This method is feasible for use in digital distance relays which are able to be programmed to share and exchange data with all protective and monitoring devices. The proposed method is evaluated by a number of simulations conducted in PSCAD/EMTDC and MATLAB software.",2008,0, 4177,Impact of metrics based refactoring on the software quality: a case study,"As a software system changes, the design of the software deteriorates, hence reducing the quality of the system. This paper presents a case study in which an inventory application is considered and efforts are made to improve the quality of the system by refactoring. The code is an open source application, namely 'inventor deluxe v 1.03', which was first assessed using the tool Metrics 1.3.6 (an Eclipse plug-in). The code was then refactored and three more versions were built. At the end of the creation of every version the code was assessed to find the improvement in quality. The results obtained after measuring various metrics helped in tracing the spots in the code which require further improvement and hence can be refactored. Most of the refactoring was done manually with little tool support. Finally, a trend was found which shows that the average complexity and size of the code reduce with refactoring-based development, which helps to make the software more maintainable.
Thus, although refactoring is time-consuming and labor-intensive work, it has a positive impact on software quality.",2008,0, 4178,StegoHunter: Passive audio steganalysis using Audio Quality Metrics and its realization through genetic search and X-means approach,"Steganography is used to hide the occurrence of communication. This creates a potential problem when this technology is misused for planning criminal activities. Differentiating an anomalous audio document (stego audio) from a pure audio document (cover audio) is difficult and tedious. This paper investigates the use of a Genetic-X-means classifier, which distinguishes a pure audio document from an adulterated one. The basic idea is that the various audio quality metrics (AQMs) calculated on cover audio signals and on stego-audio signals, vis-a-vis their denoised versions, are statistically different. Our model employs these AQMs to steganalyse the audio data. The genetic paradigm is exploited to select the AQMs that are sensitive to various embedding techniques. The classifier between cover and stego-files is built using X-means clustering on the selected feature set. The presented method can not only detect the presence of a hidden message but also identify the hiding domains. The experimental results show that the combination strategy (Genetic-X-means) can improve the classification precision even with a smaller payload compared to the traditional ANN (Back Propagation Network).",2008,0, 4179,Modeling Software Contention Using Colored Petri Nets,"Commercial servers, such as database or application servers, often attempt to improve performance via multi-threading. Improper multi-threading architectures can incur contention, limiting performance improvements. Contention occurs primarily at two levels: (1) blocking on locks shared between threads at the software level and (2) contending for physical resources (such as the CPU or disk) at the hardware level. Given a set of hardware resources and an application design, there is an optimal number of threads that maximizes performance. This paper describes a novel technique we developed to select the optimal number of threads of a target-tracking application using a simulation-based colored Petri nets (CPNs) model. This paper makes two contributions to the performance analysis of multi-threaded applications. First, the paper presents an approach for calibrating a simulation model using training set data to reflect actual performance parameters accurately. Second, the model predictions are validated empirically against the actual application performance and the predicted data is used to compute the optimal configuration of threads in an application to achieve the desired performance. Our results show that predicting the performance of application thread characteristics is possible and can be used to optimize performance.",2008,0, 4180,A performance-correctness explicitly-decoupled architecture,"Optimizing the common case has been an adage in decades of processor design practices. However, as system complexity and optimization techniques' sophistication have increased substantially, maintaining correctness under all situations, however unlikely, is contributing to the necessity of extra conservatism in all layers of the system design. The mounting process, voltage, and temperature variation concerns further add to the conservatism in setting operating parameters. Excessive conservatism in turn hurts performance and efficiency in the common case.
However, much of the system's complexity comes from advanced performance features and may not compromise the whole system's functionality and correctness even if some components are imperfect and introduce occasional errors. We propose to separate performance goals from the correctness goal using an explicitly-decoupled architecture. In this paper, we discuss one such incarnation where an independent core serves as an optimistic performance enhancement engine that helps accelerate the correctness-guaranteeing core by passing high-quality predictions and performing accurate prefetching. The lack of concern for correctness in the optimistic core allows us to optimize its execution in a more effective fashion than possible in optimizing a monolithic core with correctness requirements. We show that such a decoupled design allows significant optimization benefits and is much less sensitive to conservatism applied in the correctness domain.",2008,0, 4181,Dynamic Resource Assignment for MC-CDMA with prioritization and QoS constraint,"In this paper, we present a dynamic method for resource assignment by implementing a cross-layer design for Multi-Carrier Code Division Multiple Access (MC-CDMA) systems that adapts user allocation to channel fluctuations. Our approach consists of prioritizing users according to their class of service (CoS), assigning them to the best subchannels based on channel estimates, and allowing redundancy of assignment in case of strong degradation of others' signals. This improves upon the performance of existing methods by allowing adaptation to the system parameters. This technique has been implemented in software and validated experimentally, bringing about a 6 dB gain for a bit error rate (BER) of 10^-3 in a multipath fading environment with 16 users for 8 available codes, compared to the classic round robin (RR) assignment.",2008,0, 4182,Implementation of a high level hands-on-training at an experimental PET scanner,"The remarkable increase of medical imaging within the last decade demands the implementation of teaching programs for experts and students. Such courses must achieve both the realization of theoretical basics and an understanding of their implementation within real systems. This holds for positron emission tomography (PET) as well. PET is, among other techniques like computed tomography (CT) or magnetic resonance imaging (MRI), one of the key technologies for biological and medical imaging. To teach and demonstrate the fundamentals of CT and the physical background of PET, as well as to illustrate the signal processing of a coincidence measurement and the data processing of multi-parameter, list-mode data sets, the experimental PET scanner PET-TW05 has been developed. It is a simplified but still fully functional scanner consisting of two commercial BGO block detectors. They are fixed opposite to each other and can be moved along a linear axis and rotated around the field of view (FOV). The hardware layout and the software performance of the PET-TW05 scanner support the demonstration of important process steps of PET imaging like (i) system calibration, (ii) detector efficiency measurements, (iii) determination of the time and spatial resolution of a PET scanner, (iv) list-mode data acquisition, (v) tomographic reconstruction by means of filtered backprojection, (vi) the effect of different filter kernels as well as of time and energy windows on the image quality, (vii) the influence of scatter on the image quality.
Furthermore, the principle of time-of-flight PET can be experimentally demonstrated.",2008,0, 4183,Computer evaluation of a novel multipinhole SPECT system for cardiac imaging,"Multipinhole collimation for clinical myocardial perfusion imaging dates back to the 1970s and 1980s. However, the acceptance and use of the pinhole collimator technique for application in single photon emission computed tomography (SPECT) was impeded by the wide-spread availability of rotational dual-head cameras for general purpose imaging. Today virtually all SPECT imaging is performed with a parallel-hole collimator. In the past few years, experimental small animal SPECT systems have seen progress with pinhole collimation. There has also been an interest in designing dedicated cardiac SPECT systems for improved sensitivity to reduce imaging time. This has renewed clinical interest in multipinhole radionuclide cardiac imaging. It is known that pinhole collimation surpasses parallel-hole collimation in both sensitivity and resolution for small objects located close to the pinhole aperture. However, the short object-to-detector distance limits the size of the field of view (FOV) for the pinhole collimator, which causes part of the emission rays to miss the detector. The lost data are said to be truncated from the projection measurements. The objective of this work is to investigate the effect of truncation and limited angular sampling on image quality for a novel cardiac imaging system with stationary multipinhole collimation. Computer simulations show that the system produces high quality images of the myocardium comparable to those of a rotating multipinhole detector system. Improved quantitation for truncation of the interior problem is accomplished by calibrating the system matrix using a uniform phantom of known activity in the FOV.",2008,0, 4184,Optimized multipinhole design for mouse imaging,"To enhance high-sensitivity focused mouse imaging using multipinhole SPECT on a dual head camera, a fast analytical method was used to predict the contrast-to-noise ratio (CNR) in many points of a homogeneous cylinder for a large number of pinhole collimator designs with modest overlap. The design providing the best overall CNR, a configuration with 7 pinholes, was selected. Next, the pinhole pattern was made slightly irregular to reduce multiplexing artifacts. Two identical, but mirrored 7-pinhole plates were manufactured. In addition, the calibration procedure was refined to cope with small deviations of the camera from circular motion. First, the new plates were tested by reconstructing a simulated homogeneous cylinder measurement. Second, a Jaszczak phantom filled with 37 MBq 99mTc was imaged on a dual head gamma camera equipped with the new pinhole collimators. The image quality before and after refined calibration was compared for both heads, reconstructed separately and together. Next, 20 short scans of the same phantom were performed with single and multipinhole collimation to investigate the noise improvement of the new design. Finally, two normal mice were scanned using the new multipinhole designs to illustrate the reachable image quality of abdomen and thyroid imaging.
Depending on the location in the phantom, the CNR increased by a factor of 1 to 2.5 using the new design instead of a single pinhole design. The first proof-of-principle scans and reconstructions were successful, allowing the release of the new plates and software for preclinical studies in mice.",2008,0, 4185,Alignment and monitoring of ATLAS Muon Spectrometer,"The ATLAS Muon Spectrometer is designed to measure muons from 14 TeV p-p interactions over a wide transverse momentum (pT) range with accurate momentum resolution, taking into account the high background environment, the inhomogeneous magnetic field, and the large size of the apparatus. At high pT one of the main contributions to the resolution is the uncertainty on the Spectrometer alignment. The design resolution requires accurate control of the position and deformation of muon chambers, induced by mechanical stress and thermal gradients. Corrections provided by an optical alignment system, as well as track-based alignment procedures, are integrated in the software leading to track reconstruction and momentum measurement. We present here this optical system and the alignment procedures, along with recent results on muon reconstruction quality improvements obtained after feeding the software with alignment corrections. These results are obtained on muons from cosmic rays, which are currently being used to finalize detector commissioning. To that purpose, and also for future running of ATLAS with LHC collisions, tools to monitor the status of the Muon Spectrometer and determine the quality of the data are being developed. The goal is to spot problems during data taking and flag the data accordingly. Careful monitoring is important at the beginning of an experiment where the environment is new and requires some experience to fully comprehend; such tools can help determine problems quickly and solve them. The different steps of offline monitoring are presented here with examples.",2008,0, 4186,The ALICE Dimuon Spectrometer High Level Trigger,"The ALICE Dimuon Spectrometer High Level Trigger (dHLT) is an on-line processing stage whose primary function is to select interesting events that contain distinct physics signals from heavy resonance decays such as J/ψ and ϒ particles, amidst unwanted background events. It forms part of the High Level Trigger of the ALICE experiment, whose goal is to reduce the large data rate of about 25 GB/s from the ALICE detectors by an order of magnitude, without losing interesting physics events. The dHLT has been implemented as a software trigger within a high performance and fault tolerant data transportation framework, which is run on a large cluster of commodity compute nodes. To reach the required processing speeds, the system is built as a concurrent system with a hierarchy of processing steps. The main algorithms perform partial event reconstruction, starting with hit reconstruction on the level of the raw data received from the spectrometer. Then a tracking algorithm finds track candidates from the reconstructed hit points. Physical parameters such as momentum are extracted from the track candidates and finally a dHLT decision is made to read out the event based on certain trigger criteria.
Various simulations and commissioning tests have shown that the dHLT can expect a background rejection factor of at least 5 compared to hardware triggering alone, with little impact on the signal detection efficiency.",2008,0, 4187,Data quality monitor of the muon spectrometer tracking detectors of the ATLAS experiment at the Large Hadron Collider: First experience with cosmic rays,"The Muon Spectrometer of the ATLAS experiment at the CERN Large Hadron Collider is completely installed and many data have been collected with cosmic rays in different trigger configurations. In the barrel part of the spectrometer, cosmic ray muons are triggered with Resistive Plate Chambers, RPC, and tracks are obtained by joining segments reconstructed in three measurement stations equipped with arrays of high-pressure drift tubes, MDT. The data are used to validate the software tools for the data extraction, to assess the quality of the drift tube response and to test the performance of the tracking programs. We present a first survey of the MDT data quality based on large samples of cosmic ray data selected by the second level processors for the calibration stream. This data stream was set up to provide the high statistics needed for the continuous monitoring and calibration of the drift tube response. Track segments in each measurement station are used to define quality criteria and to assess the overall performance of the MDT detectors. Though these data were taken in non-optimized conditions, when the gas temperature and pressure were not stabilized, the analysis of track segments shows that the MDT detector system works properly and indicates that the efficiency and space resolution are in line with the results obtained in previous tests with a high energy muon beam.",2008,0, 4188,System tests of the LHCb RICH detectors in a charged particle beam,"The RICH detectors of the LHCb experiment will provide efficient particle identification over the momentum range 1-100 GeV/c. Results are presented from a beam test of the LHCb RICH system using final production pixel Hybrid Photon Detectors, the final readout electronics and an adapted version of the LHCb RICH reconstruction software. Measurements of the photon yields and Cherenkov angle resolutions for both nitrogen and C4F10 radiators agree well with full simulations. The quality of the data and the results obtained demonstrate that all aspects meet the stringent physics requirements of the experiment and are now ready for first data.",2008,0, 4189,"Surveillance of nuclear threats using multiple, autonomous detection units","A detection system is presented that involves multiple autonomous detectors, which are merged into a net-like structure. Each unit is equipped with a stabilized and linearized multichannel analyzer for the measurement of gamma spectra and optionally a He3 neutron detector. The flexible hardware and software architecture allows the simple integration of a variable number of detectors, even with different volumes. With this technique, spectroscopic screening of large areas, e.g. airports or comparably sensitive objects, can be established by an individual combination of these detectors. The general outline of the nuclide identification algorithm is presented. In the context of medical isotopes, the importance of correct reference data that incorporates the scattering effects of human bodies is clarified. A special focus is drawn to the localization of nuclear sources. The model, corresponding simulations and its applications are depicted.
Different tests and verification measures are described that refer to the nuclide identification and the source localization as well as to the spectral quality of the detectors. A finished real application is presented that combines the tested algorithms and has proven its reliability on different occasions.",2008,0, 4190,The ATLAS conditions database model for the Muon Spectrometer,"The Muon System has extensively started to use the LCG conditions database project 'COOL' as the basis for all its conditions data storage both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, but also in terms of the variety of data stored. The Muon Conditions database is responsible for almost all of the 'non-event' data and detector quality flag storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS Software. COOL implements an interval of validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data is stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve object(s) associated to a particular time. In this work, an overview of the entire Muon Conditions Database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the Conditions Data are described, with more emphasis given to the Offline Reconstruction framework ATHENA and the services developed to provide the Conditions data to the reconstruction.",2008,0, 4191,Software quality prediction techniques: A comparative analysis,"There are many software quality prediction techniques available in the literature to predict software quality. However, the literature lacks a comprehensive study to evaluate and compare various prediction methodologies so that quality professionals may select an appropriate predictor. To find a technique which performs better in general is an undecidable problem because the behavior of a predictor also depends on many other specific factors like the problem domain, nature of the dataset, uncertainty in the available data etc. We have conducted an empirical survey of various software quality prediction techniques and compared their performance in terms of various evaluation metrics. In this paper, we have presented a comparison of 30 techniques on two standard datasets.",2008,0, 4192,A meta-measurement approach for software test processes,"Existing test process assessment and improvement models intend to raise the maturity of an organization with reference to testing activities. Such process assessments are based on 'what' testing activities are being carried out, and thus implicitly evaluate process quality. Other test process measurement techniques attempt to directly assess some partial quality attribute such as efficiency or effectiveness using some test metrics. There is a need for a formalized method of test process quality evaluation that addresses both the implicit and partial nature of these current evaluations.
This paper describes a conceptual framework to specify and explicitly evaluate test process quality aspects. The framework enables the provision of evaluation results in the form of objective assessments, and problem-area identification to improve software testing processes.",2008,0, 4193,prediction of fault count data using genetic programming,"Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.",2008,0, 4194,Global Sensitivity Analysis (GSA) Measures the Quality of Parameter Estimation. Case of Soil Parameter Estimation with a Crop Model,"The behavior of crops can be accurately predicted when all the parameters of the crop model are well known, and assimilating data observed on crop status in the model is one way of estimating parameters. Nevertheless, the quality of the estimation depends on the sensitivity of model output variables to the parameters. In this paper, we quantify the link between the global sensitivity analysis (GSA) of the soil parameters of the mechanistic crop model STICS and the ability to retrieve the true values of these parameters. The global sensitivity indices were computed by a variance based method (Extended FAST) and the quality of parameter estimation (RRMSE) was computed with an importance sampling method based on Bayes theory (GLUE). Criteria based on GSA were built to link GSA indices with the quality of parameter estimation. The results show that the higher the criteria, the better the quality of parameter estimation, and GSA appeared to be useful to interpret and predict the performance of the parameter estimation process.",2008,0, 4195,Online Optimization in Application Admission Control for Service Oriented Systems,"In a service oriented environment an application is created through the composition of different service components. In this paper, we investigate the problem of application admission control in a service oriented environment. We propose an admission control system that makes admission decisions using an online optimization approach. The core part of the proposed system is an online optimization algorithm that solves a binary integer programming problem which we formulate in this paper. An online optimizer maximizes the system revenue given the system's available resources as well as the system's previous commitments. Another part of the proposed system carries out a feasibility evaluation that is intended to guarantee an agreed level of probability of success for each admitted application instance.
We use simulations and performance comparisons to show that the proposed application admission control system can improve the system revenue while guaranteeing the required level of quality of service.",2008,0, 4196,Timing Failures Detection in Web Services,"Current business critical environments increasingly rely on SOA standards to execute business operations. These operations are frequently based on Web service compositions that use several Web services over the internet and have to fulfill specific timing constraints. In these environments, an operation that does not conclude in due time may have a high cost as it can easily turn into service abandonment with financial and prestige losses to the service provider. In fact, at certain points, carrying on with the execution of an operation may be useless as a timely response will be impossible to obtain. This paper proposes a time-aware programming model for Web services that provides transparent timing failure detection. The paper illustrates the proposed model using a set of services specified by the TPC-App performance benchmark.",2008,0, 4197,A Stochastic Performance Model Supporting Time and Non-time QoS Matrices for Web Service Composition,"In recent years, Web service composition has become a new approach to overcome many difficult problems confronted by B2B e-commerce, inter-organization workflow management, enterprise application integration etc. Due to the uncertainty of the Internet and various Web services, the performance of the composed Web service cannot be ensured. How to model and predict the performance of the composed Web service is a difficult problem in Web service composition. A novel simulation model that can model and simulate time and non-time performance characteristics, called STPM+, is presented in this paper. Based on Petri nets, the STPM+ model can simulate and predict multiple performance characteristics, such as the cost, the reliability and the reputation of the composed Web service etc. To examine the validity of the STPM+ model, a visual performance evaluation tool, called VisualWSCPE, has been implemented. Besides, some simulation experiments have been performed based on VisualWSCPE. The experiment results demonstrate the feasibility and efficiency of the STPM+ model.",2008,0, 4198,Service Availability of Systems with Failure Prevention,"In this paper we present a queueing model for service availability formulated as a Petri net. We define a metric for service availability and show how it can be estimated from the model. Assuming that a failure avoidance mechanism is present, we analyze the distribution of time-to-failure. Finally, we show in experiments how service availability depends on queue length, utilization and failure prevention and how it relates to steady-state system availability.",2008,0, 4199,Time Sensitive Ranking with Application to Publication Search,"Link-based ranking has contributed significantly to the success of Web search. PageRank and HITS are the best known link-based ranking algorithms. These algorithms do not consider an important dimension, the temporal dimension. They favor older pages because these pages have many in-links accumulated over time. Bringing new and quality pages to the users is important because most users want the latest information. Existing remedies to PageRank are mostly heuristic approaches.
This paper investigates the temporal aspect of ranking with application to publication search, and proposes a principled method based on the stationary probability distribution of the Markov chain. The proposed techniques are evaluated empirically using a large collection of high energy particle physics publications. The results show that the proposed methods are highly effective.",2008,0, 4200,Towards Process Rebuilding for Composite Web Services in Pervasive Computing,"The emerging paradigm of pervasive computing and web services needs a flexible service discovery and composition infrastructure. A composite Web service is essentially a process in a loosely-coupled service-oriented architecture. It is usually a black box for service requestors and only its interfaces can be seen externally. In some scenarios, to conduct performance debugging and analysis, a workflow representation of the underlying process is required. This paper describes a method to discover such underlying processes from execution logs. Based on a probabilistic assumption model, the algorithm can discover sequential, parallel, exclusive choice and iterative structures. Some examples are given to illustrate the algorithm.",2008,0, 4201,Service Process Composition with QoS and Monitoring Agent Cost Parameters,"Service-oriented architecture (SOA) provides a flexible paradigm to dynamically compose service processes from individual services. The flexibility, on the other hand, makes it necessary to monitor and manage service behaviors at runtime for performance assurance. One solution is to deploy software monitoring agents. In this paper, we present an approach to consider agent cost at process composition by selecting services with lower monitoring costs. We propose two algorithms to perform service selection with agent cost. IGA is able to achieve efficient service selection and bounded agent cost by modeling the problem as the weighted set covering (WSC) problem. We also study a heuristic algorithm to estimate the agent cost as part of the service utility function. The execution time of the heuristic algorithm is shorter than that of IGA. On the other hand, the agent cost may not be consistently minimized using heuristic algorithms.",2008,0, 4202,The Design and Fabrication of a Full Field Quantitative Mammographic Phantom,"Breast cancer is among the leading causes of death in women worldwide. Screen-film mammography (SFM) is still the standard method used to detect early breast cancer, thus leading to early treatment. Digital mammography (DM) has recently been designated as the imaging technology with the greatest potential for improving the diagnosis of breast cancer. For successful mammography, high quality images must be achieved and maintained, and reproducible quantitative quality control (QC) testing is thus required. Assessing images of known reference phantoms is one accepted method of doing QC testing. Quantitative QC techniques are useful for the long-term follow-up of mammographic quality. Following a comprehensive critical evaluation of available mammography phantoms, it was concluded that a more suitable phantom for DM could be designed. A new, relatively inexpensive Applied Physics Group (APG) phantom was designed to be fast and easy to use, to provide the user with quantitative and qualitative measures of high and low contrast resolution over the full field of view and to demonstrate any geometric distortions.
It was designed to cover the entire image receptor so as to assess the heel effect, and to be suitable for both SFM and DM. The APG phantom was designed and fabricated with embedded test objects, and software routines were developed to provide a complete toolkit for SFM and DM QC. The test objects were investigated before embedding them.",2008,0, 4203,Incorporating varying requirement priorities and costs in test case prioritization for new and regression testing,"Test case prioritization schedules the test cases in an order that increases the effectiveness in achieving some performance goals. One of the most important performance goals is the rate of fault detection. Test cases should run in an order that increases the possibility of fault detection and also detects the most severe faults at the earliest in the testing life cycle. Test case prioritization techniques have proved to be beneficial for improving regression testing activities. While code coverage based prioritization techniques have been adopted by most scholars, test case prioritization based on requirements in a cost-effective manner has not been studied so far. Hence, in this paper, we put forth a model for system level test case prioritization (TCP) from the software requirement specification to improve user satisfaction with quality software that can also be cost effective and to improve the rate of severe fault detection. The proposed model prioritizes the system test cases based on six factors: customer priority, changes in requirement, implementation complexity, usability, application flow and fault impact. The proposed prioritization technique is evaluated in three phases with student projects and two sets of industrial projects, and the results show convincingly that the proposed prioritization technique improves the rate of severe fault detection.",2008,0, 4204,A new robust distributed real time scheduling services for RT-CORBA applications,"A distributed computing environment offers flexible control in complex embedded systems, and their software components gain complexity when these systems are equipped with many microcontrollers and software objects covering diverse platforms and electronic control units connecting hundreds or thousands of analog and digital sensors and actuators, simply known as DRE systems. These DRE systems need new inter-object communication solutions, and thus QoS-enabled middleware services and mechanisms have begun to emerge. It is also widely known that scheduling services are used in real time. We show how high-performance scheduling services over the real-time distributed environment can be achieved when adopting the RT-CORBA specification, which implements a dynamic scheduling policy to achieve end-to-end predictability and performance through a novel dynamic scheduling service.",2008,0, 4205,Data collection and analysis for the reliability prediction and estimation of a safety critical system using AIRS,"Software is an essential part of many critical and non-critical applications, and virtually any industry is dependent on computers for its basic functioning. Techniques to measure and ensure the reliability of hardware have seen rapid advances, leaving software as the bottleneck in achieving overall system reliability. Hence developers face the need to develop high quality software. With that in mind, it is necessary to provide the reliability of the software to the developers before it is shipped.
This can be achieved based on the immune system. The immune system classifies the correct data and errors, and the error data are further classified into four different data sets. Then the error rate is measured from the data sets. This will produce high quality, reliable results over a wide variety of problems compared to a range of other approaches, without the need for expert fine-tuning.",2008,0, 4206,Detecting spurious features using parity space,"Detection of spurious features is instrumental in many computer vision applications. The standard approach is feature based, where extracted features are matched between the image frames. This approach requires only vision, but is computer intensive and not yet suitable for real-time applications. We propose an alternative based on algorithms from the statistical fault detection literature. It is based on image data and an inertial measurement unit (IMU). The principle of analytical redundancy is applied to batches of measurements from a sliding time window. The resulting algorithm is fast and scalable, and requires only feature positions as inputs from the computer vision system. It is also pointed out that the algorithm can be extended to also detect non-stationary features (moving targets for instance). The algorithm is applied to real data from an unmanned aerial vehicle in a navigation application.",2008,0, 4207,Introducing Support for Release Planning of Quality Requirements – An Industrial Evaluation of the QUPER Model,"In market-driven product development and release planning, it is important for market success to find the right balance among competing quality requirements. To address this issue, a conceptual model that incorporates quality as a dimension in addition to the cost and value dimensions used in prioritisation approaches for functional requirements has been developed. In this paper, we present an industrial evaluation of the model. The results indicate that the quality performance model provides helpful information about quality requirements in release planning. All subjects stated that the most difficult estimations may be more accurate by using the quality performance model.",2008,0, 4208,Comparison of the acoustic response of attached and unattached BiSphere™ microbubbles,"Two systems that independently allow the investigation of the response of individual unattached and attached microbubbles have previously been described. Both offered methods of studying the acoustic response of single microbubbles in well defined acoustic fields. The aim of the work described here was to investigate the responses of single attached microbubbles for a range of acoustic pressures and to compare these to the backscatter from unattached single microbubbles subjected to the same acoustic fields. Single attached BiSphereTM (Point Biomedical) microbubbles were attached to polyester with poly-L-lysine. Individual attached microbubbles were insonated at 1.6 MHz for acoustic pressures ranging from 300 to 1000 kPa using a Sonos5500 (Philips Medical Systems) research ultrasound scanner. Each microbubble was aligned to 6 cycle pulse, M-mode ultrasound beams, and unprocessed backscattered RF data captured using proprietary hardware and software. The backscatter from these microbubbles was compared to that of single unattached microbubbles subjected to the same acoustic parameters; microbubbles were insonated several times to determine possible differences in rate of decrease of backscatter between attached and unattached microbubbles. 
In total, over 100 single attached microbubbles have been insonated. At 550 kPa an acoustic signal was detected for 20% of the attached microbubbles and at 1000 kPa for 63%. At acoustic pressures of 300 kPa no signal was detected. Mean RMS fundamental pressure from attached and unattached microbubbles insonated at 800 kPa was 9.7 Pa and 8.7 Pa respectively. The ratio between the first two backscattered pulses decreased with increasing pressure. However, for unattached microbubbles the magnitude of the ratio was less than that of attached (at 550 kPa mean ratio attached: 0.92 ± 0.1, unattached: 0.28 ± 0.2). There was no significant difference in the peak amplitude of the backscattered signal for unattached and attached microbubbles. BiSphereTM microbubbles comprise an internal polymer shell with an albumin coating, resulting in a stiff shell. BiSphereTM microbubbles do not oscillate in the same manner as a softer shelled microbubble, but allow gas leakage which then performs free bubble oscillations. The results here agree with previous acoustic and optical microscopy measurements which show that a proportion of microbubbles will scatter and this number increases with acoustic pressure. The lack of difference in scatter between the unattached and attached microbubbles may be attributed to the free microbubble oscillation being in the vicinity of the stiff shell, which may provide the same motion damping as a wall. Second pulse exposure shows that the wall becomes important in the survival of the free gas. These high quality measurements can be further improved by incorporating microbubble sizing to increase the specificity of the comparisons between unattached and attached microbubbles.",2008,0, 4209,Cardiac monitoring using transducers attached directly to the heart,"Cardiac ultrasound systems deliver excellent information about the heart, but are constructed for intermittent imaging and interpretation by a skilled operator. This paper presents a dedicated ultrasound system to monitor cardiac function continuously during and after cardiac surgery. The system uses miniature 10 MHz transducers sutured directly to the heart surface. M-mode images give a visual interpretation of the contraction pattern, while tissue velocity curves give detailed quantitative information. The ultrasound measurements are supported by synchronous ECG and pressure recordings. The system has been tested on pigs, demonstrating M-mode and tissue velocity measurements of good quality. When occluding the LAD coronary artery, the system detected changes in contraction pattern that correspond with known markers of ischemia. The system uses dedicated analog electronics and a PC with digitizers and LabVIEW software, and may also be useful in other experimental ultrasound applications.",2008,0, 4210,The PIRR Methodology to Estimate Resource Requirements for Distributed NM Applications in Mobile Networks,"The current centralized Network Management approach in mobile networks may lead to problems with being able to scale up when new customer services and network nodes are deployed in the network. The next generation mobile networks proposed by the 3rd Generation Partnership Project (3GPP) envision a flat network architecture comprising some thousands of network nodes, each of which will need to be managed in a timely and efficient fashion. 
This consideration has prompted a research activity to find alternative application architectures that could overcome the limits of the current centralized approaches.",2008,0, 4211,Performance Analysis of SER in DVB-H by using Reed-Solomon Code with Erasure and Non Erasure Technique,"Digital video broadcasting-handheld (DVB-H) is based on the earlier standard DVB-T, which is used for terrestrial TV broadcasting. This new standard brings features that make it possible to receive digital video broadcast of DVD quality video and sound in handheld devices; it also offers reliable high data rate reception for mobile & battery powered devices. The paper discusses and compares the key technology elements: 4 K and 2 K modes, in-depth interleavers, time slicing and multi protocol encapsulation-forward error correction (MPE-FEC). In addition, an extensive range of SER performance results from software based simulations is provided. The results suggest that by using an erasure decoding method with the Reed-Solomon code & cyclic redundancy check error detection as the link layer forward error correction, the strength of the signal is much higher while error detection & correction capability is doubled.",2008,0, 4212,Application of kernel learning vector quantization to novelty detection,"In this paper, we focus on kernel learning vector quantization (KLVQ) for handling novelty detection. The two key issues are addressed: the existing KLVQ methods are reviewed and revisited, while the reformulated KLVQ is applied to tackle novelty detection problems. Although the calculation of kernelising the learning vector quantization (LVQ) may add an extra computational cost, the proposed method exhibits better performance over the LVQ. The numerical study on one synthetic data set confirms the benefit in using the proposed KLVQ.",2008,0, 4213,ARTOO,"Intuition is often not a good guide to know which testing strategies will work best. There is no substitute for experimental analysis based on objective criteria: how many faults a strategy finds, and how fast. ""Random"" testing is an example of an idea that intuitively seems simplistic or even dumb, but when assessed through such criteria can yield better results than seemingly smarter strategies. The efficiency of random testing is improved if the generated inputs are evenly spread across the input domain. This is the idea of adaptive random testing (ART). ART was initially proposed for numerical inputs, on which a notion of distance is immediately available. To extend the ideas to the testing of object-oriented software, we have developed a notion of distance between objects and a new testing strategy called ARTOO, which selects as inputs objects that have the highest average distance to those already used as test inputs. ARTOO has been implemented as part of a tool for automated testing of object-oriented software. We present the ARTOO concepts, their implementation, and a set of experimental results of its application. Analysis of the results shows in particular that, compared to a directed random strategy, ARTOO reduces the number of tests generated until the first fault is found, in some cases by as much as two orders of magnitude. 
ARTOO also uncovers faults that the random strategy does not find in the time allotted, and its performance is more predictable.",2008,0, 4214,The effect of the number of inspectors on the defect estimates produced by capture-recapture models,"Inspections can be made more cost-effective by using capture-recapture methods to estimate post-inspection defects. Previous capture-recapture studies of inspections used relatively small data sets compared with those used in biology and wildlife research (the origin of the models). A common belief is that capture-recapture models underestimate the number of defects but their performance can be improved with data from more inspectors. This increase has not been evaluated in detail. This paper evaluates new estimators from biology not been previously applied to inspections. Using a data from seventy-three inspectors, we analyze the effect of the number of inspectors on the quality of estimates. Contrary to previous findings indicating that Jackknife is the best estimator, our results show that the SC estimators are better suited to software inspections. Our results also provide a detailed analysis of the number of inspectors necessary to obtain estimates within 5% to 20% of the actual.",2008,0, 4215,Interval Quality: Relating Customer-Perceived Quality to Process Quality,"We investigate relationships among software quality measures commonly used to assess the value of a technology, and several aspects of customer perceived quality measured by interval quality (IQ): a novel measure of the probability that a customer will observe a failure within a certain interval after software release. We integrate information from development and customer support systems to compare defect density measures and IQ for six releases of a major telecommunications system. We find a surprising negative relationship between the traditional defect density and IQ. The four years of use in several large telecommunication products demonstrates how a software organization can control customer perceived quality not just during development and verification, but also during deployment by changing the release rate strategy and by increasing the resources to correct field problems rapidly. Such adaptive behavior can compensate for the variations in defect density between major and minor releases.",2008,0, 4216,Water Quality Measurement using Transmittance and 90° Scattering Techniques through Optical Fiber Sensor,This research is the continuance of the previous developed water quality fiber sensor by Omar et al (2007). The application of spectroscopy analysis is the major tasks established in the measurement of the capacity of total suspended solids (TSS) in water sample in the unit of mg/L. The interaction between the incident light and the turbid water sample will leads the light to be either transmitted through the water sample or absorbed and scattered by water molecules and TSS particles. The resultant light produced after the interaction will be collected by plastic optical fiber located at 180deg (transmittance measurement technique) and 90deg (90deg scattering measurement technique) from the incident light. The measurement is done through BLUE and RED Systems with peak response of emitter and detector at 470 nm and 635 nm respectively. Signal interpretation and data display is established through Basic Stamp 2 microcontroller. Transmittance measurement technique has a capability to sense the amount of solids suspended in water as low as 20 mg/L. 
90deg measurement technique has a capability to sense the amount of solids suspended in water as low as 10 mg/L. Both measurement techniques recorded a very good linear correlation coefficient and low standard error.,2008,0, 4217,A framework for interoperability analysis on the semantic web using architecture models,IT decision making requires analysis of possible future scenarios. The quality of the decisions can be enhanced by the use of architecture models that increase the understanding of the components of the system scenario. It is desirable that the created models support the needed analysis effectively since creation of architecture models often is a demanding and time consuming task. This paper suggests a framework for assessing interoperability on the systems communicating over the semantic web as well as a metamodel suitable for this assessment. Extended influence diagrams are used in the framework to capture the relations between various interoperability factors and enable aggregation of these into a holistic interoperability measure. The paper is concluded with an example using the framework and metamodel to create models and perform interoperability analysis.,2008,0, 4218,Improvement model for collaborative networked organizations,Small and medium enterprises (SMEs) have huge improvement potential both in domain and in collaboration/interoperability capabilities. Before implementing respective improvement measures it's necessary to assess the performance in specific process areas which we divide in domain (e.g. tourism) and collaboration oriented ones. Both in enterprise collaboration (EC) and in enterprise interoperability (EI) the behavior of organizations regarding interoperability must be improved.,2008,0, 4219,Inaccuracies of input data relevant for PV yield prediction,"Accuracy of the PV yield prediction process, including meteorological data (direct and diffuse irradiance with its actual spectral composition and spatial distribution), material properties of encapsulation (refractive indices, absorption coefficients, thermal properties), parameters relevant for heat transfer, PV conversion parameters of the cell (temperature coefficients, spectral response, weak light performance, degradation) considerably depends on the quality of the input data applied (derived from literature, data sheets, norms, software tools, or own measurements). The contribution gives an overview of the processes involved, the relevant parameters, the accuracy achievable and the impact on yield prediction.",2008,0, 4220,Identifying factors influencing reliability of professional systems,"Modern product development strategies call for a more proactive approach to fight intense global competition in terms of technological innovation, shorter time to market, quality and reliability and accommodative price. From a reliability engineering perspective, development managers would like to estimate as early as possible how reliably the product is going to behave in the field, so they can then focus on system reliability improvement. To steer such a reliability driven development process, one of the important aspects in predicting the reliability behavior of a new product, is to know the factors that may influence its field performance. 
In this paper, two methods are proposed for identifying reliability factors and their significance in influencing the reliability of the product.",2008,0, 4221,A Deterministic Methodology for Identifying Functionally Untestable Path-Delay Faults in Microprocessor Cores,"Delay testing is crucial for most microprocessors. Software-based self-test (SBST) methodologies are appealing, but devising effective test programs addressing the true functionally testable paths and assessing their actual coverage are complex tasks. In this paper, we propose a deterministic methodology, based on the analysis of the processor instruction set architecture, for determining rules arbitrating the functional testability of path-delay faults in the data path and control unit of processor cores. Moreover, the performed analysis gives guidelines for generating test programs. A case study on a widely used 8-bit microprocessor is provided.",2008,0, 4222,Research and Design of Intelligent Wireless Power Parameter Detection System,"To assure that the electric power system is used in a safe state, we urgently need to monitor, analyze, and measure various electric power parameters, which can provide effective rectifying and reforming schemes, strengthen preventive measures, and assure safe, reliable, and economic electric power operation. Thus, electric parameter monitoring technology has emerged. Aiming at the disadvantages of present electric parameter monitoring, we design a wireless power electric parameter detection system based on single chip microprocessor (SCM) technology, wireless communication technology, and virtual instrument (VI) technology, etc. The system is composed of an upper PC and a lower PC. The lower PC module uses an SCM as its core processor and employs a voltage/current transformer module and a signal processing module to convert electric network signals into a form that can be recognized by the MSP430F149; it completes A/D conversion with the chip's own ADC12 module and controls an nRF905 wireless transceiver module to transmit signals between the lower PC and the upper PC's wireless receiving module in real time. The wireless receiving module is connected through an RS-232 serial interface to the upper PC, which runs LabVIEW, and in this way the power electric parameter detection is carried out. This paper mainly introduces the system's structure and parts of the hardware and software design in detail.",2008,0, 4223,A Method of Multimedia Software Reliability Test Based on Software Partial Quality Measurement,"According to the characteristics of multimedia software, this paper presents an approach for using software partial quality measurement and reliability prediction to adjust the test allocation of software reliability testing, which is based on the original operational profile. In this new approach, the software reliability test is not only based on the operational profile but also guided by the prediction result of software partial quality measurement. The adjustable range is decided by the operation independence factor, which can be obtained by a neural network learning algorithm and differs from software to software.",2008,0, 4224,Modeling and Analysis for Educational Software Quality Hierarchy Triangle,"The area of educational software evaluation has been increasingly muddled because of a lack of consensus among software evaluators. 
This article presents the classification and evaluation granularity of educational software, proposes an educational software quality hierarchy triangle model based on some major software quality models, and analyzes the significance and relativity of the factors. Finally, it presents a preliminary analysis of the model formalization.",2008,0, 4225,A Quantitative Approach to eMM,"eMM (e-learning maturity model) is used for evaluating the capability and maturity of an institution engaged in e-learning. Based on the original eMM, we suggest a quantitative approach, which can set up an eMM process meta-model as a frame of reference for process improvement, use a process capability baseline to measure process capability, support quality estimation and project management, and use SPC to control its continuous process improvement.",2008,0, 4226,Evaluation of Test Criteria for Space Application Software Modeling in Statecharts,"Several papers have addressed the problem of knowing which software test criteria are better than others with respect to parameters such as cost, efficiency and strength. This paper presents an empirical evaluation in terms of cost and efficiency for one test method for finite state machines, switch cover, and two test criteria of the statechart coverage criteria family, all-transitions and all-simple-paths, for a reactive system of a space application. Mutation analysis was used to evaluate efficiency in terms of killed mutants. The results show that the two criteria and the method presented the same efficiency but all-simple-paths presented a better cost because its test suite is smaller than the one generated by switch cover. Besides, the test suite due to the all-simple-paths criterion killed the mutants faster than the other test suites, meaning that it might be able to detect faults in the software more quickly than the other criteria.",2008,0, 4227,Connector-Driven Gradual and Dynamic Software Assembly Evolution,"Complex and long-lived software needs to be upgraded at runtime. Replacing a software component with a newer version is the basic evolution operation that has to be supported. It is error-prone as it is difficult to guarantee the preservation of functionalities and quality. Few existing works on ADLs fully support a component replacement process from its description to its test and validation. The main idea of this work is to have software architecture evolution dynamically driven by connectors (the software glue between components). It proposes a connector model which autonomically handles the reconfiguration of connections in architectures in order to support the versioning of components in a gradual, transparent and testable manner. Hence, the system has the choice to commit the evolution after a successful test phase of the software or roll back to the previous state.",2008,0, 4228,On Preprocessing Multi-channel Sensor Data for Online Process Monitoring,"This paper discusses online monitoring production processes based on multi-channel sensor data. Particularly the problem of transient and anomaly detection is addressed for which a processing framework consisting of a preprocessing module and a reasoning engine is outlined. While there is much theory available in the literature for the reasoning engine this is not the case for the preprocessing which massively depends on the physical interpretation and semantics of the data. 
The paper addresses these problems and proposes new normalization concepts based on regularization especially for making transients of multi-channel data comparable and adequate for further processing by a reasoning engine. A proof of concept is demonstrated by means of real data from an injection moulding process.,2008,0, 4229,Mining Bug Repositories--A Quality Assessment,"The process of evaluating, classifying, and assigning bugs to programmers is a difficult and time consuming task which greatly depends on the quality of the bug report itself. It has been shown that the quality of reports originating from bug trackers or ticketing systems can vary significantly. In this research, we apply information retrieval (IR) and natural language processing (NLP) techniques for mining bug repositories. We focus particularly on measuring the quality of the free form descriptions submitted as part of bug reports used by open source bug trackers. Properties of natural language influencing the report quality are automatically identified and applied as part of a classification task. The results from the automated quality assessment are used to populate and enrich our existing software engineering ontology to support a further analysis of the quality and maturity of bug trackers.",2008,0, 4230,2-D Software Quality Model and Case Study in Software Flexibility Research,"This work starts from the analysis of the traditional decomposed quality model and the relation quality model and discovers the deficiency of them presented the software quality. Then based on the measurement theory a 2-D software quality model is proposed, which synthesizes the traditional decomposed quality model and the relation quality model to comprehensive present software quality. In particular, the use of Bayesian network theory to construct the uncertainty casual relationships in the same level notions establishes a theoretical basis for the software quality assessment and prediction. Finally, the method of 2-D software quality model constructing and the initial result are presented by the research on the software flexibility in the study of cases.",2008,0, 4231,An Explanation Model for Quality Improvement Effect of Peer Reviews,"Through the analysis of Rayleigh model, an explanation model on the quality effect of peer reviews is constructed. The review activities are evaluated by the defect removal rate at each phase. We made Hypotheses on how these measurements are related to the product quality. These Hypotheses are verified through regression analysis of actual project data and concrete calculation formulae are obtained as a model. This explanation model can be used to (1) evaluate the effect of peer review, especially the upper steam review quantitatively, (2) make concrete review plan and to set objective values for review activities, (3) evaluate the effect of each peer review activity or method.",2008,0, 4232,Non-effectively grounded system line selection device based on ARM,"A hardware and software platform based on ARM (advanced RISC machines) is designed to meet the requisition of reliability and rapidity in line selection. ARM processor LPC2214 with low power loss acts as the controlling center of the platform. The software algorithm called integrated criterion combines several line detection methods to improve adaptability of line selection. The advantages of ARM, excellent real time interrupt response and high computing power, make the intelligent algorithm to be fast processed. 
The operation results show that this platform operates steadily and has high adaptability and line selection is implemented fast and reliable. This device is an ideal earth-fault detection platform for its better performance-price ratio.",2008,0, 4233,An efficient parallel approach for identifying protein families in large-scale metagenomic data sets,"Metagenomics is the study of environmental microbial communities using state-of-the-art genomic tools. Recent advancements in high-throughput technologies have enabled the accumulation of large volumes of metagenomic data that was until a couple of years back was deemed impractical for generation. A primary bottleneck, however, is in the lack of scalable algorithms and open source software for large-scale data processing. In this paper, we present the design and implementation of a novel parallel approach to identify protein families from large-scale metagenomic data. Given a set of peptide sequences we reduce the problem to one of detecting arbitrarily-sized dense subgraphs from bipartite graphs. Our approach efficiently parallelizes this task on a distributed memory machine through a combination of divide-and-conquer and combinatorial pattern matching heuristic techniques. We present performance and quality results of extensively testing our implementation on 160 K randomly sampled sequences from the CAMERA environmental sequence database using 512 nodes of a BlueGene/L supercomputer.",2008,0, 4234,Nimrod/K: Towards massively parallel dynamic Grid workflows,"A challenge for Grid computing is the difficulty in developing software that is parallel, distributed and highly dynamic. Whilst there have been many general purpose mechanisms developed over the years, Grid programming still remains a low level, error prone task. Scientific workflow engines can double as programming environments, and allow a user to compose dasiavirtualpsila Grid applications from pre-existing components. Whilst existing workflow engines can specify arbitrary parallel programs, (where components use message passing) they are typically not effective with large and variable parallelism. Here we discuss dynamic dataflow, originally developed for parallel tagged dataflow architectures (TDAs), and show that these can be used for implementing Grid workflows. TDAs spawn parallel threads dynamically without additional programming. We have added TDAs to Kepler, and show that the system can orchestrate workflows that have large amounts of variable parallelism. We demonstrate the system using case studies in chemistry and in cardiac modelling.",2008,0, 4235,Influence of low order harmonic phase angles on tripping time of overcurrent relays,"Experimental analyses are used to investigate the impact of low order harmonics on the operation and tripping times of overcurrent relays. Two analytical approaches based on relay characteristics provided by the manufacturer and simulations using PSCAD software package are used to estimate tripping times under non-sinusoidal operating conditions. Experiments were conducted such that the order, magnitude and phase angle of harmonics could be controlled by employing a computer-based single-phase harmonic source. 
Computed and measured tripping times are compared and suggestions on the application of overcurrent relays in harmonically polluted environments are provided.",2008,0, 4236,A Quasi-experiment for Effort and Defect Estimation Using Least Square Linear Regression and Function Points,"Software companies are currently investing large amounts of money in software process improvement initiatives in order to enhance their products' quality. These initiatives are based on software quality models, thus achieving products with guaranteed quality levels. In spite of the growing interest in the development of precise prediction models to estimate effort, cost, defects and other project's parameters, to develop a certain software product, a gap remains between the estimations generated and the corresponding data collected in the project's execution. This paper presents a quasi-experiment reporting the adoption of effort and defect estimation techniques in a large worldwide IT company. Our contributions are the lessons learned during (a) extraction and preparation of project historical data, (b) the use of estimation techniques on these data, and (c) the analysis of the results obtained. We believe such lessons can contribute to the improvement of the state-of-the-art in prediction models for software development.",2008,0, 4237,Issues on Estimating Software Metrics in a Large Software Operation,"Software engineering metrics prediction has been a challenge for researchers throughout the years. Several approaches for deriving satisfactory predictive models from empirical data have been proposed, although none has been massively accepted due to the difficulty of building a generic solution applicable to a considerable number of different software projects. The most common strategy on estimating software metrics is the linear regression statistical technique, for its ease of use and availability in several statistical packages. Linear regression has numerous shortcomings though, which motivated the exploration of many techniques, such as data mining and other machine learning approaches. This paper reports different strategies on software metrics estimation, presenting a case study executed within a large worldwide IT company. Our contributions are the lessons learned during the preparation and execution of the experiments, in order to aid the state of the art on prediction models of software development projects.",2008,0, 4238,On the Relation between External Software Quality and Static Code Analysis,"Only a few studies exist that try to investigate whether there is a significant correlation between external software quality and the data provided by static code analysis tools. A clarification on this issue could pave the way for more precise prediction models on the probability of defects based on the violation of programming rules. We therefore initiated a study where the defect data of selected versions of the open source development environment ldquoEclipse SDKrdquo is correlated with the data provided by the static code analysis tools PMD and FindBugs applied the source code of Eclipse. 
The results from this study are promising as especially some PMD rulesets show a good correlation with the defect data and could therefore serve as a basis for measurement, control and prediction of software quality.",2008,0, 4239,SeaWASP: A small waterplane area twin hull autonomous platform for shallow water mapping,"Students with Santa Clara University (SCU) and the Monterey Bay Aquarium Research Institute (MBARI) are developing an innovative platform for shallow water bathymetry. Bathymetry data is used to analyze the geography, ecosystem, and health of marine habitats. However, current methods for shallow water measurements typically involve large, manned vessels. These vessels may pose a danger to themselves and the environment in shallow, semi-navigable waters. Small vessels, however, are prone to disturbance by the waves, tides, and currents of shallow water. The SCU / MBARI autonomous surface vessel (ASV) is designed to operate safely and stably in waters > 1 m and without significant manned support. Final deployment will be at NOAA's Kasitsna Bay Laboratory in Alaska. The ASV utilizes several key design components to provide stability, shallow draft, and long-duration unmanned operations. Bathymetry is measured with a multibeam sonar in concert with DVL and GPS sensors. Pitch, roll, and heave are minimized by a Small Waterplane Area Twin Hull (SWATH) design. The SWATH has a submerged hull, small water-plane area, and high mass to damping ratio, making it less prone to disturbance and ideal for accurate data collection. Precision sensing and actuation are controlled by onboard autonomous algorithms. Autonomous navigation increases the quality of the data collection and reduces the necessity for continuous manning. The vessel has been operated successfully in several open water test environments, including Elkhorn Slough, CA, Steven's Creek, CA, and Lake Tahoe, NV. It is currently in the final stages of integration and test for its first major science mission at Orcas Island, San Juan Islands, WA, in August, 2008. The Orcas Island deployment will feature design upgrades implemented in Summer, 2008, including additional batteries for all-day power (minimum eight hours), active ballast, real-time data monitoring, updated autonomous control electronics and software, and data-editing using in-house bathymetry mapping software, MB-System. This paper will present the results of the Orcas Island mission and evaluate possible design changes for Alaska. Also, we will include a discussion of our shallow water bathymetry design considerations and a technical overview of the subsystems and previous test results. The ASV has been developed in partnership with Santa Clara University, the Monterey Bay Aquarium Research Institute, the University of Alaska Fairbanks, and NOAA's West Coast and Polar Regions Undersea Research Center.",2008,0, 4240,"Availability and reliability estimation for a system undergoing minimal, perfect and failed rejuvenation","In this paper, a software rejuvenation model is presented in which two different rejuvenation actions are considered, perfect and minimal. The concept of a failed rejuvenation action which leads the system to failure is also introduced. The presented model is studied under a Continuous Time Markov Chain (CTMC) framework and a maximum likelihood estimator of the generator matrix is presented. Based on this, estimators for instantaneous availability and reliability function are also presented. 
Moreover, the behavior of the above estimators is studied under various rejuvenation policies. A numerical example based on simulation results is finally presented.",2008,0, 4241,Optimization of economizer tubing system renewal decisions,"The economizer is a critical component in coal fired power stations. An optimal renewal strategy is needed for minimizing the lifetime cost of this component. Here we present an effective optimization approach which considers economizer tubing failure probabilities, repair and renewal costs, potential production losses, and fluctuations in electricity market prices.",2008,0, 4242,FDC-methods as trigger for predictive maintenance for hot- and wet-process equipment,"In high volume production a key performance-indicator is the highest possible uptime of process equipment. This may be achieved by moving from time-triggered “preventive�?to event-triggered “predictive�?maintenance, which has become a topic in the industry in the past years. This paper gives examples of events that can be used as triggers for maintenance actions. Performing FDC (Fault Detection and Classification) methods on trace-data and on event-logs collected from the equipment helped to identify these triggers. Furthermore, the FDC system may be used for recipe optimization and tool-matching.",2008,0, 4243,A hardware-oriented variable block-size motion estimation method for H.264/AVC video coding,"The variable block-size motion estimation (VBSME) is the most time-consuming component in the H.264/AVC encoder. This paper presents a simple hardware-oriented VBSME method with an adaptive computation-aware motion estimation (ME) algorithm. To reduce the complexity, VBSME is driven by the 16×16 mode and performed jointly for all modes. Moreover, this approach is integrated with a few other fast ME algorithms to demonstrate the efficiency of the adaptive ME. As a measure of efficiency, a cost function which combines the reconstruction error and the bit-rate is proposed. Obtained results show high efficiency of the adaptive strategy and a small negative influence of the proposed VBSME method on performance. It is also observed that increasing the number of search points (SPs) per macroblock (MB) over a certain threshold does not necessarily lead to quality improvement.",2008,0, 4244,Evolution and Search Based Metrics to Improve Defects Prediction,"Testing activity is the most widely adopted practice to ensure software quality. Testing effort should be focused on defect prone and critical resources, i.e., on resources highly coupled with other entities of the software application. In this paper, we used search based techniques to define software metrics accounting for the role a class plays in the class diagram and for its evolution over time. We applied Chidamber and Kemerer and the newly defined metrics to Rhino, a Java ECMA script interpreter, to predict version 1.6R5 defect prone classes. Preliminary results show that the new metrics favorably compare with traditional object oriented metrics.",2009,1, 4245,Predicting faults using the complexity of code changes,"Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code. We conjecture that a complex code change process negatively affects its product, i.e., the software system. 
We validate our hypothesis empirically through a case study using data derived from the change history for six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential in comparison to other well-known historical predictors of faults, i.e., prior modifications and prior faults.",2009,1, 4246,Predicting defects in SAP Java code: An experience report,"Which components of a large software system are the most defect-prone? In a study on a large SAP Java system, we evaluated and compared a number of defect predictors, based on code features such as complexity metrics, static error detectors, change frequency, or component imports, thus replicating a number of earlier case studies in an industrial context. We found the overall predictive power to be lower than expected; still, the resulting regression models successfully predicted 50-60% of the 20% most defect-prone components.",2009,1, 4247,Merits of using repository metrics in defect prediction for open source projects,Many corporate code developers are the beta testers of open source software. They continue testing until they are sure that they have a stable version to build their code on. In this respect defect predictors play a critical role to identify defective parts of the software. Performance of a defect predictor is determined by correctly finding defective parts of the software without giving any false alarms. Having high false alarms means testers/developers would inspect bug free code unnecessarily. Therefore in this research we focused on decreasing the false alarm rates by using repository metrics. We conducted experiments on the data sets of Eclipse project. Our results showed that repository metrics decreased the false alarm rates on the average to 23% from 32% corresponding up to 907 less files to inspect.,2009,1, 4248,An investigation of the relationships between lines of code and defects,"It is always desirable to understand the quality of a software system based on static code metrics. In this paper, we analyze the relationships between lines of code (LOC) and defects (including both pre-release and post-release defects). We confirm the ranking ability of LOC discovered by Fenton and Ohlsson. Furthermore, we find that the ranking ability of LOC can be formally described using Weibull functions. We can use defect density values calculated from a small percentage of largest modules to predict the number of total defects accurately. We also find that, given LOC we can predict the number of defective components reasonably well using typical classification techniques. We perform an extensive experiment using the public Eclipse dataset, and replicate the study using the NASA dataset. Our results confirm that simple static code attributes such as LOC can be useful predictors of software quality.",2009,1, 4249,Ineffectiveness of Use of Software Science Metrics as Predictors of Defects in Object Oriented Software,"Software science metrics (SSM) have been widely used as predictors of software defects. The usage of SSM is an effect of correlation of size and complexity metrics with number of defects. The SSM have been proposed keeping in view the procedural paradigm and structural nature of the programs. There has been a shift in software development paradigm from procedural to object oriented (OO) and SSM have been used as defect predictors of OO software as well. However, the effectiveness of SSM in OO software needs to be established. 
This paper investigates the effectiveness of use of SSM for: (a)classification of defect prone modules in OO software (b) prediction of number of defects. Various binary and numeric classification models have been applied on dataset kc1 with class level data to study the role of SSM. The results show that the removal of SSM from the set of independent variables does not significantly affect the classification of modules as defect prone and the prediction of number of defects. In most of the cases the accuracy and mean absolute error has improved when SSM were removed from the set of independent variables. The results thus highlight the ineffectiveness of use of SSM in defect prediction in OO software.",2009,1, 4250,On the Relationship Between Change Coupling and Software Defects,"Change coupling is the implicit relationship between two or more software artifacts that have been observed to frequently change together during the evolution of a software system. Researchers have studied this dependency and have observed that it points to design issues such as architectural decay. It is still unknown whether change coupling correlates with a tangible effect of design issues, i.e., software defects.In this paper we analyze the relationship between change coupling and software defects on three large software systems. We investigate whether change coupling correlates with defects, and if the performance of bug prediction models based on software metrics can be improved with change coupling information.",2009,1, 4251,A Positive Detecting Code and Its Decoding Algorithm for DNA Library Screening,"The study of gene functions requires high-quality DNA libraries. However, a large number of tests and screenings are necessary for compiling such libraries. We describe an algorithm for extracting as much information as possible from pooling experiments for library screening. Collections of clones are called pools, and a pooling experiment is a group test for detecting all positive clones. The probability of positiveness for each clone is estimated according to the outcomes of the pooling experiments. Clones with high chance of positiveness are subjected to confirmatory testing. In this paper, we introduce a new positive clone detecting algorithm, called the Bayesian network pool result decoder (BNPD). The performance of BNPD is compared, by simulation, with that of the Markov chain pool result decoder (MCPD) proposed by Knill et al. in 1996. Moreover, the combinatorial properties of pooling designs suitable for the proposed algorithm are discussed in conjunction with combinatorial designs and d-disjunct matrices. We also show the advantage of utilizing packing designs or BIB designs for the BNPD algorithm.",2009,0, 4252,Deterministic Priority Channel Access Scheme for QoS Support in IEEE 802.11e Wireless LANs,"The enhanced distributed channel access (EDCA) of IEEE 802.11e has been standardized to support quality of service (QoS) in wireless local area networks (LANs). The EDCA statistically supports the QoS by differentiating the probability of channel access among different priority traffic and does not provide the deterministically prioritized channel access for high-priority traffic, such as voice or real-time video. Therefore, lower priority traffic still affects the performance of higher priority traffic. In this paper, we propose a simple and effective scheme called deterministic priority channel access (DPCA) to improve the QoS performance of the EDCA mechanism. 
To provide guaranteed channel access to multimedia applications, the proposed scheme uses a busy tone to limit the transmissions of lower priority traffic when higher priority traffic has packets to send. Performance evaluation is conducted using both numerical analysis and simulation and shows that the proposed scheme significantly outperforms the EDCA in terms of throughput, delay, delay jitter, and packet drop ratio under a wide range of contention level.",2009,0, 4253,Fault-Aware Runtime Strategies for High-Performance Computing,"As the scale of parallel systems continues to grow, fault management of these systems is becoming a critical challenge. While existing research mainly focuses on developing or improving fault tolerance techniques, a number of key issues remain open. In this paper, we propose runtime strategies for spare node allocation and job rescheduling in response to failure prediction. These strategies, together with failure predictor and fault tolerance techniques, construct a runtime system called FARS (Fault-Aware Runtime System). In particular, we propose a 0-1 knapsack model and demonstrate its flexibility and effectiveness for reallocating running jobs to avoid failures. Experiments, by means of synthetic data and real traces from production systems, show that FARS has the potential to significantly improve system productivity (i.e., performance and reliability).",2009,0, 4254,Beyond Output Voting: Detecting Compromised Replicas Using HMM-Based Behavioral Distance,"Many host-based anomaly detection techniques have been proposed to detect code-injection attacks on servers. The vast majority, however, are susceptible to ""mimicry"" attacks in which the injected code masquerades as the original server software, including returning the correct service responses, while conducting its attack. ""Behavioral distance,"" by which two diverse replicas processing the same inputs are continually monitored to detect divergence in their low-level (system-call) behaviors and hence potentially the compromise of one of them, has been proposed for detecting mimicry attacks. In this paper, we present a novel approach to behavioral distance measurement using a new type of hidden Markov model, and present an architecture realizing this new approach. We evaluate the detection capability of this approach using synthetic workloads and recorded workloads of production Web and game servers, and show that it detects intrusions with substantially greater accuracy than a prior proposal on measuring behavioral distance. We also detail the design and implementation of a new architecture, which takes advantage of virtualization to measure behavioral distance. We apply our architecture to implement intrusion-tolerant Web and game servers, and through trace-driven simulations demonstrate that it experiences moderate performance costs even when thresholds are set to detect stealthy mimicry attacks.",2009,0, 4255,What Types of Defects Are Really Discovered in Code Reviews?,"Research on code reviews has often focused on defect counts instead of defect types, which offers an imperfect view of code review benefits. In this paper, we classified the defects of nine industrial (C/C++) and 23 student (Java) code reviews, detecting 388 and 371 defects, respectively. First, we discovered that 75 percent of defects found during the review do not affect the visible functionality of the software. Instead, these defects improved software evolvability by making it easier to understand and modify. 
Second, we created a defect classification consisting of functional and evolvability defects. The evolvability defect classification is based on the defect types found in this study, but, for the functional defects, we studied and compared existing functional defect classifications. The classification can be useful for assigning code review roles, creating checklists, assessing software evolvability, and building software engineering tools. We conclude that, in addition to functional defects, code reviews find many evolvability defects and, thus, offer additional benefits over execution-based quality assurance methods that cannot detect evolvability defects. We suggest that code reviews may be most valuable for software products with long life cycles as the value of discovering evolvability defects in them is greater than for short life cycle systems.",2009,0, 4256,A Vector Approach for Image Quality Assessment and Some Metrological Considerations,"The main purpose of this paper is to present a metrology-based view of the image quality assessment (IQA) field. Three main topics are developed. First, the state of the art in the field of IQA is presented, providing a classification of some of the most important objective and subjective IQA methods. Then, some aspects of the field are analyzed from a metrological point of view, also through a comparison with the software quality measurement area. In particular, a statistical approach to the evaluation of the uncertainty for IQA objective methods is presented, and the topic of measurement modeling for subjective IQA methods is analyzed. Finally, a vector approach for full-reference IQA is discussed, with applications to images corrupted by impulse and Gaussian noise. For these applications, the vector root mean squared error (VRMSE) and fuzzy VRMSE are introduced. These vector parameters provide a possible way to overcome the main limitations of the mean-squared-error-based IQA methods.",2009,0, 4257,CoMoM: Efficient Class-Oriented Evaluation of Multiclass Performance Models,"We introduce the class-oriented method of moments (CoMoM), a new exact algorithm to compute performance indexes in closed multiclass queuing networks. Closed models are important for performance evaluation of multitier applications, but when the number of service classes is large, they become too expensive to solve with exact methods such as mean value analysis (MVA). CoMoM addresses this limitation by a new recursion that scales efficiently with the number of classes. Compared to the MVA algorithm, which recursively computes mean queue lengths, CoMoM also carries on in the recursion information on higher-order moments of queue lengths. We show that this additional information greatly reduces the number of operations needed to solve the model and makes CoMoM the best-available algorithm for networks with several classes. We conclude the paper by generalizing CoMoM to the efficient computation of marginal queue-length probabilities, which finds application in the evaluation of state-dependent attributes such as quality-of-service metrics.",2009,0, 4258,Vector Signal Analyzer Implemented as a Synthetic Instrument,"Synthetic instruments (SIs) use the substantial signal processing assets of a field-programmable gate array (FPGA) to perform the multiple tasks of targeted digital signal processing (DSP)-based instruments. The topic of this paper is vector signal analysis from which time-dependent amplitude and phase is extracted from the input time signal. 
With access to the time-varying amplitude-phase profiles of the input signal, the vector signal analyzer can present many of the quality measures of a modulation process. These include estimates of undesired attributes such as modulator distortion, phase noise, clock jitter, I -Q imbalance, intersymbol interference, and others. This is where the SI is asked to become a smart software-defined radio (SDR), performing all the tasks of a DSP radio receiver and reporting small variations between the observed modulated signal parameters and those of an ideal modulated signal. Various quality measures (e.g., the size of errors) have value in quantifying and probing performance boundaries of communication systems.",2009,0, 4259,PLR: A Software Approach to Transient Fault Tolerance for Multicore Architectures,"Transient faults are emerging as a critical concern in the reliability of general-purpose microprocessors. As architectural trends point toward multicore designs, there is substantial interest in adapting such parallel hardware resources for transient fault tolerance. This paper presents process-level redundancy (PLR), a software technique for transient fault tolerance, which leverages multiple cores for low overhead. PLR creates a set of redundant processes per application process and systematically compares the processes to guarantee correct execution. Redundancy at the process level allows the operating system to freely schedule the processes across all available hardware resources. PLR uses a software-centric approach to transient fault tolerance, which shifts the focus from ensuring correct hardware execution to ensuring correct software execution. As a result, many benign faults that do not propagate to affect program correctness can be safely ignored. A real prototype is presented that is designed to be transparent to the application and can run on general-purpose single-threaded programs without modifications to the program, operating system, or underlying hardware. The system is evaluated for fault coverage and performance on a four-way SMP machine and provides improved performance over existing software transient fault tolerance techniques with a 16.9 percent overhead for fault detection on a set of optimized SPEC2000 binaries.",2009,0, 4260,Compositional Dependability Evaluation for STATEMATE,"Software and system dependability is getting ever more important in embedded system design. Current industrial practice of model-based analysis is supported by state-transition diagrammatic notations such as Statecharts. State-of-the-art modelling tools like STATEMATE support safety and failure-effect analysis at design time, but restricted to qualitative properties. This paper reports on a (plug-in) extension of STATEMATE enabling the evaluation of quantitative dependability properties at design time. The extension is compositional in the way the model is augmented with probabilistic timing information. 
This fact is exploited in the construction of the underlying mathematical model, a uniform continuous-time Markov decision process, on which we are able to check requirements of the form: ""The probability to hit a safety-critical system configuration within a mission time of 3 hours is at most 0.01."" We give a detailed explanation of the construction and evaluation steps making this possible, and report on a nontrivial case study of a high-speed train signalling system where the tool has been applied successfully.",2009,0, 4261,Analysis of Pulsed THz Imaging Using Optical Character Recognition,"Using a reflection-based pulsed THz imaging system built upon our ErAs:GaAs photoconductive switch and a gated receiver, we quantify image quality at different detection bands (centered at 100, 400, and 600 GHz). Zero-bias Schottky diode detectors mounted in various waveguide sizes are used to tune the operational frequency bands of the imaging system, while the rest of the imaging system remains unchanged. The image quality is quantified by applying an optical character recognition (OCR) algorithm on THz images of 8-by-10 mm copper letters on a fiberglass substrate. Using the OCR success rate as a metric, we see a fivefold improvement in image quality from a 400 GHz to a 600 GHz imaging system, while our 100 GHz images do not produce any correct OCR results. In a comparison experiment performed at 600 GHz, the image signal-to-noise ratio (SNR) is degraded by placing increasing numbers of denim sheets (5.4 dB decrease in signal per layer) into the beam path. We find that the OCR success rate is roughly constant from one sheet to four sheets of denim (33-25 dB SNR) and then drops off sharply starting at five denim sheets.",2009,0, 4262,Mining Software History to Improve Software Maintenance Quality: A Case Study,Errors in software updates can cause regressions failures in stable parts of the system. The Binary Change Tracer collects data on software projects and helps predict regressions in software projects.,2009,0, 4263,Analytics-Driven Dashboards Enable Leading Indicators for Requirements and Designs of Large-Scale Systems,"Mining software repositories using analytics-driven dashboards provides a unifying mechanism for understanding, evaluating, and predicting the development, management, and economics of large-scale systems and processes. Dashboards enable measurement and interactive graphical displays of complex information and support flexible analytic capabilities for user customizability and extensibility. Dashboards commonly include system requirements and design metrics because they provide leading indicators for project size, growth, and volatility. This article focuses on dashboards that have been used on actual large-scale software projects as well as example empirical relationships revealed by the dashboards. The empirical results focus on leading indicators for requirements and designs of large-scale software systems based on insights from two sets of software projects containing 14 systems and 23 systems.",2009,0, 4264,Model Checking Timed and Stochastic Properties with CSL^{TA},"Markov chains are a well-known stochastic process that provide a balance between being able to adequately model the system's behavior and being able to afford the cost of the model solution. 
The definition of stochastic temporal logics like continuous stochastic logic (CSL) and its variant asCSL, and of their model-checking algorithms, allows a unified approach to the verification of systems, allowing the mix of performance evaluation and probabilistic verification. In this paper we present the stochastic logic CSLTA, which is more expressive than CSL and asCSL, and in which properties can be specified using automata (more precisely, timed automata with a single clock). The extension with respect to expressiveness allows the specification of properties referring to the probability of a finite sequence of timed events. A typical example is the responsiveness property ""with probability at least 0.75, a message sent at time 0 by a system A will be received before time 5 by system B and the acknowledgment will be back at A before time 7"", a property that cannot be expressed in either CSL or asCSL. We also present a model-checking algorithm for CSLTA.",2009,0, 4265,Special Section on the 2007 IEEE AUTOTESTCON,"In this Special Section, we highlight several of the key significant results that were presented at this conference by presenting nine papers, each of which highlights the theme of the conference and has been technically extended beyond the conference version. Thus, we are sharing new and updated results and advances in automatic testing. It is also exciting to note that these papers exhibit strong collaboration between academia and industry, thus leading to a strong balance between theory and practice in their contributions. The nine papers in this Special Section have been organized into three general categories that have strong interest within the automatic testing community. The first group of papers focuses on the advances in test and diagnosis technology. The second group of papers focuses on the result in the emerging field of prognostics and health management. And the final group of papers focuses on recent work in the area of synthetic instrumentation for automatic test.",2009,0, 4266,An Evidential-Reasoning-Interval-Based Method for New Product Design Assessment,"A key issue in successful new product development is how to determine the best product design among a lot of feasible alternatives. In this paper, the authors present a novel rigorous assessment methodology to improve the decision-making analysis in the complex multiple-attribute environment of new product design (NPD) assessment in early product design stage, where several performance measures, like product functions and features, manufacturability and cost, quality and reliability, maintainability and serviceability, etc., must be accounted for, but no concrete and reliable data are available, in which conventional approaches cannot be applied with confidence. The developed evidential reasoning (ER) interval methodology is able to deal with uncertain and incomplete data and information in forms of both qualitative and quantitative nature, data expressed in interval and range, judgment with probability functions, judgment in a comparative basis, unknown embedded, etc. An NPD assessment model, incorporated with the ER-based methodology, is then developed and a software system is built accordingly for validation. 
An industrial case study of electrical appliances is used to illustrate the application of the developed ER methodology and the product design assessment system.",2009,0, 4267,Design on the Fault Diagnostic System Based on Virtual Instrument Technique,"Virtual instrument (VI) is one of the most prevalent technologies in the domain of testing, control and fault diagnosis systems, etc. In order to entirely update the means of testing for shipboard equipment, a fault diagnostic system for shipboard equipment is developed, based on VI technology, Delphi and database. The performance and constituents of VI are introduced briefly. Modularization and universalization are proposed in its database-based design concept, realizing the design of software and hardware. The ODBC technique is applied for the interconnection of databases to ensure the generality and flexibility of the system. The aim of this design is to resolve the problems existing in the use of testing equipment. The system makes the best use of the VI platform and the grey diagnosis method, breaks through conventional checkout and diagnosis patterns for warship equipment, and solves the problems of state prediction and fault-mode recognition for warship equipment. Application has proved that the system offers both high testing speed and high accuracy.",2009,0, 4268,Application of an Improved Particle Swarm Optimization for Fault Diagnosis,"In this paper, the feasibility of using a probabilistic causal-effect model is studied, and we apply it in a particle swarm optimization algorithm (PSO) to classify the faults of a mine hoist. In order to enhance the PSO performance, we propose a probability function to nonlinearly map the data into a feature space in the probabilistic causal-effect model, and with it, fault diagnosis is simplified from the original complex feature set into an optimization problem. The proposed approach is applied to fault diagnosis, and our implementation has the advantages of being general, robust, and scalable. The raw datasets obtained from the mine hoist system are preprocessed and used to generate fault diagnosis networks for the system. We studied the performance of the improved PSO algorithm and generated a Probabilistic Causal-effect network that can detect faults in the test data successfully. It can obtain >90% minimal diagnoses when the cardinality of the fault symptom set is greater than 25.",2009,0, 4269,Fault Detection for Fuzzy Systems With Intermittent Measurements,"This paper investigates the problem of fault detection for Takagi-Sugeno (T-S) fuzzy systems with intermittent measurements. The communication links between the plant and the fault detection filter are assumed to be imperfect (i.e., data packet dropouts occur intermittently, which appear typically in a network environment), and a stochastic variable satisfying the Bernoulli random binary distribution is utilized to model the unreliable communication links. The aim is to design a fuzzy fault detection filter such that, for all data missing conditions, the residual system is stochastically stable and preserves a guaranteed performance. The problem is solved through a basis-dependent Lyapunov function method, which is less conservative than the quadratic approach. The results are also extended to T--S fuzzy systems with time-varying parameter uncertainties. All the results are formulated in the form of linear matrix inequalities, which can be readily solved via standard numerical software.
Two examples are provided to illustrate the usefulness and applicability of the developed theoretical results.",2009,0, 4270,Combined Reconstruction and Motion Correction in SPECT Imaging,"Due to the long imaging times in SPECT, patient motion is inevitable and constitutes a serious problem for any reconstruction algorithm. The measured inconsistent projection data lead to reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT if not corrected. To address this problem, a new approach for motion correction is introduced. It is purely based on the measured SPECT data and therefore belongs to the data-driven motion correction algorithm class. However, it does overcome some of the shortcomings of conventional methods. This is mainly due to the innovative idea of combining reconstruction and motion correction in one optimization problem. The scheme allows for the correction of abrupt and gradual patient motion. To demonstrate the performance of the proposed scheme, extensive 3D tests with numerical phantoms for 3D rigid motion are presented. In addition, a test with real patient data is shown. Each test shows an impressive improvement of the quality of the reconstructed image. In this note, only rigid movements are considered. The extension to non-linear motion, as for example breathing or cardiac motion, is straightforward and will be investigated in a forthcoming paper.",2009,0, 4271,Quality Characteristics of Collaborative Systems,This paper describes new concepts for the quality evaluation of collaborative systems. The structures of collaborative systems are identified. The paper defines the quality characteristics of collaborative systems. A metric is proposed to estimate the quality level of collaborative systems. Measurements of collaborative system quality are performed using specially designed software.,2009,0, 4272,Implementing a Software-Based 802.11 MAC on a Customized Platform,"Wireless network (WLAN) platforms today are based on complex and inflexible hardware solutions to meet performance and deadline constraints for media access control (MAC). Contrary to that, physical layer solutions, such as software defined radio, become more and more flexible and support several competing standards. A flexible MAC counterpart is needed for system solutions that can keep up with the rising variety of WLAN protocols. We revisit the case for programmable MAC implementations looking at recent WLAN standards and show the feasibility of a software-based MAC. Our exploration uses IEEE 802.11a-e as a design driver, which makes great demands on Quality-of-Service, security, and throughput. We apply our SystemC framework for efficiently assessing the quality of design points and reveal trade-offs between memory requirements, core performance, application-specific optimizations, and responsiveness of MAC implementations. In the end, two embedded cores at moderate speed (< 200 MHz) with lightweight support for message passing using small buffers are sufficient for sustaining worst-case scenarios in software.",2009,0, 4273,"Noninvasive Fault Classification, Robustness and Recovery Time Measurement in Microprocessor-Type Architectures Subjected to Radiation-Induced Errors","In critical digital designs such as aerospace or safety equipment, radiation-induced upset events (single-event effects or SEEs) can produce adverse effects, and therefore, the ability to compare the sensitivity of various proposed solutions is desirable.
As custom-hardened microprocessor solutions can be very costly, the reliability of various commercial off-the-shelf (COTS) processors can be evaluated to see if there is a commercially available microprocessor or microprocessor-type intellectual property (IP) with adequate robustness for the specific application. Most existing approaches for the measurement of this robustness of the microprocessor involve diverting the program flow and timing to introduce the bit flips via interrupts and embedded handlers added to the application program. In this paper, a tool based on an emulation platform using Xilinx field programmable gate arrays (FPGAs) is described, which provides an environment and methodology for the evaluation of the sensitivity of microprocessor architectures, using dynamic runtime fault injection. A case study is presented, where the robustness of MicroBlaze and Leon3 microprocessors executing a simple signal processing task written in C language is evaluated and compared. A hardened version of the program, where the key variables are protected, has also been tested, and its contributions to system robustness have also been evaluated. In addition, this paper presents a further improvement in the developed tool that allows not only the measurement of microprocessor robustness but also the study and classification of single-event upset (SEU) effects and the exact measurement of the recovery time (the time that the microprocessor takes to self repair and recover the fault-free state). The measurement of this recovery time is important for real-time critical applications, where criticality depends on both data correctness and timing. To demonstrate the proposed improvements, a new software program that implements two different software hardening techniques (one for Data and another for Control Flow) has been made, and a study of the recovery times in some significant fault-injection cases has been performed over the Leon3 processor.",2009,0, 4274,CPS: A Cooperative-Probe Based Failure Detection Scheme for Application Layer Multicast,"The failure of non-leaf nodes in an application layer multicast (ALM) tree partitions the multicast tree, which may significantly degrade the quality of multicast service. Accurate and timely recovery from node failures in the ALM tree, which relies on an efficient failure detection scheme, is critical to minimizing the disruption of service. In fact, failure detection has hardly been studied in the context of ALM. Firstly, this paper analyzes the failure detection issues in ALM and then proposes a model for it based on the relationship between parent and child nodes. Secondly, the cooperative-probe based failure detection scheme (CPS), in which information about probe loss is shared among monitor nodes, is designed. Thirdly, the performance of CPS is analyzed. The numerical results show that our proposed scheme can simultaneously reduce the detection time and the probability of false positives at the cost of slightly increased control overhead compared with the basic-probe based failure detection scheme (BPS). Finally, some possible directions for future work are discussed.",2009,0, 4275,Usability Measurement Using a Fuzzy Simulation Approach,"Usability is a three-dimensional quality of a software product, encompassing efficiency, effectiveness and satisfaction.
Satisfaction always relies on subjective judgment, but to some degree, efficiency and effectiveness can be objectively calculated by some approaches, which can help designers to compare different prototype design candidates in the early stage of product development. To address the limitations of state-redundancy and inter-uncertainty of FSM, this paper introduces an existing approach to simplify a general FSM by reducing the redundant states, and proposes a fuzzy mathematical formula for calculating the probability of state-transition. Then, an algorithm for measuring efficiency and effectiveness data on the simulated FSM is designed based on the fuzzy function. This measurement approach is well suited in practice to small and medium-sized embedded software.",2009,0, 4276,A Method to Detect the Microshock Risk During a Surgical Procedure,"During a surgical procedure, the patient is exposed to a risk that is inherent in the use of medical electric equipment. The situations involved in this study presume that, in open thorax surgery, 60-Hz currents on the order of milliamperes can pass through the myocardium. A current as small as 50 or 70 μA crossing the heart can cause cardiac arrest. This risk exists due to the electrical supply in the building and the medical electric equipment. Even while following established standards and technical recommendations, the equipment uses electricity and can cause problems, even with a few milliamperes. This study uses software to simulate a possible electrical fault of this type and shows the efficiency of the proposed method for detecting microshock risks. In addition to the simulation, a laboratory experiment is conducted with an electric circuit that is designed to produce leakage currents of known values that are detected by equipment named Protegemed. This equipment, as well as the simulation, also proves the efficiency of the proposed method. The developed method and the applied equipment for this method are covered in the Brazilian Invention Patent (PI number 9701995).",2009,0, 4277,Eliminating microarchitectural dependency from Architectural Vulnerability,"The architectural vulnerability factor (AVF) of a hardware structure is the probability that a fault in the structure will affect the output of a program. AVF captures both microarchitectural and architectural fault masking effects; therefore, AVF measurements cannot generate insight into the vulnerability of software independent of hardware. To evaluate the behavior of software in the presence of hardware faults, we must isolate the software-dependent (architecture-level masking) portion of AVF from the hardware-dependent (microarchitecture-level masking) portion, providing a quantitative basis to make reliability decisions about software independent of hardware. In this work, we demonstrate that the new program vulnerability factor (PVF) metric provides such a basis: PVF captures the architecture-level fault masking inherent in a program, allowing software designers to make quantitative statements about a program's tolerance to soft errors. PVF can also explain the AVF behavior of a program when executed on hardware; PVF captures the workload-driven changes in AVF for all structures.
Finally, we demonstrate two practical uses for PVF: choosing algorithms and compiler optimizations to reduce a program's failure rate.",2009,0, 4278,M-MAC: Mobility-Based Link Management Protocol for Mobile Sensor Networks,"In wireless sensor networks with mobile sensors, frequent link failures caused by node mobility generate wasteful retransmissions, resulting in increased energy consumption and decreased network performance. In this paper we propose a new link management protocol called M-MAC that can dynamically measure and predict the link quality. Based on the projected link status information each node may drop, relay, or selectively forward a packet, avoiding unnecessary retransmissions. Our simulation results show that M-MAC can effectively reduce the per-node energy consumption by as much as 25.8% while improving the network performance compared to a traditional sensor network MAC protocol in the case of both low and high mobility scenarios.",2009,0, 4279,Impact on Reader Performance for Lesion-Detection/Localization Tasks of Anatomical Priors in SPECT Reconstruction,"With increasing availability of multimodality imaging systems, high-resolution anatomical images can be used to guide the reconstruction of emission tomography studies. By measuring reader performance on a lesion detection task, this study investigates the improvement in image-quality due to use of prior anatomical knowledge, for example organ or lesion boundaries, during SPECT reconstruction. Simulated 67Ga-citrate source and attenuation distributions were created from the mathematical cardiac-torso (MCAT) anthropomorphic digital phantom. The SIMIND Monte Carlo software was then used to generate SPECT projection data. The data were reconstructed using the De Pierro maximum a posteriori (MAP) algorithm and the rescaled-block-iterative (RBI) algorithm for comparison. We compared several degrees of prior knowledge about the anatomy: no knowledge about the anatomy; knowledge of organ boundaries; knowledge of organ and lesion boundaries; and knowledge of organ, lesion, and pseudo-lesion (non-emission uptake altering) boundaries. The MAP reconstructions used quadratic smoothing within anatomical regions, but not across any provided region boundaries. The reconstructed images were read by human observers searching for lesions in a localization receiver operating characteristic (LROC) study of the relative detection/localization accuracies of the reconstruction algorithms. Area under the LROC curve was computed for each algorithm as the comparison metric. We also had humans read images reconstructed using different prior strengths to determine the optimal trade-off between data consistency and the anatomical prior. Finally by mixing together images reconstructed with and without the prior, we tested to see if having an anatomical prior only some of the time changes the observer's detection/localization accuracy on lesions where no boundary prior is available. We found that anatomical priors including organ and lesion boundaries improve observer performance on the lesion detection/localization task. Use of just organ boundaries did not provide a statistically significant improvement in performance however. We also found that optimal prior strength depends on the level of anatomical knowledge, with a broad plateau in which observer performance is near optimal. We found no evidence that having anatomical priors use lesion boundaries only when available changes the observer's performance when they are not available.
We conclude that use of anatomical priors with organ and lesion boundaries improves reader performance on a lesion-detection/localization task, and that pseudo-lesion boundaries do not hurt reader performance. However, we did not find evidence that a prior using only organ boundaries helps observer performance. Therefore we suggest prior strength should be tuned to the organ-only case, since a prior will likely not be available for all lesions.",2009,0, 4280,An application of infrared sensors for electronic white stick,"Presently, blind people use a white stick as a tool for directing them when they move or walk. Although the white stick is useful, it cannot give a high guarantee of protecting blind people from all levels of obstacles. Many researchers have been interested in developing electronic devices that protect blind people from obstacles with a higher guarantee. This paper introduces an obstacle-avoidance alternative using an electronic stick that serves as a tool for blind people in walking. It employs an infrared sensor for detecting obstacles along the pathway. At all levels of obstacles, the infrared stick can detect all types of materials present in the course, such as concrete, wood, metal, glass, and human beings. The results also show that the stick detects obstacles within a range of 80 cm, which is the same as the length of a white stick. The stick is designed to be small and light, so that blind people can carry it comfortably.",2009,0, 4281,Mobility prediction for wireless network resource management,"User mobility prediction has been studied for various applications under diverse scenarios to improve network performance. Examples include driving habits in cell stations, roaming habits of cell phone users, student movement on campuses, etc. Use of such information enables us to better manage and adapt resources to provide improved Quality of Service (QoS). In particular, advance reservation of resources at future destinations can provide better service to a mobile user. However, there is a certain cost associated with each of these kinds of prediction schemes. We concentrate on a campus network to predict user movement and demonstrate a better movement prediction which can be used to design the network architecture differently. This user prediction scheme uses a second-order Markov chain to capture user movement history to make predictions. This is highly suitable for a campus environment because of its simplicity and can also be extended to several other network architectures.",2009,0, 4282,Automated Diagnosis of System Failures with Fa,"While quick failure diagnosis and system recovery are critical, database and system administrators continue to struggle with this problem. The spectrum of possible causes of failure is huge: performance problems like resource contention, crashes due to hardware faults or software bugs, misconfiguration by system operators, and many others. The scale, complexity, and dynamics of modern systems make it laborious and time-consuming to track down the cause of failures manually. Conventional data-mining techniques like clustering and classification have a lot to offer to the hard problem of failure diagnosis. These techniques can be applied to the wealth of monitoring data that operational systems collect.
However, some novel challenges need to be solved before these techniques can deliver an automated, efficient, and reasonably accurate tool for diagnosing failures using monitoring data; a tool that is easy and intuitive to use. Fa is a new system for automated diagnosis of system failures that is designed to address the above challenges. When a system is running, Fa collects monitoring data periodically and stores it in a database.",2009,0, 4283,Tracking High Quality Clusters over Uncertain Data Streams,"Recently, data mining over uncertain data streams has attracted a lot of attention because of the widespread imprecise data generated from a variety of streaming applications. In this paper, we try to resolve the problem of clustering over uncertain data streams. Facing uncertain tuples with different probability distributions, the clustering algorithm should not only consider the tuple value but also emphasize its uncertainty. To fulfill these dual purposes, a metric named tuple uncertainty will be integrated into the overall procedure of clustering. Firstly, we survey uncertain data models and propose our uncertainty measurement and its corresponding properties. Secondly, based on such an uncertainty quantification method, we provide a two-phase stream clustering algorithm and elaborate on its implementation details. Finally, performance experiments over a number of real and synthetic data sets demonstrate the effectiveness and efficiency of our method.",2009,0, 4284,To Use or Not to Use? The Metrics to Measure Software Quality (Developers' View),"The research related to software product metrics is becoming increasingly important and widespread. One of the most popular research topics in this area is the development of different software quality models that are based on product metrics. These quality models can be used to assist software project managers in making decisions, e.g. if the most ""critical"" parts of the code have been identified by using an error-proneness model, the testing resources can be concentrated on these parts of the code. As another example, the quality models based on complexity and coupling metrics may assist in estimating the cost of software changes (impact analysis). Many empirical studies have been conducted to validate these quality models. The main aim of these empirical investigations has been to identify which metrics are the best predictors to estimate the changes of software quality. The examination of the applicability of the quality models to different application domains and development platforms is also a very interesting research area. Besides the intensive research activities, software product metrics have also been integrated into the practical development processes of many software companies. This is partly due to the fact that top-level management generally prefers to make its decisions relying on measurable data (""You can't manage what you can't control, and you can't control what you don't measure"" - DeMarco). Our industrial experiences show that the people responsible for software quality and top-level project managers are also interested in applying quality monitoring tools based on product metrics. Obviously, the advantages of using such a tool should be clearly defined, but it should be noted as well that its use in the daily development process may require significant additional resources. On the other hand, according to our experiences, convincing developers about the usefulness of software product metrics can be very difficult.
When analysing concrete metrics, the developers' first reaction is usually to look for examples that justify the substantial deviation from the proposed metric value. Developers can hardly accept that there is a correlation between the metrics and the quality of the software developed by them. Consequently, we conducted a study where we asked the opinion of developers working in different fields and on different platforms about software product metrics. The experts participating in the study possessed different development skills (young developers and senior ones with many years of experience). The participants worked on C/C++, C#, Java and SQL platforms. Since some of the developers worked in open-source projects, open-source related questions were also included in the study. Besides the questions related to size, complexity, and coupling metrics, we asked the developers' opinion about coding rule violations and copy-paste programming. We asked every developer to mark a so-called reference system (a system whose development process he had actively participated in). These reference systems have been analysed with the Columbus tool, and we calculated metrics, rule violations and clones. Besides the general questions, in the study we included questions that were related to the metrics derived from the given reference system. The answers were evaluated on the basis of different aspects. We examined to what extent the developers' professional skills, the development platforms and the specialities of a given application domain can affect the answers.",2009,0, 4285,Automatic Failure Diagnosis Support in Distributed Large-Scale Software Systems Based on Timing Behavior Anomaly Correlation,"Manual failure diagnosis in large-scale software systems is time-consuming and error-prone. Automatic failure diagnosis support mechanisms can potentially narrow down, or even localize, faults within a very short time, both of which help to preserve system availability. A large class of automatic failure diagnosis approaches consists of two steps: 1) computation of component anomaly scores; 2) global correlation of the anomaly scores for fault localization. In this paper, we present an architecture-centric approach for the second step. In our approach, component anomaly scores are correlated based on architectural dependency graphs of the software system and a rule set to address error propagation. Moreover, the results are graphically visualized in order to support fault localization and to enhance maintainability. The visualization combines architectural diagrams automatically derived from monitoring data with failure diagnosis results. In a case study, the approach is applied to a distributed sample Web application which is subject to fault injection.",2009,0, 4286,Evaluating Defect Prediction Models for a Large Evolving Software System,"A plethora of defect prediction models has been proposed and empirically evaluated, often using standard classification performance measures. In this paper, we explore defect prediction models for a large, multi-release software system from the telecommunications domain. A history of roughly 3 years is analyzed to extract process and static code metrics that are used to build several defect prediction models with random forests. The performance of the resulting models is comparable to previously published work.
Furthermore, we develop a new evaluation measure based on the comparison to an optimal model.",2009,0, 4287,SQUALE – Software QUALity Enhancement,"The Squale project was born from an industrial effort to control software quality. Its goals are to refine and enhance the Qualixo model, a software-metric based quality model already used by large companies in France (Air France-KLM, PSA Peugeot-Citroen), and to support the estimation of the return on investment produced by software quality. The Qualixo model is a software quality model based on the aggregation of software metrics into higher level indicators called practices, criteria and factors. The coordination of Squale is carried out by Qualixo.",2009,0, 4288,Software Metrics Suites for Project Landscapes,"Many software metrics have been proposed over the past decades. Selecting a small custom suite of metrics is desirable for quality assessment and defect prediction in industrial practice since developers cannot easily cope with dozens of metrics. In large software architectures, structurally similar projects are subject to similar defect conditions and can be analyzed by the same metrics suite. A large Java application developed at Continentale Insurance contains structurally similar subprojects for different insurance branches (health, life, accident, car, property). These branches are integrated in the IT-architecture in a technically uniform way. We investigated which subsets of metrics are predictive but uncorrelated with each other and compared the results for structurally similar projects.",2009,0, 4289,Recovery scheme for hardening system on programmable chips,"The checkpoint and rollback recovery techniques enable a system to survive failures by periodically saving a known good snapshot of the system's state, and rolling back to it in case a failure is detected. The approach is particularly interesting for developing critical systems on programmable chips that today offer multiple embedded processor cores, as well as configurable fabric that can be used to implement error detection and correction mechanisms. This paper presents an approach that aims at developing safety- or mission-critical systems on programmable chips that are able to tolerate soft errors by exploiting processor duplication to implement error detection, as well as checkpoint and rollback recovery to correct errors in a cost-efficient manner. We developed a prototypical implementation of the proposed approach targeting the Leon processor core, and we collected preliminary results that outline the capability of the technique to tolerate soft errors affecting the processor's internal registers. This paper is the first step toward the definition of an automatic design flow for hardening processor cores (either hard or soft) embedded in programmable chips, like for example SRAM-based FPGAs.",2009,0, 4290,Modeling Defect Enhanced Detection at 1550 nm in Integrated Silicon Waveguide Photodetectors,"Recent attention has been attracted by photo-detectors integrated onto silicon-on-insulator (SOI) waveguides that exploit the enhanced sensitivity to subbandgap wavelengths resulting from absorption via point defects introduced by ion implantation. In this paper, we present the first model to describe the carrier generation process of such detectors, based upon modified Shockley-Read-Hall generation/recombination, and, thus, determine the influence of the device design on detection efficiency.
We further describe how the model may be incorporated into commercial software, which then simulates the performance of previously reported devices by assuming a single midgap defect level (with properties commensurate with the single negatively charged divacancy). We describe the ability of the model to highlight the major limitations to responsivity, and thus suggest improvements which diminish the impact of such limitations.",2009,0, 4291,A Flexible Software-Based Framework for Online Detection of Hardware Defects,"This work proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extensions (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade-off performance with reliability without requiring any change to the hardware. We describe and evaluate different execution models for using the ACE framework. We also describe how the proposed ACE framework can be extended and utilized to improve the quality of post-silicon debugging and manufacturing testing of modern processors. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22 percent of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5 percent. Based on a detailed register transfer level (RTL) implementation of our technique, we find its area and power consumption overheads to be modest, with a 5.8 percent increase in total chip area and a 4 percent increase in the chip's overall power consumption.",2009,0, 4292,Impact of Budget and Schedule Pressure on Software Development Cycle Time and Effort,"As excessive budget and schedule compression becomes the norm in today's software industry, an understanding of its impact on software development performance is crucial for effective management strategies. Previous software engineering research has implied a nonlinear impact of schedule pressure on software development outcomes. Borrowing insights from organizational studies, we formalize the effects of budget and schedule pressure on software cycle time and effort as U-shaped functions. The research models were empirically tested with data from a 25 billion/year international technology firm, where estimation bias is consciously minimized and potential confounding variables are properly tracked. We found that controlling for software process, size, complexity, and conformance quality, budget pressure, a less researched construct, has significant U-shaped relationships with development cycle time and development effort. On the other hand, contrary to our prediction, schedule pressure did not display significant nonlinear impact on development outcomes. A further exploration of the sampled projects revealed that the involvement of clients in the software development might have ldquoerodedrdquo the potential benefits of schedule pressure. This study indicates the importance of budget pressure in software development. 
Meanwhile, it implies that achieving the potential positive effect of schedule pressure requires cooperation between clients and software development teams.",2009,0, 4293,An Initial Characterization of Industrial Graphical User Interface Systems,"To date we have developed and applied numerous model-based GUI testing techniques; however, we are unable to provide definitive improvement schemes to real-world GUI test planners, as our data was derived from open source applications, small compared to industrial systems. This paper presents a study of three industrial GUI-based software systems developed at ABB, including data on classified defects detected during late-phase testing and customer usage, test suites, and source code change metrics. The results show that (1) 50% of the defects found through the GUI are categorized as data access and handling, control flow and sequencing, correctness, and processing defects, (2) system crashes exposed defects 12-19% of the time, and (3) GUI and non-GUI components are constructed differently, in terms of source code metrics.",2009,0, 4294,"A Flexible Framework for Quality Assurance of Software Artefacts with Applications to Java, UML, and TTCN-3 Test Specifications","Manual reviews and inspections of software artefacts are time consuming and thus, automated analysis tools have been developed to support the quality assurance of software artefacts. Usually, software analysis tools are implemented for analysing only one specific language as target and for performing only one class of analyses. Furthermore, most software analysis tools support only common programming languages, but not those domain-specific languages that are used in a test process. As a solution, a framework for software analysis is presented that is based on a flexible, yet high-level facade layer that mediates between analysis rules and the underlying target software artefact; the analysis rules are specified using high-level XQuery expressions. Hence, further rules can be quickly added and new types of software artefacts can be analysed without needing to adapt the existing analysis rules. The applicability of this approach is demonstrated by examples from using this framework to calculate metrics and detect bad smells in Java source code, in UML models, and in test specifications written using the testing and test control notations (TTCN-3).",2009,0, 4295,Transforming and Selecting Functional Test Cases for Security Policy Testing,"In this paper, we consider typical applications in which the business logic is separated from the access control logic, implemented in an independent component, called the Policy Decision Point (PDP). The execution of functions in the business logic should thus include calls to the PDP, which grants or denies the access to the protected resources/functionalities of the system, depending on the way the PDP has been configured. The task of testing the correctness of the implementation of the security policy is tedious and costly. In this paper, we propose a new approach to reuse and automatically transform existing functional test cases for specifically testing the security mechanisms. The method includes a three-step technique based on mutation applied to security policies (RBAC, XACML, OrBAC) and AOP for transforming automatically functional test cases into security policy test cases. The method is applied to Java programs and provides tools for performing the steps from the dynamic analyses of impacted test cases to their transformation. 
Three empirical case studies provide fruitful results and a first proof of concepts for this approach, e.g. by comparing its efficiency to an error-prone manual adaptation task.",2009,0, 4296,Predicting Attack-prone Components,"Limited resources preclude software engineers from finding and fixing all vulnerabilities in a software system. This limitation necessitates security risk management where security efforts are prioritized to the highest risk vulnerabilities that cause the most damage to the end user. We created a predictive model that identifies the software components that pose the highest security risk in order to prioritize security fortification efforts. The input variables to our model are available early in the software life cycle and include security-related static analysis tool warnings, code churn and size, and faults identified by manual inspections. These metrics are validated against vulnerabilities reported by testing and those found in the field. We evaluated our model on a large Cisco software system and found that 75.6% of the system's vulnerable components are in the top 18.6% of the components predicted to be vulnerable. The model's false positive rate is 47.4% of this top 18.6% or 9.1% of the total system components. We quantified the goodness of fit of our model to the Cisco data set using a receiver operating characteristic curve that shows 94.4% of the area is under the curve.",2009,0, 4297,Evaluating the Effect of the Number of Naturally Occurring Faults on the Estimates Produced by Capture-Recapture Models,"Project managers can use the capture-recapture models to estimate the number of faults in a software artifact. The capture-recapture estimates are calculated using the number of unique faults and the number of times each fault is found. The accuracy of the estimates is affected by the number of inspectors and the number of faults. Our earlier research investigated the effect that the number of inspectors had on the accuracy of the estimates. In this paper, we investigate the effect of the number of faults on the performance of the estimates using real requirement artifacts. These artifacts have an unknown amount of naturally occurring faults. The results show that while the estimators generally underestimate, they improve as the number of faults increases. The results also show that the capture-recapture estimators can be used to make correct re-inspection decisions.",2009,0, 4298,A Test-Driven Approach to Developing Pointcut Descriptors in AspectJ,"Aspect-oriented programming (AOP) languages introduce new constructs that can lead to new types of faults, which must be targeted by testing techniques. In particular, AOP languages such as AspectJ use a pointcut descriptor (PCD) that provides a convenient way to declaratively specify a set of joinpoints in the program where the aspect should be woven. However, a major difficulty when testing that the PCD matches the intended set of joinpoints is the lack of precise specification for this set other than the PCD itself. In this paper, we propose a test-driven approach for the development and validation of the PCD. We developed a tool, AdviceTracer, which enriches the JUnit API with new types of assertions that can be used to specify the expected joinpoints. In order to validate our approach, we also developed a mutation tool that systematically injects faults into PCDs. 
Using these two tools, we perform experiments to validate that our approach can be applied for specifying expected joinpoints and for detecting faults in the PCD.",2009,0, 4299,Development of an Embedded CPU-Based Instrument Control Unit for the SIR-2 Instrument Onboard the Chandrayaan-1 Mission to the Moon,"This paper presents a computer architecture developed for the instrument control unit (ICU) of the Spectrometer Infrared 2 (SIR-2) instrument onboard the Chandrayaan-1 mission to the Moon. Characteristic features of this architecture are its high autonomy, its high reliability, and its high performance, which are obtained by the following methods: 1) adopting state-of-the-art digital-construction techniques using one single radiation-tolerant field-programmable gate array for implementing an embedded system with a 32-bit central processing unit, commercial intellectual-property cores, and custom-specified logic; 2) implementing two independent communication buses, one for instrument commanding and instrument health monitoring and another one for transferring scientific and housekeeping data to the spacecraft; 3) implementing simple and well-arranged hardware, firmware, and software; and 4) implementing in-flight software-reconfiguration capabilities available from ground command. The SIR-2 ICU performs data acquisition, data processing, and temperature regulation. Per-spectrum averaging and per-pixel oversampling are supported to reduce measurement noise. A temperature regulator for the instrument sensor unit is also implemented, with the purpose of reducing dark current noise from the detector. The embedded real-time software is implemented as a multirate cyclic executive with interrupts. Five different tasks are maintained, running with a 10-ms base cycle time. A safe mode is implemented in the boot-loader, allowing in-flight software patching through the MIL-STD-1553B bus. The advanced features of this architecture make it an excellent choice for the control unit of the scientific SIR-2 instrument, compared with architectures from previous heritage.",2009,0, 4300,Finding All Small Error-Prone Substructures in LDPC Codes,"It is proven in this work that it is NP-complete to exhaustively enumerate small error-prone substructures in arbitrary, finite-length low-density parity-check (LDPC) codes. Two error-prone patterns of interest include stopping sets for binary erasure channels (BECs) and trapping sets for general memoryless symmetric channels. Despite the provable hardness of the problem, this work provides an exhaustive enumeration algorithm that is computationally affordable when applied to codes of practical short lengths n ap 500. By exploiting the sparse connectivity of LDPC codes, the stopping sets of size les 13 and the trapping sets of size les11 can be exhaustively enumerated. The central theorem behind the proposed algorithm is a new provably tight upper bound on the error rates of iterative decoding over BECs. Based on a tree-pruning technique, this upper bound can be iteratively sharpened until its asymptotic order equals that of the error floor. This feature distinguishes the proposed algorithm from existing non-exhaustive ones that correspond to finding lower bounds of the error floor. The upper bound also provides a worst case performance guarantee that is crucial to optimizing LDPC codes when the target error rate is beyond the reach of Monte Carlo simulation. 
Numerical experiments on both randomly and algebraically constructed LDPC codes demonstrate the efficiency of the search algorithm and its significant value for finite-length code optimization.",2009,0, 4301,Tradeoff and Sensitivity Analysis of a Hybrid Model for Ranking Commercial Off-the-Shelf Products,"Despite its popularity, COTS-based development still faces some challenges, in particular the evaluation and selection process, in which uncertainty plays a major role. A hybrid model, composed of the analytic hierarchy process (AHP) and Bayesian belief network (BBN), is proposed to evaluate and rank various COTS candidates while explicitly considering uncertainty. Several input parameters such as weights assigned to evaluation criteria, relative scores for various COTS candidates, and prior belief about the satisfaction of various attributes associated with the evaluation criteria need to be estimated. The estimation process of these input parameters is subject to uncertainty that limits the applicability of the model's results. In this paper, we apply sensitivity analysis to check the validity and robustness of the model. Further, we apply tradeoff analysis to explore the impact of relaxing one criterion in order to achieve an increase in another criterion that is considered more desirable in a particular project context. A digital library system is used as a case study to illustrate how the proposed tradeoff and sensitivity analysis was performed.",2009,0, 4302,Hop-by-hop transport for satellite networks,"This paper studies the transport control scheme for satellite networks. The special characteristics of such networks make TCP protocols incapable of providing satisfactory service. We analyze the performance of hop-by-hop and end-to-end transport under a Bernoulli loss model. Then we propose a novel protocol for satellite networks based on hop-by-hop acknowledgment, in contrast to the traditional end-to-end TCP, which uses ACK together with NACK, allows out-of-sequence packet transmission, and is capable of re-routing at an arbitrary node. Both theoretical and simulation results showed that the hop-by-hop communication scheme outperforms TCP under highly error-prone, multi-hop and long-delay conditions.",2009,0, 4303,Defining requirements for advanced PHM technologies for optimal reliability centered maintenance,"Condition based maintenance plus (CBM+) can be described as optimal condition based maintenance (CBM) procedures defined by applying the principles and process of reliability centered maintenance. This approach offers a rigorous and disciplined method, based on the system FMECA, to determine the least-cost maintenance policy and procedures that are consistent with acceptable levels of safety and readiness, applying available prognosis and health management tools. It is argued that the same process is the preferred method to define requirements for advanced PHM technologies based on RCM-derived capability gaps, preferably accounting for synergies with concurrent continuous (maintenance) process improvement. There may be synergies in coupling this process with Continuous Process Improvement programs, such as NAVAIR's AIRSPEED. In discussing this proposed approach, several issues are addressed. The first is the question of interdependence between incommensurable safety, affordability and readiness objectives and metrics.
The second is the problem of uncertainty in the FMECA failure modes and probabilities until the system and equipment have accumulated considerable service history, while still subject to the emergence or aggravation of failure modes by mission exposure, component deterioration, quality escapes and intentional configuration change. In practice it may be necessary to fall back on less rigorous (semi)qualitative methods to target innovation. In any case, more adaptable PHM architectures are needed to mitigate inevitable uncertainty in requirements. Note: the terms equipment health management (also, more specifically, engine health management) [EHM] and prognostic health management (or prognosis and health management) [PHM] are used with little distinction in this paper, but in general PHM is restricted to methods generating estimates of remaining useful life.",2009,0, 4304,The data quality estimation for the information web resources,"This article is devoted to problems related to the quality evaluation of information Web resources. Basic tasks and a promising information technology for quality evaluation by a search robot are identified.",2009,0, 4305,A Novel Dual-Core Architecture for the Analysis of DNA Microarray Images,"A deoxyribonucleic acid (DNA) microarray is a collection of microscopic DNA spots attached to a solid surface, such as a glass, plastic, or silicon chip forming an array. DNA microarray technologies are an essential part of modern biomedical research. A DNA microarray allows compressing hundreds of thousands of different DNA nucleotide sequences onto a small microscope slide and permits having all this information on a single image. The analysis of DNA microarray images allows the identification of gene expressions to draw biological conclusions for applications ranging from genetic profiling to diagnosis of cancer. Unfortunately, DNA microarray technology has a high variation of data quality. Therefore, to obtain reliable results, complex and extensive image analysis algorithms should be applied before the actual DNA microarray information can be used for biomedical purposes. In this paper, we present a novel hardware architecture that is specifically designed to analyze DNA microarray images. The architecture is based on a dual-core system that implements several units working in a single-instruction/multiple-data fashion. A field-programmable-gate-array (FPGA)-based prototypal implementation of the proposed architecture is presented. The effectiveness of the novel dual-core architecture is demonstrated by several analyses performed on original DNA microarray images, showing that the capability of detecting DNA spots increases by more than 30% with respect to that of previously developed software techniques.",2009,0, 4306,A Metric for Judicious Relaxation of Timing Constraints in Soft Real-Time Systems,"For soft real-time systems, timing constraints are not as stringent as those in hard real-time systems: some constraint violations are permitted as long as the amount of violation is within a given limit. The allowed flexibility for soft real-time systems can be utilized to improve the system's other quality-of-service (QoS) properties, such as energy consumption. One way to enforce the constraint violation limit is to allow an expansion of the timing constraint feasible region, but to restrict the expansion in such a way that the relaxed constraint feasible region sufficiently resembles the original one.
In this paper, we first introduce a new metric, constraint set similarity, to quantify the resemblance between two different timing constraint sets. Because directly calculating the exact value of the metric involves calculating the size of a polytope, which is a #P-hard problem, we instead introduce an efficient method for estimating its bound. We further discuss how this metric can be exploited for evaluating trade-offs between timing constraint compromises and the system's other QoS property gains. We use energy consumption reduction as an example to show the application of the proposed metric.",2009,0, 4307,A reliable 'double edged' strategy for HLR mobility database failure detection and recovery in PCS networks,"Since the main objective of PCS networks is to provide “anytime-anywhere” cellular service, the existence of lost calls has become the major problem that severely degrades the performance of such networks. A lost call arises when the location information of mobile users stored in the Home Location Register (HLR) mobility database is corrupted or outdated. In order to tackle this situation, it is becoming necessary to tolerate HLR failure by implementing efficient recovery schemes. Traditionally, the HLR mobility database is periodically checkpointed. After a failure, the HLR database is restored by reloading the backup. However, checkpointing suffers from two major hurdles: (i) choosing the optimal checkpointing interval is still a challenge, and (ii) backup information may be obsolete. This paper proposes a reliable HLR failure detection and recovery strategy built upon a double edged architecture of the HLR mobility database. The main contribution is to introduce novel registration and call delivery schemes for PCS networks. The proposed strategy is fault tolerant as it continuously examines the availability of both the main HLR of the system as well as the backup (BC), and then takes suitable actions to keep the system working with relatively no lost calls. On the other hand, our architecture is double edged as both HLR and BC are employed in the system operation all the time. Experimental results have shown that the proposed failure detection and recovery strategy outperforms traditional checkpointing techniques as it introduces better performance with relatively no lost calls.",2009,0, 4308,Analysis and enhancement of software dynamic defect models,"Dynamic defect models are used to estimate the number of defects in a software project, predict the release date and required effort of maintenance, and measure the progress and quality of development. The literature suggests that defect projection over time follows a Rayleigh distribution. In this paper, data concerning defects are collected from several software projects and products. Data projection showed that the previous assumption of the Rayleigh distribution is not valid for current projects, which are much more complex. Empirical data collected showed that the defect distribution in even simpler software projects cannot be represented by the Rayleigh curves due to the adoption of several types of testing in different phases of the project lifecycle. The findings of this paper enhance the well-known Putnam defect model and propose new performance criteria to support the changes that occur during the project.
Results of fitting and predicting the collected data show the superiority of the new enhanced defect model over the original defect model.",2009,0, 4309,ESoftCheck: Removal of Non-vital Checks for Fault Tolerance,"As semiconductor technology scales into the deep submicron regime the occurrence of transient or soft errors will increase. This will require new approaches to error detection. Software checking approaches are attractive because they require little hardware modification and can be easily adjusted to fit different reliability and performance requirements. Unfortunately, software checking adds a significant performance overhead. In this paper we present ESoftCheck, a set of compiler optimization techniques to determine which are the vital checks, that is, the minimum number of checks that are necessary to detect an error and roll back to a correct program state. ESoftCheck identifies the vital checks on platforms where registers are hardware-protected with parity or ECC, when there are redundant checks and when checks appear in loops. ESoftCheck also provides knobs to trade reliability for performance based on the support for recovery and the degree of trustiness of the operations. Our experimental results on a Pentium 4 show that ESoftCheck can obtain 27.1% performance improvement without losing fault coverage.",2009,0, 4310,Fault-Tolerant Algorithm for Distributed Primary Detection in Cognitive Radio Networks,"This paper attempts to identify the reliability of detection of licensed primary transmission based on cooperative sensing in cognitive radio networks. With a parallel fusion network model, the correlation issue of the received signals between the nodes in the worst case is derived. Leveraging the property of false sensing data due to malfunctioning or malicious software, the optimizing strategy, namely fault-tolerant algorithm for distributed detection (FTDD) is proposed, and quantitative analysis of false alarm reliability and detection probability under the scheme is presented. In particular, the tradeoff between licensed transmissions and user cooperation among nodes is discussed. Simulation experiments are also used to evaluate the fusion performance under practical settings. The model and analytic results provide useful tools for reliability analysis for other wireless decentralization-based applications (e.g., those involving robust spectrum sensing).",2009,0, 4311,A New Multi-metric QoS Routing Protocol in Wireless Mesh Network,"Real-time applications such as multimedia applications have high requirements on bandwidth, delay, jitter etc, which requires WMN (Wireless Mesh Networks) to support QoS. QoS routing is crucial to provide QoS. This paper focuses on QoS routing with bandwidth constraint in multi-radio multi-channel WMN, and proposes a new multi-metric and a QoS routing protocol MMQR. The routing metric has two advantages. First, it replaces the transmission rate of ETT with available bandwidth so that the nodes with light load are more likely to be selected. Second, it takes the channel diversity into account and assigns a weight to each link according to the channels of links within the range of three hops. 
MMQR addresses the problems that the required QoS cannot be satisfied due to interference and congestion, and that links fail because of node failures.",2009,0, 4312,Research and Design of Intelligent Wireless Electric Power Parameter Detection Algorithm Combining FFT with Wavelet Analysis,"In modern society, power quality has become an important problem, which affects industrial production and product quality. Therefore, we need to monitor and analyze electric power parameters to keep power quality within prescribed limits. To address the disadvantages of present harmonic detection of electric power parameters, we design a wireless electric power parameter detection system based on single-chip microprocessor (SCM) technology, wireless communication technology, FFT, wavelet resolution analysis, and virtual instrument (VI) technology. The system is composed of an upper PC and a lower PC. The lower PC uses a transformer module and a signal processing module to convert the electric power signal into a form that can be recognized by the MSP430F149. The lower PC controls an nRF905 wireless transceiver module to transmit signals between the lower PC and the upper PC, which runs the VI software LabVIEW. The system software combines FFT and wavelet analysis to perform harmonic analysis of electric power parameters, which overcomes the shortcomings of FFT analysis alone. Simulation experiments verify that the method combining FFT with the wavelet transform is accurate.",2009,0, 4313,Allocation of extra components to ki-out-of-mi subsystems using the NPI method,"The allocation of components to systems remains a challenge due to the components' success and failure rates, which are unpredictable to design engineers. Optimal algorithms often assume a restricted class for the allocation and yet still require a high-degree polynomial time complexity. Heuristic methods may be time-efficient but they do not guarantee optimality of the allocation. This paper introduces a new and efficient model of a system consisting of ki-out-of-mi subsystems for allocation of extra components. This model is more general than the traditional k-out-of-n one. This system, which consists of subsystems i (i = 1, 2, ..., x), is working if at least ki (out of mi) components are working. All subsystems are independent and the components within subsystem i (i = 1, 2, ..., x) are exchangeable. Components exchangeable with those of each subsystem have been tested. For subsystem i, ni components have been tested for faults and none were discovered in si of these ni components. We assume zero-failure testing, that is, we are assuming that none of the components tested is faulty, so si = ni, i = 1, 2, ..., x. We use the lower and upper probability that a system consisting of x independent ki-out-of-mi subsystems works. The allocation problem dealt with in this paper can be categorised as deciding to which subsystem the extra components should be allocated subject to achieving maximum reliability (lower probability) of the system consisting of these subsystems, with si = ni, i = 1, 2, ..., x. The resulting component allocation problems are too complicated to be solved by traditional approaches; therefore, the nonparametric predictive inference (NPI) method is used to solve them. These results show that NPI is a powerful tool for solving these kinds of problems, which is helpful for design engineers making optimal decisions.
The paper also includes suggestions for further research.",2009,0, 4314,Automation of testing and qualification of inertial rate sensors,"A very high assurance of performance is required for inertial rate sensors, known as gyroscopes, before finalizing the product. This assurance is provided with a high-quality automated test station (ATS). The major objectives of the ATS are accurate measurement of gyroscope characteristics and comprehensive characterization and performance evaluation, while reducing the testing cycle time. To assist gyro testing, an ATS is created to replace manual testing processes, which are very time consuming and require expensive test equipment. The present system automates most of the processes for a fraction of the cost. The system is based on a Pentium IV computer with a 16-bit data acquisition card, serial ports, a rate table with an integrated thermal chamber, and a printer. All the instruments are controlled by the host computer through RS-232. The software “Gyro VIEW”, written in Visual C++, has capabilities of data acquisition, online estimation of gyroscope parameters, and run-time analysis of graphs, which makes it more flexible than existing software that supports only post-processing. The algorithms developed for run-time processing include mean, moving average filter, standard deviation and noise. The software allows carrying out different tests such as the thermal test, scale factor test (rotation test), misalignment test, run-to-run test, etc.",2009,0, 4315,FAST Failure Detection Service for Large Scale Distributed Systems,"This paper addresses the problem of building a failure detection service for large scale distributed systems. We describe a failure detection service, which merges some novel proposals and satisfies scalability, flexibility and adaptability properties. Afterwards, we present the architecture of such a service, show detailed information about its components and present the simulation results concerning performance.",2009,0, 4316,Proactive Fault Tolerance Using Preemptive Migration,"Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. This paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for needed supporting technologies.",2009,0, 4317,The Perk Station: Systems design for percutaneous intervention training suite,"Image-guided percutaneous needle-based surgery has become part of routine clinical practice in performing procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician; a slow and inherently subjective training process that lacks objective, quantitative assessment of the surgical skill and performance. Current evaluations of needle-based surgery are also rather simplistic: usually only needle tip accuracy and procedure time are recorded, the latter being used as an indicator of economic feasibility. Shortening the learning curve and increasing procedural consistency are critically important factors in assuring high-quality medical care for all segments of society.
This paper describes the design and development of a laboratory validation system for measuring operator performance under different assistance techniques for needle-based surgical guidance systems: the Perk Station. The initial focus of the Perk Station is to assess and compare three different techniques: the image overlay, bi-plane laser guide, and conventional freehand. The integrated system comprises a flat display with a semi-transparent mirror (image overlay), a bi-plane laser guide, a magnetic tracking system, a tracked needle, a phantom, and a stand-alone laptop computer running the planning and guidance software. The prototype Perk Station has been successfully developed, the associated needle insertion phantoms have been built, and the graphic surgical interface has been implemented.",2009,0, 4318,FMEA is not enough,"A failure modes and effects analysis is a methodology for analysis of potential failure modes within a system, which includes the classification of the failure modes' severity and impact on the system. Most organizations stop at the completion of the FMEA and assume that the work is done and a solid design will now be created. This is not always the case. There is a significant amount of work that needs to go into a solid design before and after an FMEA from the fault management (FM) perspective. Also, there are challenges in proving that the content of the FMEA is correct and that it will add value to the overall product. The key is to connect the dots that make up a fault management infrastructure. This paper will discuss these points and provide the blueprint of their relationships and how to take advantage of each element and effectively connect them to observe their power. This approach provides tangible results that customers can understand and relate to. In addition, it provides clear, straightforward and meaningful results that cannot be disputed and can even be tested in the customer environment. After completion of an FMEA, it is important to verify the information provided by the hardware, software and the rest of the FMEA team. Finally, this paper discusses the concept of Fault Insertion Testing or FIT, the relation of FIT to FMEA and how FIT enables design engineers and all interested parties to observe if the product is performing as expected. There are different methods for performing FIT. Each method will be discussed at a high level. One of the key differences in the approach being suggested in this paper is the final step of the fault management process, which is to tie the FMEA and fault insertion testing together. These two activities complement each other and significantly increase their value by being performed right after one another. The FMEA provides the necessary information such as fault insertion points and the required parameters to execute fault insertion testing.",2009,0, 4319,Study on the Prediction Model and Software of Environment Contaminants in Coal-Burning Power Plants,"The model and software for systematically estimating the concentration of the main contaminants (dust, SO2, NOx, CO2, F, As, Hg, Cd, Pd etc) in exhaust gas and waste waters from coal-burning power plants were studied. The prediction of contaminants in exhaust gas was based on conventional calculating models. The key factors are ascertained; the material conservation theory and the on-line data on coal quality are used to build the formula for calculating the quantity and the concentration of the gas contaminants.
The prediction of contaminants in water was based on the status quo of ash-flushing systems in national power plants, through studying the ash-water in two different systems separately. The power plant monitoring and prediction software, using VB and an Access database, was developed based on the above prediction model. The prediction results from the software show that the relative error between prediction and measurement was always less than 10%, which proved the estimate to be reliable.",2009,0, 4320,Transforming traditional iris recognition systems to work on non-ideal situations,"Non-ideal iris images can significantly affect the accuracy of iris recognition systems for two reasons: 1) they cannot be properly preprocessed by the system; and/or 2) they have poor image quality. However, many traditional iris recognition systems have been deployed in law enforcement, military, or many other important locations. It will be expensive to replace all these systems. It would be desirable if the traditional systems could be transformed to perform in non-ideal situations without an expensive update. In this paper, we propose a method that can help traditional iris recognition systems to work in non-ideal situations using a video image approach. The proposed method will quickly identify and eliminate the bad quality images from iris videos for further processing. The segmentation accuracy is critical in recognition and would be challenging for traditional systems. The segmentation evaluation is designed to evaluate if the segmentation is valid. The information distance based quality measure is used to evaluate if the image has enough quality for recognition. The segmentation evaluation score and quality score are combined to predict the recognition performance. The research results show that the proposed methods can work effectively and objectively. The combination of segmentation and quality scores is highly correlated with the recognition accuracy and can be used to improve the performance of iris recognition systems in a non-ideal situation. The deployment of such a system would not cost much since the core parts of the traditional systems are not changed and we only need to add software modules. It will be very practical to transform the traditional system using the proposed method.",2009,0, 4321,Decision trees as information source for attribute selection,"Attribute selection (AS) is known to help improve the results of algorithmic learning processes by selecting fewer, but predictive, input attributes. This study introduces a new ranking filter AS method, the tree node selection (TNS) method. The idea of TNS is to determine significant but fewer attributes by searching through the pre-generated decision tree as information source in the manner of a pruning process. To test the performance of TNS, 33 benchmark datasets (UCI) with various numbers of instances, attributes and classes were investigated along with five known AS methods, and the results were tested with the C4.5 (unpruned) and naive Bayes classifiers. The performance, in terms of classification accuracy improvement, reduction in the number of attributes and the size of the generated decision tree, is assessed by various statistical analyses for multiple comparisons.
TNS was found to provide the most consistent performance for C4.5 and naive Bayes classifiers, and unpruned C4.5 trees generated with fewer selected attributes were generally found to achieve similar quality to pruned C4.5 trees without any attribute selection.",2009,0, 4322,A Tool for the Application of Software Metrics to UML Class Diagram,"How to improve software quality is an important direction in the software engineering research field. Complexity has a close relationship with development cost, time spent, and the number of defects that may exist in a program. OOA and OOD have been widely used, so the requirement of measuring the complexity of software written in object-oriented languages is emerging. UML class diagrams describe the static view of a system in terms of classes and relationships among the classes. In order to objectively assess UML class diagrams, this paper presents a suite of metrics based on UML class diagrams that is adapted to Java to assess the complexity of UML class diagrams in various aspects, and verifies them with a suite of evaluation rules suggested by Weyuker.",2009,0, 4323,Metric-based Evolution Analysis and Evaluation of Software Core Assets Library,"The core assets library is the most important component of any software product line; it evolves in response to the organization's business, organizational objectives, technology renovation and time. For the measurement of the core assets library, this paper gives several measures and their definitions that correlate with the evolution of a core assets library. Combining these with data processing, it puts forward an evaluation model for the evolution of a core assets library, uses the analytic hierarchy process to estimate the core assets library's evolution level, and illustrates the approach with an example.",2009,0, 4324,One-pass multi-layer rate-distortion optimization for quality scalable video coding,"In this paper, a one-pass multi-layer rate-distortion optimization algorithm is proposed for quality scalable video coding. To improve the overall coding efficiency, the MB mode in the base layer is selected not only based on its rate-distortion performance relative to this layer but also according to its impact on the enhancement layer. Moreover, the optimization module for residues is also improved to benefit inter-layer prediction. Simulations show that the proposed algorithm outperforms the most recent SVC reference software. For eight test sequences, a gain of 0.35 dB on average and 0.75 dB at maximum is achieved at a cost of less than 8% increase of the total coding time.",2009,0, 4325,A fast CABAC rate estimator for H.264/AVC mode decision,"H.264/AVC coders use the rate-distortion (R-D) cost function to decide the coding mode for each coding unit for better R-D tradeoff. To evaluate the R-D cost function, both the bit rate and the video quality degradation of the candidate mode must be calculated. In this paper, a fast context-adaptive binary arithmetic coding (CABAC) rate estimation scheme is proposed to accelerate the rate calculation. The speed of the proposed rate estimator depends only on the number of contexts used in the coding unit. Experimental results show that the proposed rate estimator reduces about 20% of the computational complexity of the R-D optimized mode decision when it is implemented as software. The entire encoder implemented as software is then accelerated by 16% with negligible degradation in the R-D performance.
If implemented as hardware, the proposed scheme is expected to accelerate the rate estimation for a macroblock by 5 to 18 times faster than the conventional CABAC operation.",2009,0, 4326,Video quality monitoring of streamed videos,"This paper describes a video quality analysis system for inservice monitoring of streamed videos, particularly over mobile/wireless networks. The algorithm adopts the no-reference method, and enables real-time measurement of video quality at any point in the content production and delivery chain using any given video. The technologies developed include no-reference methods for measuring picture freeze, picture loss, and blockiness. The developed system (where the software has not been optimized for speed) is able to process video of CIF size (352 times 288 pixels) at more than 30 fps on a Pentium-IV 3 GHz computer. The experimental results show that the proposed video quality analysis system gives good accuracy for picture freeze, picture loss, and blocking detections.",2009,0, 4327,Resource usage prediction for groups of dynamic image-processing tasks using Markov modeling,"With the introduction of dynamic image processing, such as in image analysis, the computational complexity has become data dependent and memory usage irregular. Therefore, the possibility of runtime estimation of resource usage would be highly attractive and would enable quality-of-service (QoS) control for dynamic image-processing applications with shared resources. A possible solution to this problem is to characterize the application execution using model descriptions of the resource usage. In this paper, we attempt to predict resource usage for groups of dynamic image-processing tasks based on Markov-chain modeling. As a typical application, we explore a medical imaging application to enhance a wire mesh tube (stent) under X-ray fluoroscopy imaging during angioplasty. Simulations show that Markov modeling can be successfully applied to describe the resource usage function even if the flow graph dynamically switches between groups of tasks. For the evaluated sequences, an average prediction accuracy of 97% is reached with sporadic excursions of the prediction error up to 20-30%.",2009,0, 4328,A study of pronunciation verification in a speech therapy application,Techniques are presented for detecting phoneme level mispronunciations in utterances obtained from a population of impaired children speakers. The intended application of these approaches is to use the resulting confidence measures to provide feedback to patients concerning the quality of pronunciations in utterances arising within interactive speech therapy sessions. The pronunciation verification scenario involves presenting utterances of known words to a phonetic decoder and generating confusion networks from the resulting phone lattices. Confidence measures are derived from the posterior probabilities obtained from the confusion networks. Phoneme level mispronunciation detection performance was significantly improved with respect to a baseline system by optimizing acoustic models and pronunciation models in the phonetic decoder and applying a nonlinear mapping to the confusion network posteriors.,2009,0, 4329,PDF Research on IEEE802.11 DCF in Wireless Distributed Measurement System,"This paper analyzes the basic and RTS/CTS and fragment MAC access mechanism of IEEE802.11 DCF (distribution coordination function), and puts forward the model of wireless distributed measurement system (WDMS). 
It analyzes the elements that affect real-time communication in the system. It sets up a simulated network scenario of real-time communication in software and then runs simulations to study the MAC PDF (probability distribution function) performance in the wireless distributed measurement system, which provides theoretical arguments and reference data for further system design.",2009,0, 4330,Staffing Level and Cost Analyses for Software Debugging Activities Through Rate-Based Simulation Approaches,"Research in the field of software reliability, dedicated to the analysis of software failure processes, is quite diverse. In recent years, several attractive rate-based simulation approaches have been proposed. Thus far, it appears that most existing simulation approaches do not take into account the number of available debuggers (or developers). In practice, the number of debuggers will be carefully controlled. If all debuggers are busy, they may not address newly detected faults for some time. Furthermore, practical experience shows that fault-removal time is not negligible, and the number of removed faults generally lags behind the total number of detected faults, because fault detection activities continue as faults are being removed. Given these facts, we apply queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed based on G/G/∞ and G/G/m queueing models, respectively. The proposed methods will be illustrated using real software failure data. The analysis conducted through the proposed framework can help project managers assess the appropriate staffing level for the debugging team from the standpoint of performance and cost-effectiveness.",2009,0, 4331,Evolutionary Sampling and Software Quality Modeling of High-Assurance Systems,"Software quality modeling for high-assurance systems, such as safety-critical systems, is adversely affected by the skewed distribution of fault-prone program modules. This sparsity of defect occurrence within the software system impedes training and performance of software quality estimation models. Data sampling approaches presented in the data mining and machine learning literature can be used to address the imbalance problem. We present a novel genetic algorithm-based data sampling method, named evolutionary sampling, as a solution to improving software quality modeling for high-assurance systems. The proposed solution is compared with multiple existing data sampling techniques, including random undersampling, one-sided selection, Wilson's editing, random oversampling, cluster-based oversampling, synthetic minority oversampling technique (SMOTE), and borderline-SMOTE. This paper involves case studies of two real-world software systems and builds C4.5- and RIPPER-based software quality models both before and after applying a given data sampling technique. It is empirically shown that evolutionary sampling improves the performance of software quality models for high-assurance systems and is significantly better than most existing data sampling techniques.",2009,0, 4332,LA1 testBed: Evaluation testbed to assess the impact of network impairments on video quality,"Currently, a complete system for analyzing the effect of packet loss on a viewer's perception is not available.
Given the popularity of digital video and the growing interest in live video streams where channel coding errors cannot be corrected, such a system would give great insight into the problem of video corruption through transmission errors and how they are perceived by the user. In this paper we introduce such a system, where digital video can be corrupted according to established loss patterns and the effect is measured automatically. The corrupted video is then used as input for user tests. Their results are analyzed and compared with the automatically generated results. Within this paper we present the complete testing system, which makes use of existing software as well as introducing new modules and extensions. With the current configuration the system can test packet loss in H.264 coded video streams and produce a statistical analysis detailing the results. The system is fully modular, allowing for future developments such as other types of statistical analysis, different video measurements and new video codecs.",2009,0, 4333,Collaborative defense as a pervasive service: Architectural insights and validation methodologies of a trial deployment,"Network defense is an elusive art. The arsenal to defend our devices from attack is constantly lagging behind the latest methods used by attackers to break into them and subsequently into our networks. To counteract this trend, we developed a distributed, scalable approach that harnesses the power of collaborative end-host detectors or sensors. Simulation results reveal order of magnitude improvements over stand-alone detectors in the accuracy of detection (fewer false alarms) and in the quality of detection (the ability to capture stealthy anomalies that would otherwise go undetected). Although these results arise out of a proof of concept in the arena of botnet detection in an enterprise network, they have broader applicability to the area of network self-manageability of pervasive computing devices. To test the efficacy of these ideas further, Intel Corporation partnered with British Telecommunications plc to launch a trial deployment. In this paper, we report on results and insights gleaned from the development of a testbed infrastructure and phased experiments: (1) the design of a re-usable measurement-inference architecture into which 3rd party sensor developers can integrate a wide variety of “anomaly detection” algorithms to derive the same correlation-related performance benefits; (2) the development of a series of validation methodologies necessitated by the lack of mature tools and approaches to attest to the security of distributed networked systems; (3) the critical role of learning and adaptation algorithms to calibrate a fully-distributed architecture of varied devices in varied contexts; and (4) the utility of large-scale data collections to assess what constitutes normal behavior for enterprise end-host background traffic as well as malware command-and-control protocols. Finally, we propose collaborative defense as a blueprint for emergent collaborative systems and its measurement-everywhere approach as the adaptive underpinnings needed for pervasive services.",2009,0, 4334,Formal Correctness of a Passive Testing Approach for Timed Systems,"In this paper we extend our previous work on passive testing of timed systems to establish a formal criterion to determine correctness of an implementation under test.
In our framework, an invariant expresses the fact that if the implementation under test performs a given sequence of actions, then it must exhibit a behavior in a lapse of time reflected in the invariant. In a previous paper we gave an algorithm to establish the correctness of an invariant with respect to a specification. In this paper we continue the work by providing an algorithm to check the correctness of a log, recorded form the implementation under test, with respect to an invariant. We show the soundness of our method by relating it to an implementation relation. In addition to the theoretical framework we have developed a tool, called PASTE, that facilitates the automation of our passive testing approach.",2009,0, 4335,Evolving the Quality of a Model Based Test Suite,"Redundant test cases in newly generated test suites often remain undetected until execution and waste scarce project resources. In model-based testing, the testing process starts early on in the developmental phases and enables early fault detection. The redundancy in the test suites generated from models can be detected earlier as well and removed prior to its execution. The article presents a novel model-based test suite optimization technique involving UML activity diagrams by formulating the test suite optimization problem as an Equality Knapsack Problem. The aim here is the development of a test suite optimization framework that could optimize the model-based test suites by removing the redundant test cases. An evolution-based algorithm is incorporated into the framework and is compared with the performances of two other algorithms. An empirical study is conducted with four synthetic and industrial scale Activity Diagram models and results are presented.",2009,0, 4336,Assertion-Driven Development: Assessing the Quality of Contracts Using Meta-Mutations,"Agile development methods have gained momentum in the last few years and, as a consequence, test-driven development has become more prevalent in practice. However, test cases are not sufficient for producing dependable software and we rather advocate approaches that emphasize the use of assertions or contracts over that of test cases. Yet, writing self-checks in code has been shown to be difficult and is itself prone to errors. A standard technique to specify runtime properties is design-by contract(DbC). But how can one test if the contracts themselves are sensible and sufficient? We propose a measure to quantify the goodness of contracts (or assertions in a broader sense). We introduce meta-mutations at the source code level to simulate common programmer errors that the self-checks are supposed to detect. We then use random mutation testing to determine a lower and upper bound on the detectable mutations and compare these bounds with the number of mutants detected by the contracts. Contracts are considered ldquogoodrdquo if they detect a certain percentage of the detectable mutations.We have evaluated our tools on Java classes with contracts specified using the Java Modeling Language (JML). We have additionally tested the contract quality of 19 implementations, written independently by students, based on the same specification.",2009,0, 4337,AjMutator: A Tool for the Mutation Analysis of AspectJ Pointcut Descriptors,"Aspect-oriented programming introduces new challenges for software testing. In particular the pointcut descriptor (PCD) requires particular attention from testers. 
The PCD describes the set of join points where the advices are woven.In this paper we present a tool, AjMutator, for the mutation analysis of PCDs. AjMutator implements several mutation operators that introduce faults in the PCDs to generate a set of mutants. AjMutator classifies the mutants according to the set of join points they match compared to the set of join points matched by the initial PCD. An interesting result is that this automatic classification can identify equivalent mutants for a particular class of PCDs. AjMutator can also run a set of test cases on the mutants to give a mutation score. We have applied AjMutator on two systems to show that this tool is suitable for the mutation analysis of PCDs on large AspectJ systems.",2009,0, 4338,Work in Progress: Building a Distributed Generic Stress Tool for Server Performance and Behavior Analysis,"One of the primary tools for performance analysis of multi-tier systems are standardized benchmarks. They are used to evaluate system behavior under different circumstances to assess whether a system can handle real workloads in a production environment. Such benchmarks are also helpful to resolve situations when a system has an unacceptable performance or even crashes. System administrators and developers use these tools for reproducing and analyzing circumstances which provoke the errors or performance degradation. However, standardized benchmarks are usually constrained to simulating a set of pre-fixed workload distributions. We present a benchmarking framework which overcomes this limitation by generating real workloads from pre-recorded system traces. This distributed tool allows more realistic testing scenarios, and thus exposes the behavior and limits of a tested system with more details. Further advantage of our framework is its flexibility. For example, it can be used to extend standardized benchmarks like TPC-W thus allowing them to incorporate workload distributions derived from real workloads.",2009,0, 4339,Comparing design of experiments and evolutionary approaches to multi-objective optimisation of sensornet protocols,"The lifespan, and hence utility, of sensornets is limited by the energy resources of individual motes. Network designers seek to maximise energy efficiency while maintaining an acceptable network quality of service. However, the interactions between multiple tunable protocol parameters and multiple sensornet performance metrics are generally complex and unknown. In this paper we address this multi-dimensional optimisation problem by two distinct approaches. Firstly, we apply a Design Of Experiments approach to obtain a generalised linear interaction model, and from this derive an estimated near-optimal solution. Secondly, we apply the Two-Archive evolutionary algorithm to improve solution quality for a specific problem instance. We demonstrate that, whereas the first approach yields a more generally applicable solution, the second approach yields a broader range of viable solutions at potentially lower experimental cost.",2009,0, 4340,Designing for Feel: Contrasts between Human and Automated Parametric Capture of Knob Physics,"We examine a crucial aspect of a tool intended to support designing for feel: the ability of an objective physical-model identification method to capture perceptually relevant parameters, relative to human identification performance. The feel of manual controls, such as knobs, sliders, and buttons, becomes critical when these controls are used in certain settings. 
Appropriate feel enables designers to create consistent control behaviors that lead to improved usability and safety. For example, a heavy knob with stiff detents for a power plant boiler setting may afford better feedback and safer operations, whereas subtle detents in an automobile radio volume knob may afford improved ergonomics and driver attention to the road. To assess the quality of our identification method, we compared previously reported automated model captures for five real mechanical reference knobs with captures by novice and expert human participants who were asked to adjust four parameters of a rendered knob model to match the feel of each reference knob. Participants indicated their satisfaction with the matches their renderings produced. We observed similar relative inertia, friction, detent strength, and detent spacing parameterizations by human experts and our automatic estimation methods. Qualitative results provided insight on users' strategies and confidence. While experts (but not novices) were better able to ascertain an underlying model in the presence of unmodeled dynamics, the objective algorithm outperformed all humans when an appropriate physical model was used. Our studies demonstrate that automated model identification can capture knob dynamics as perceived by a human, and they also establish limits to that ability; they comprise a step towards pragmatic design guidelines for embedded physical interfaces in which methodological expedience is informed by human perceptual requirements.",2009,0, 4341,De-interlacing using priority processing for guaranteed real time performance with limited processing resources,Multimedia systems (MMS) increasingly use software solutions for signal processing instead of dedicated hardware. Scalable video algorithms (SVA) using novel priority processing can guarantee real-time performance on programmable platforms even with limited resources. In this paper we propose a scalable de-interlacing algorithm with priority processing. The de-interlacer adapts to the available hardware resources and shows a wide range of scalability with good output quality. The output quality has been compared with other proposed de-interlacers.,2009,0, 4342,How Can Optimization Models Support the Maintenance of Component-Based Software?,"The maintenance phase of software systems accounts for an ever-increasing share, in terms of effort, of the whole software lifecycle. Therefore the introduction of automated techniques that can help software maintainers take decisions on the basis of quantitative evaluation would be desirable. Search-based techniques today offer a very promising view on the automation of search processes in the software engineering domain. Component-based software is a very interesting paradigm in which to apply such techniques, for example for component selection. In this paper, we introduce optimization techniques to manage the problem of failures at maintenance time. In particular, we introduce two approaches that provide maintenance actions to be taken in order to overcome system failures in the case of monitored and non-monitored software systems.",2009,0, 4343,"Embedded Software: Facts, Figures, and Future","Due to the complex system context of embedded-software applications, defects can cause life-threatening situations, delays can create huge costs, and insufficient productivity can impact entire economies.
Providing better estimates, setting objectives, and identifying critical hot spots in embedded-software engineering requires adequate benchmarking data.",2009,0, 4344,OneClick: A Framework for Measuring Network Quality of Experience,"As the service requirements of network applications shift from high throughput to high media quality, interactivity, and responsiveness, the definition of QoE (Quality of Experience) has become multidimensional. Although it may not be difficult to measure individual dimensions of the QoE, how to capture users' overall perceptions when they are using network applications remains an open question. In this paper, we propose a framework called OneClick to capture users' perceptions when they are using network applications. The framework only requires a subject to click a dedicated key whenever he/she feels dissatisfied with the quality of the application in use. OneClick is particularly effective because it is intuitive, lightweight, efficient, time-aware, and application-independent. We use two objective quality assessment methods, PESQ and VQM, to validate OneClick's ability to evaluate the quality of audio and video clips. To demonstrate the proposed framework's efficiency and effectiveness in assessing user experiences, we implement it on two applications, one for instant messaging applications, and the other for first- person shooter games. A Flash implementation of the proposed framework is also presented.",2009,0, 4345,Quantifying the Importance of Vantage Points Distribution in Internet Topology Measurements,"The topology of the Internet has been extensively studied in recent years, driving a need for increasingly complex measurement infrastructures. These measurements have produced detailed topologies with steadily increasing temporal resolution, but concerns exist about the ability of active measurement to measure the true Internet topology. Difficulties in ensuring the accuracy of every individual measurement when millions of measurements are made daily, and concerns about the bias that might result from measurement along the tree of routes from each vantage point to the wider reaches of the Internet must be addressed. However, early discussions of these concerns were based mostly on synthetic data, oversimplified models or data with limited or biased observer distributions. In this paper, we show the importance that extensive sampling from a broad distribution of vantage points has on the resulting topology and bias. We present two methods for designing and analyzing the topology coverage by vantage points: one, when system-wide knowledge exists, provides a near-optimal assignment of measurements to vantage points; while the second one is suitable for an oblivious system and is purely probabilistic. The majority of the paper is devoted to a first look at the importance of the distribution's quality. We show that diversity in the locations and types of vantage points is required for obtaining an unbiased topology. We analyze the effect that broad distribution has over the convergence of various autonomous systems topology characteristics. We show that although diverse and broad distribution is not required for all inspected properties, it is required for some. Finally, some recent bias claims that were made against active traceroute sampling are revisited, and we empirically show that diverse and broad distribution can question their conclusions.",2009,0, 4346,Keynote: Event Driven Software Quality,"Summary form only given. 
Event-driven programming has found pervasive acceptance, from high-performance servers to embedded systems, as an efficient method for interacting with a complex world. The fastest research Web servers are event-driven, as is the most common operating system for sensor nodes. An event-driven program handles concurrent logical tasks using a cooperative, application-level scheduler. The application developer separates each logical task into event handlers; the scheduler runs multiple handlers in an interleaved fashion. Unfortunately, the loose coupling of the event handlers obscures the program's control flow and makes dependencies hard to express and detect, leading to subtle bugs. As a result, event-driven programs can be difficult to understand, making them hard to debug, maintain, extend, and validate. This talk presents recent approaches to event-driven software quality based on static analysis and testing, along with some open problems. We will discuss progress on how to avoid buffer overflow in TCP servers, stack overflow and missed deadlines in microcontrollers, and rapid battery drain in sensor networks. Our work is part of the Event Driven Software Quality project at UCLA, which is aimed at building the next generation of language and tool support for event-driven programming.",2009,0, 4347,Proposal of Stateful Reliability Counter in Small-World Cellular Neural Networks,"Cellular neural networks (CNN) are a neural network model in which cells are linked only to their neighborhoods. CNN are suited for image processing tasks such as noise reduction and edge detection. Small-world cellular neural networks (SWCNN) are CNN extended by adding small-world links, which are global short-cuts. SWCNN has better performance than CNN. One of the weak points of SWCNN is fault tolerance. We proposed multiple SWCNN layers in order to improve the fault tolerance of SWCNN. However, this is not sufficient because only stop failures are considered. In this paper, we propose a stateful reliability counter for triple modular redundancy (stateful RC-TMR) method in order to improve fault tolerance.",2009,0, 4348,Performance Evaluation of Link Quality Extension in Multihop Wireless Mobile Ad-hoc Networks,"Recently, mobile ad-hoc networks (MANET) have continued to attract attention for their potential use in several fields. Most of the work has been done in simulation, because a simulator can give a quick and inexpensive understanding of protocols and algorithms. However, experimentation in the real world is very important to verify the simulation results and to revise the models implemented in the simulator. In this paper, we present the implementation and analysis of our testbed considering the link quality window size (LQWS) parameter for the optimized link state routing (OLSR) protocol. We investigate the effect of mobility on the throughput of a MANET. The mobile nodes move toward the destination at a regular speed. When the mobile nodes arrive at the corner, they stop for about three seconds. In our experiments, we consider two cases: only one node is moving (mobile node) and two nodes (intermediate nodes) are moving at the same time. We assess the performance of our testbed in terms of throughput, round trip time, jitter and packet loss.
From our experiments, we found that throughput of TCP was improved by reducing LQWS.",2009,0, 4349,Bandwidth Scalable Wideband Codec Using Hybrid Matching Pursuit Harmonic/CELP Scheme,"A novel hybrid harmonic/CELP scheme for bandwidth scalable wideband codec is proposed that utilizes a band-split technique, where the low-band (0-4 kHz) is critically subsampled and coded using 11.8 kbps G.729E. The high-band signal is divided into stationary mode (SM) and non-stationary mode (NSM) components based on its unique characteristics. In the SM portion, the high-band signal is compressed using a multi-stage coding combined sinusoidal model based on the matching pursuit (MP) algorithm and CELP with the circular codebook. In the NSM portion, the high-band signals are coded by CELP with both pulse and circular codebooks. For efficient bit allocation and enhanced performance, the pitch of the high-band codec is estimated using the quantized pitch parameter in low-band codec. In an informal listening test, the subjective speech quality was rated as comparable to that obtainable with 48 kbps G.722 and 12.85 kbps G.722.2.",2009,0, 4350,Automatic modulation classification for cognitive radios using cyclic feature detection,"Cognitive radios have become a key research area in communications over the past few years. Automatic modulation classification (AMC) is an important component that improves the overall performance of the cognitive radio. Most modulated signals exhibit the property of cyclostationarity that can be exploited for the purpose of classification. In this paper, AMCs that are based on exploiting the cyclostationarity property of the modulated signals are discussed. Inherent advantages of using cyclostationarity based AMC are also addressed. When the cognitive radio is in a network, distributed sensing methods have the potential to increase the spectral sensing reliability, and decrease the probability of interference to existing radio systems. The use of cyclostationarity based methods for distributed signal detection and classification are presented. Examples are given to illustrate the concepts. The Matlab codes for some of the algorithms described in the paper are available for free download at http://filebox.vt.edu/user/bramkum.",2009,0, 4351,Experiments on the test case length in specification based test case generation,"Many different techniques have been proposed to address the problem of automated test case generation, varying in a range of properties and resulting in very different test cases. In this paper we investigate the effects of the test case length on resulting test suites: Intuitively, longer test cases should serve to find more difficult faults but will reduce the number of test cases necessary to achieve the test objectives. On the other hand longer test cases have disadvantages such as higher computational costs and they are more difficult to interpret manually. Consequently, should one aim to generate many short test cases or fewer but longer test cases? We present the results of a set of experiments performed in a scenario of specification based testing for reactive systems. 
As expected, a long test case can achieve higher coverage and fault detecting capability than a short one, while giving preference to longer test cases in general can help reduce the size of test suites but can also have the opposite effect, for example, if minimization is applied.",2009,0, 4352,Calculating BPEL test coverage through instrumentation,"Assessing the quality of tests for BPEL processes is a difficult task in projects following SOA principles. Since insufficient testing can lead to unforeseen defects that can be extremely costly in complex and mission critical environments, this problem needs to be addressed. By using formally defined test metrics that can be evaluated automatically by using an extension to the BPELUnit testing framework, testers are able to assess whether their white box tests cover all important areas of a BPEL process. This leads to better tests and thus to better BPEL processes because testers can improve their test cases by knowing which important areas of the BPEL process have not been tested yet.",2009,0, 4353,Testing for trustworthiness in scientific software,"Two factors contribute to the difficulty of testing scientific software. One is the lack of testing oracles - a means of comparing software output to expected and correct results. The second is the large number of tests required when following any standard testing technique described in the software engineering literature. Due to the lack of oracles, scientists use judgment based on experience to assess trustworthiness, rather than correctness, of their software. This is an approach well established for assessing scientific models. However, the problem of assessing software is more complex, exacerbated by the problem of code faults. This highlights the need for effective and efficient testing for code faults in scientific software. Our current research suggests that a small number of well chosen tests may reveal a high percentage of code faults in scientific software and allow scientists to increase their trust.",2009,0, 4354,VLSI oriented fast motion estimation algorithm based on macroblock and motion feature analysis,"The latest H.264/AVC standard provides superior coding performance. However, the new techniques also bring about a complexity problem, especially in the motion estimation (ME) part. In hardware, the pipeline stage division of an H.264-based ME engine degrades many software-oriented complexity reduction schemes. In this paper, we propose a VLSI-friendly fast ME algorithm. Firstly, pixel-difference-based adaptive subsampling achieves complexity reduction for homogeneous macroblocks (MBs). Secondly, a multiple reference frame elimination scheme is introduced to terminate the ME process early for static MBs. Thirdly, based on motion feature analysis, the search range is adjusted to remove redundant search points. Experimental results show that, compared with the hardware-friendly full search algorithm, our proposed algorithm can reduce ME time by 71.09% to 95.26% with negligible video quality degradation. Moreover, our fast algorithm can be combined with existing fast motion estimation algorithms such as UMHexagon search for further reduction in complexity, and it is friendly to hardware implementation.
On the other hand, transmission errors have become the major problem faced by video broadcasting service providers. Error concealment (EC) is adopted here to handle slices containing large contiguous corrupted areas. Considering that error propagation from a corrupted slice to succeeding ones is the key factor affecting video quality, this paper proposes a novel temporal EC scheme including a bi-direction motion vector (MV) retrieval method and an adaptive EC ordering based on it. The background and the parts of a slice with steady motion are given top and second priority, respectively. Combined with our proposed improved boundary matching algorithm (IBMA), which provides a more accurate distortion function, experimental results show that our proposal achieves better performance under channels with different error rates, compared with the EC algorithm adopted in the H.264 reference software.",2009,0, 4356,Software reliability prediction using multi-objective genetic algorithm,"Software reliability models are very useful for estimating the probability of software failure over time. Several different models have been proposed to predict software reliability growth (SRGM); however, none of them has proven to perform well across different project characteristics. Such models provide the ability to predict the number of faults in the software during the development and testing processes. In this paper, we explore Genetic Algorithms (GA) as an alternative approach to derive these models. GA is a powerful machine learning and optimization technique for estimating the parameters of well known reliability growth models. Moreover, to overcome the uncertainties in the modeling, multiple models are combined using multiple objective functions to achieve the best generalization performance; the objectives are conflicting, and no design exists which can be considered best with respect to all objectives. In this paper, experiments were conducted to confirm these hypotheses. The predictive capability of the ensemble of models optimized using multi-objective GA was then evaluated. Finally, the results were compared with traditional models.",2009,0, 4357,A platform for software engineering research,"Research in the fields of software quality, maintainability and evolution requires the analysis of large quantities of data, which often originate from open source software projects. Collecting and preprocessing data, calculating metrics, and synthesizing composite results from a large corpus of project artifacts is a tedious and error prone task lacking direct scientific value. The Alitheia Core tool is an extensible platform for software quality analysis that is designed specifically to facilitate software engineering research on large and diverse data sources, by integrating data collection and preprocessing phases with an array of analysis services, and presenting the researcher with an easy to use extension mechanism. Alitheia Core aims to be the basis of an ecosystem of shared tools and research data that will enable researchers to focus on their research questions at hand, rather than spend time on re-implementing analysis tools.
In this paper, we present the Alitheia Core platform in detail and demonstrate its usefulness in mining software repositories by guiding the reader through the steps required to execute a simple experiment.",2009,0, 4358,Does calling structure information improve the accuracy of fault prediction?,"Previous studies have shown that software code attributes, such as lines of source code, and history information, such as the number of code changes and the number of faults in prior releases of software, are useful for predicting where faults will occur. In this study of an industrial software system, we investigate the effectiveness of adding information about calling structure to fault prediction models. The addition of calling structure information to a model based solely on non-calling structure code attributes provided noticeable improvement in prediction accuracy, but only marginally improved the best model based on history and non-calling structure code attributes. The best model based on history and non-calling structure code attributes outperformed the best model based on calling and non-calling structure code attributes.",2009,0, 4359,Mining the coherence of GNOME bug reports with statistical topic models,"We adapt latent Dirichlet allocation to the problem of mining bug reports in order to define a new information-theoretic measure of coherence. We then apply our technique to a snapshot of the GNOME Bugzilla database consisting of 431,863 bug reports for multiple software projects. In addition to providing an unsupervised means for modeling report content, our results indicate substantial promise in applying statistical text mining algorithms for estimating bug report quality. Complete results are available from our supplementary materials Web site at http://sourcerer.ics.uci.edu/msr2009/gnome_coherence.html.",2009,0, 4360,Colony image classification on fuzzy mathematics,"Because colony image quality varies greatly, an ordinary colony delineation algorithm has difficulty segmenting all kinds of colony images; therefore, image classification is necessary before image segmentation. The colony image classification method developed in this study defines a judgment set, determines a fuzzy judgment matrix, and defines a weight set based on colony image characteristics, which are: (1) colony density; (2) colony area percentage (the ratio between colony area and whole area of the image); (3) colony area variance; and (4) grey contrast between colony and nutrient fluid. Experiments show that the studied method produces reasonable classifications; it can be used for colony image recognition and image pre-segmentation, and can also be extended to other similar applications.",2009,0, 4361,Accurate Interprocedural Null-Dereference Analysis for Java,"Null dereference is a commonly occurring defect in Java programs, and many static-analysis tools identify such defects. However, most of the existing tools perform a limited interprocedural analysis. In this paper, we present an interprocedural path-sensitive and context-sensitive analysis for identifying null dereferences. Starting at a dereference statement, our approach performs a backward demand-driven analysis to identify precisely paths along which null values may flow to the dereference. The demand-driven analysis avoids an exhaustive program exploration, which lets it scale to large programs. We present the results of empirical studies conducted using large open-source and commercial products.
Our results show that: (1) our approach detects fewer false positives, and significantly more interprocedural true positives, than other commonly used tools; (2) the analysis scales to large subjects; and (3) the identified defects are often deleted in subsequent releases, which indicates that the reported defects are important.",2009,0, 4362,MINTS: A general framework and tool for supporting test-suite minimization,"Test-suite minimization techniques aim to eliminate redundant test cases from a test-suite based on some criteria, such as coverage or fault-detection capability. Most existing test-suite minimization techniques have two main limitations: they perform minimization based on a single criterion and produce suboptimal solutions. In this paper, we propose a test-suite minimization framework that overcomes these limitations by allowing testers to (1) easily encode a wide spectrum of test-suite minimization problems, (2) handle problems that involve any number of criteria, and (3) compute optimal solutions by leveraging modern integer linear programming solvers. We implemented our framework in a tool, called MINTS, that is freely-available and can be interfaced with a number of different state-of-the-art solvers. Our empirical evaluation shows that MINTS can be used to instantiate a number of different test-suite minimization problems and efficiently find an optimal solution for such problems using different solvers.",2009,0, 4363,Alitheia Core: An extensible software quality monitoring platform,"Research in the fields of software quality and maintainability requires the analysis of large quantities of data, which often originate from open source software projects. Pre-processing data, calculating metrics, and synthesizing composite results from a large corpus of project artefacts is a tedious and error prone task lacking direct scientific value. The Alitheia Core tool is an extensible platform for software quality analysis that is designed specifically to facilitate software engineering research on large and diverse data sources, by integrating data collection and preprocessing phases with an array of analysis services, and presenting the researcher with an easy to use extension mechanism. The system has been used to process several projects successfully, forming the basis of an emerging ecosystem of quality analysis tools.",2009,0, 4364,Clustering and Metrics Thresholds Based Software Fault Prediction of Unlabeled Program Modules,"Predicting the fault-proneness of program modules when the fault labels for modules are unavailable is a practical problem frequently encountered in the software industry. Because fault data belonging to previous software version is not available, supervised learning approaches can not be applied, leading to the need for new methods, tools, or techniques. In this study, we propose a clustering and metrics thresholds based software fault prediction approach for this challenging problem and explore it on three datasets, collected from a Turkish white-goods manufacturer developing embedded controller software. Experiments reveal that unsupervised software fault prediction can be automated and reasonable results can be produced with techniques based on metrics thresholds and clustering. 
The results of this study demonstrate the effectiveness of metrics thresholds and show that the standalone application of metrics thresholds (one-stage) is currently easier than the clustering and metrics thresholds based (two-stage) approach because the selection of cluster number is performed heuristically in this clustering based method.",2009,0, 4365,Automated substring hole analysis,"Code coverage is a common measure for quantitatively assessing the quality of software testing. Code coverage indicates the fraction of code that is actually executed by tests in a test suite. While code coverage has been around since the 60's there has been little work on how to effectively analyze code coverage data measured in system tests. Raw data of this magnitude, containing millions of data records, is often impossible for a human user to comprehend and analyze. Even drill-down capabilities that enable looking at different granularities starting with directories and going through files to lines of source code are not enough. Substring hole analysis is a novel method for viewing the coverage of huge data sets. We have implemented a tool that enables automatic substring hole analysis. We used this tool to analyze coverage data of several large and complex IBM software systems. The tool identified coverage holes that suggested interesting scenarios that were untested.",2009,0, 4366,Configuration aware prioritization techniques in regression testing,"Configurable software lets users customize applications in many ways, and is becoming increasingly prevalent. Regression testing is an important but expensive way to build confidence that software changes introduce no new faults as software evolves, resulting in many attempts to improve its performance given limited resources. Whereas problems such as test selection and prioritization at the test case level have been extensively researched in the regression testing literature, they have rarely been considered for configurations, though there is evidence that we should not ignore the effects of configurations on regression testing. This research intends to provide a framework for configuration aware prioritization techniques, evaluated through empirical studies.",2009,0, 4367,Reducing search space of auto-tuners using parallel patterns,"Auto-tuning is indispensable to achieve best performance of parallel applications, as manual tuning is extremely labor intensive and error-prone. Search-based auto-tuners offer a systematic way to find performance optimums, and existing approaches provide promising results. However, they suffer from large search spaces. In this paper we propose the idea to reduce the search space using parameterized parallel patterns. We introduce an approach to exploit context information from Master/Worker and Pipeline patterns before applying common search algorithms. The approach enables a more efficient search and is suitable for parallel applications in general. In addition, we present an implementation concept and a corresponding prototype for pattern-based tuning. The approach and the prototype have been successfully evaluated in two large case studies. 
Due to the significantly reduced search space, a common hill climbing algorithm and a random sampling strategy require on average 54% fewer tuning iterations, while even achieving better accuracy in most cases.",2009,0, 4368,Discovering determinants of high volatility software,"This topic paper presents a line of research that we are proposing that incorporates, in a very explicit and intentional way, human and organizational aspects in the prediction of troublesome (defect-prone or change-prone or volatile, depending on the environment) software modules. Much previous research in this area tries to identify these modules by looking at their structural characteristics, so that increased effort can be concentrated on those modules in order to reduce future maintenance costs. The outcome of this work will be a set of models that describe the relationships between software change characteristics (in particular human, organizational, and process characteristics), changes in the structural properties of software modules (e.g. complexity), and the future volatility of those modules. The impact of such models is two-fold. First, maintainers will have an improved technique for estimation of maintenance effort based on recent change characteristics. Also, the models will help identify management practices that are associated with high future volatility.",2009,0, 4369,Software aging assessment through a specialization of the SQuaRE quality model,"In recent years the software application portfolio has become a key asset for almost all companies. During their lives, applications undergo lots of changes to add new functionalities or to refactor older ones; these changes tend to reduce the quality of the applications themselves, causing the phenomenon known as software aging. Monitoring of software aging is very important for companies, but up to now there are no standard approaches to perform this task. In addition, many of the suggested models assess software aging based on a few software features, whereas this phenomenon affects all aspects of the software. In 2005 ISO/IEC released the SQuaRE quality model, which covers several elements of software quality assessment, but some issues make SQuaRE usage quite difficult. The purpose of this paper is to suggest an approach to software aging monitoring that considers the software product as a whole and to provide a specialization of the SQuaRE quality model which allows this task to be performed.",2009,0, 4370,Existing model metrics and relations to model quality,"This paper presents quality goals for models and provides a state-of-the-art analysis regarding model metrics. While model-based software development often requires assessing the quality of models at different abstraction and precision levels and developed for multiple purposes, existing work on model metrics does not reflect this need. Model size metrics are descriptive and may be used for comparing models, but their relation to model quality is not well-defined. Code metrics have been proposed for application to models for evaluating design quality, while metrics related to other quality goals are few. Models often consist of a significant number of elements, which allows a large number of metrics to be defined on them.
However, identifying useful model metrics, linking them to model quality goals, providing some baseline for interpretation of data, and combining metrics with other evaluation models such as inspections requires more theoretical and empirical work.",2009,0, 4371,An investigation of using Neuro-Fuzzy with software size estimation,"Neuro-fuzzy refers to a hybrid intelligent system using both neural networks and fuzzy logic. In this study, neuro-fuzzy is applied to a backfiring and categorical data size estimation model. An evaluation was conducted to determine whether a neuro-fuzzy approach improves software size estimates. It was found that a neuro-fuzzy approach provides minimal improvement over the traditional backfiring sizing technique.",2009,0, 4372,Assessing Quality of Derived Non Atomic Data by Considering Conflict Resolution Function,We present a data quality manager (DQM) prototype providing information regarding the elements of derived non-atomic data values. Users are able to make effective decisions by trusting data according to the description of the conflict resolution function that was utilized for fusing data along with the quality properties of data ancestors. The assessment and ranking of non-atomic data is possible by the specification of quality properties and priorities from users at any level of experience.,2009,0, 4373,SPEWS: A Framework for the Performance Analysis of Web Services Orchestrated with BPEL4WS,"This paper addresses quality of service aspects of Web services (WS) orchestrations created using the business process execution language for Web services (BPEL4WS). BPEL4WS is a promising language for describing WS orchestrations in the form of business processes, but it lacks a sound formal semantics, which hinders the formal analysis and verification of business processes specified in it. Formal methods, like Petri nets (PN), may provide a means to analyse BPEL4WS processes, evaluating their performance and detecting weaknesses and errors in the process model already at design time. A framework for the transformation of BPEL4WS into generalized stochastic Petri nets (GSPN) is proposed to analyse the performance and throughput of WS, based on the execution of orchestrated processes.",2009,0, 4374,A State Transition Model for Anti-Interference on the Embedded System,"Microprogrammed control units (MCU) and embedded systems are widely used in all kinds of industrial and communication devices and household appliances. However, even with good hardware protection, the stability of an embedded system is often affected by complicated electromagnetic interference (EMI) and other interference. To achieve a better system stability ratio, this paper introduces a potent state transition model for anti-interference purposes instead of empirical methods. The states of the embedded system are classified into three types: I/O, computing, and error. From the state transition diagram, we derived an anti-interference probability model, which guides the software anti-interference measures and reduces the probability of instability of the embedded system.",2009,0, 4375,Adaptable Link Quality Estimation for Multi Data Rate Communication Networks,"QoS-sensitive applications transmitted over wireless links require precise knowledge of the wireless environment. However, the dynamic nature of the wireless channel, together with its different configurations and types, makes so-called link quality estimation (LQE) a difficult task.
This paper looks into the design of an accurate and fast LQE method for a multi data rate environment. We investigate the impact of various conditions on the LQE accuracy. As a result, two different link quality estimation sources, i.e., based on hello packet delivery ratio and signal strength, are measured and their performances are compared. We find that these two methods are not always accurate. As an improvement we propose an adaptive LQE method that chooses different LQE indicators depending on the wireless environment. The performance of the proposed method is verified via an extensive measurement campaign.",2009,0, 4376,Are real-world power systems really safe?,"With the advent of new power system analysis software, a more detailed arc flash (AF) analysis can be performed under various load conditions. These new tools can also evaluate equipment damage, design systems with lower AF, and predict electrical fire locations based on high AF levels. This article demonstrates how AF levels change with available utility mega volt amperes (MVA), additions in connected load, and selection of system components. This article summarizes a detailed analysis of several power systems to illustrate the possible misuses of the ""Risk Category Classification Tables"" in the Standard for Electrical Safety Requirements for Employee Workplaces, 2004 (NFPA 70E), while pointing toward future improvements of such standards.",2009,0, 4377,Level-2 Calorimeter Trigger Upgrade at CDF,"The CDF Run II level 2 calorimeter trigger is implemented in hardware and is based on a simple algorithm that was used in Run I. This system has worked well for Run II at low luminosity. As the Tevatron instantaneous luminosity increases, the limitation due to this simple algorithm starts to become clear. As a result, some of the most important jet and MET (missing ET) related triggers have large growth terms in cross section at higher luminosity. In this paper, we present an upgrade of the L2CAL system which makes the full calorimeter trigger tower information directly available to the level 2 decision CPU. This upgrade is based on the Pulsar, a general purpose VME board developed at CDF and already used for upgrading both the level 2 global decision crate and the level 2 silicon vertex tracking. The upgrade system allows more sophisticated algorithms to be implemented in software and both level 2 jets and MET can be made nearly equivalent to offline quality, thus significantly improving the performance and flexibility of the jet and MET related triggers. This is a natural expansion of the already-upgraded level 2 trigger system, and is a big step forward to improve the CDF triggering capability at level 2. This paper describes the design, the hardware and software implementation and the performance of the upgrade system.",2009,0, 4378,Improved Spatial Resolution in PET Scanners Using Sampling Techniques,"Increased focus towards improved detector spatial resolution in PET has led to the use of smaller crystals in some form of light sharing detector design. In this work we evaluate two sampling techniques that can be applied during calibrations for pixelated detector designs in order to improve the reconstructed spatial resolution. The inter-crystal positioning technique utilizes sub-sampling in the crystal flood map to better sample the Compton scatter events in the detector. The Compton scatter rejection technique, on the other hand, rejects those events that are located further from individual crystal centers in the flood map.
We performed Monte Carlo simulations followed by measurements on two whole-body scanners for point source data. The simulations and measurements were performed for scanners using scintillators with Zeff ranging from 46.9 to 63 for LaBr3 and LYSO, respectively. Our results show that near the center of the scanner, the inter-crystal positioning technique leads to a gain of about 0.5 mm in reconstructed spatial resolution (FWHM) for both scanner designs. In a small animal LYSO scanner the resolution improves from 1.9 mm to 1.6 mm with the inter-crystal technique. The Compton scatter rejection technique shows higher gains in spatial resolution but at the cost of a reduction in scanner sensitivity. The inter-crystal positioning technique represents a modest acquisition software modification for an improvement in spatial resolution, but at a cost of potentially longer data correction and reconstruction times. The Compton scatter rejection technique, while also requiring a modest acquisition software change with no increased data correction and reconstruction times, will be useful in applications where the scanner sensitivity is very high and larger improvements in spatial resolution are desirable.",2009,0, 4379,QoS Enhancements and Performance Analysis for Delay Sensitive Applications,This paper presents a comprehensive system modeling and analysis approach for both predicting and controlling queuing delay at an expected value under multi-class traffic in a single buffer. This approach could effectively enhance QoS delivery for delay sensitive applications. Six major contributions are given in the paper: (1) a discrete-time analytical model is developed for capturing multi-class traffic with binomial distribution; (2) a control strategy with dynamic queue thresholds is used in simulation experiments to control the delay at a specified value within the buffer; (3) the feasibility of the system is validated by comparing theoretical analysis with simulation scenarios; (4) the arrival rate can be adjusted for each forthcoming time window during the simulation with multi-packet sources; (5) statistical evaluation is performed to show both efficiency and accuracy of the analytical and simulation results; (6) a graphical user interface is developed that can provide flexible configuration for the simulation and validate input values.,2009,0, 4380,Packet Scheduling Algorithm with QoS Provision in HSDPA,"The 3rd generation WCDMA standard has been enhanced to offer significantly increased performance for packet data. However, emerging applications such as multimedia demand data rates beyond what basic UMTS can support. To support such high data rates, high speed downlink packet access (HSDPA), labeled as a 3.5G wireless system, has been published in UMTS Release 5. The HSDPA system has to support high speed transmission and promises a peak data rate of up to 10 Mb/s. To achieve such high rates, the system provides channel quality indicator (CQI) information to detect the air interface condition. Many channel-condition-related scheduling schemes have then been proposed to achieve high system performance and guarantee the quality-of-service (QoS) requirements. However, no existing packet scheduling algorithm considers the different priorities of data types under the hybrid automatic repeat request (H-ARQ) scenario. In this paper, we propose a weight combination packet scheduling algorithm that aims to enhance system efficiency and balance QoS requirements.
The proposed scheme is simulated with the OPNET simulator and compared with the Max CIR and PF algorithms in a fast fading channel. Simulation results show that the proposed algorithm can both effectively increase the cell throughput and meet user's satisfaction based on QoS requirements.",2009,0, 4381,Mining Frequent Flows Based on Adaptive Threshold with a Sliding Window over Online Packet Stream,"Traffic measurement is an important component of network applications including usage-based charging, anomaly detection and traffic engineering. With high-speed links, the main problem with traffic measurement is its lack of scalability. Aiming to circumvent this deficiency, we develop a novel and scalable sketch to mine frequent flows over an online packet stream. By dividing the sliding window into buckets, the sketch is not only easy to implement, but also removes obsolete data to identify recent usage trends. Besides, an unbiased estimator is introduced based on a pruning function to preserve large flows. In particular, we illustrate a mechanism for configuring adaptive thresholds which are bound to the actual data without artificial behavior. The adaptive threshold can be regulated to target the mean number of reserved flows in order to protect memory resources. Experiments are also conducted based on real network traces. Results demonstrate that the proposed method can achieve adaptability and controllability of resource consumption without sacrificing accuracy.",2009,0, 4382,Delay and Throughput Trade-Off in WiMAX Mesh Networks,"We investigate WiMAX mesh networks in terms of the delay and throughput trade-off. For a given topology, using our proposed analytical model we study how the slot allocation policy and forwarding probability affect per-node and network performance.",2009,0, 4383,Packet Loss Based Real Time Network Condition Estimation,"Packet loss behaviors have been studied for many years. Traditional packet loss evaluation criteria are not sufficient to describe the characteristics of losses: the loss rate is too simple, while traditional loss episodes are sometimes not stable due to the randomness of loss events. In this paper, we propose an extended loss episode approach, the merged loss episode based on the traditional definition, to estimate the network condition. We also present the corresponding algorithm for its calculation. Simulation results show that compared with traditional loss episodes, the merged version is more stable and effective for measuring network performance on loss issues.",2009,0, 4384,Analysis of Real-time Multimedia Transmission over PR-SCTP with Failover Detection Delay and Reliability Level Differential,The growing availability of different wireless access technologies provides the opportunity for real-time distribution of multimedia content using multi-homing technology. Investigating the behaviors and quality of such applications under heavy network load is necessary. This paper studies the effect of path failure detection threshold and reliability level on Stream Control Transmission Protocol Partial Reliability Extension (PR-SCTP) performance in symmetric and asymmetric path conditions respectively with different path loss rates.
The Evalvid-SCTP platform implemented in the University of Delaware's SCTP ns-2 module performs the emulation experiment.,2009,0, 4385,Predicting Quality of Object-Oriented Systems through a Quality Model Based on Design Metrics and Data Mining Techniques,"Most of the existing object-oriented design metrics and data mining techniques capture similar dimensions in the data sets, thus reflecting the fact that many of the metrics are based on similar hypotheses, properties, and principles. Accurate quality models can be built to predict the quality of object-oriented systems by using a subset of the existing object-oriented design metrics and data mining techniques. We propose a software quality model, namely QUAMO (QUAlity MOdel), which is based on a divide-and-conquer strategy to measure the quality of object-oriented systems through a set of object-oriented design metrics and data mining techniques. The primary objective of the model is to make similar studies on software quality more comparable and repeatable. The proposed model is augmented from five quality models, namely the McCall Model, Boehm Model, FURPS/FURPS+ (i.e. functionality, usability, reliability, performance, and supportability), ISO 9126, and Dromey Model. We empirically evaluated the proposed model on several JUnit releases. We also used linear regression to formulate a prediction equation. The technique is useful to help us interpret the results and to facilitate comparisons of results from future similar studies.",2009,0, 4386,Dn-based architecture assessment of Java Open Source software systems,"Since their introduction in 1994, Martin's metrics have become popular in assessing object-oriented software architectures. While one of the Martin metrics, the normalised distance from the main sequence Dn, was originally designed for assessing individual packages, it has also been applied to assess the quality of entire software architectures. The approach itself, however, has never been studied. In this paper we take the first step towards formalising the Dn-based architecture assessment of Java open source software. We present two aggregate measures: the average normalised distance from the main sequence D̄n, and the parameter λ of the fitted statistical model. Applying these measures to a carefully selected collection of benchmarks, we obtain a set of reference values that can be used to assess the quality of a system architecture. Furthermore, we show that applying the same measures to different versions of the same system provides valuable insights into system architecture evolution.",2009,0, 4387,Classifying software visualization tools using the Bloom's taxonomy of cognitive domain,"There are many software visualization tools available for various purposes. Therefore, how to choose the right visualization tool for a specific task becomes an important issue. Although there are a few taxonomies for classifying visualization tools, each has its own defects. This paper proposes a new classification schema based on the widely used Bloom's cognitive taxonomy. We summarize a set of questions for each level of the Bloom taxonomy using well-defined verbs, to identify whether a visualization tool belongs to the individual cognitive level, which in turn can determine the purposes of the tool. We also conduct case studies on four tools to evaluate this new approach.
The results demonstrate that the new classification schema can provide good guidance for choosing a useful visualization tool.",2009,0, 4388,Program phase and runtime distribution-aware online DVFS for combined Vdd/Vbb scaling,"Complex software programs are mostly characterized by phase behavior and runtime distributions. Due to the dynamism of these two characteristics, it is not efficient to make workload predictions at design time. In our work, we present a novel online DVFS method that exploits both phase behavior and runtime distribution during runtime in combined Vdd/Vbb scaling. The presented method performs a bi-modal analysis of the runtime distribution, and then a runtime distribution-aware workload prediction based on the analysis. In order to minimize the runtime overhead of the sophisticated workload prediction method, it performs table lookups on pre-characterized data during runtime without compromising the quality of energy reduction. It also offers a new concept of program phase suitable for DVFS. Experiments show the effectiveness of the presented method in the case of an H.264 decoder with two sets of long-term scenarios consisting of a total of 4655 frames. It offers a 6.6% ~ 33.5% reduction in energy consumption compared with existing offline and online solutions.",2009,0, 4389,pTest: An adaptive testing tool for concurrent software on embedded multicore processors,"More and more processor manufacturers have launched embedded multicore processors for consumer electronics products because such processors provide high performance and low power consumption to meet the requirements of mobile computing and multimedia applications. To effectively utilize the computing power of multicore processors, software designers are interested in using concurrent processing for such architectures. The master-slave model is one of the popular programming models for concurrent processing. Even though it is a simple model, potential concurrency faults and unreliable slave systems can still lead to anomalies in the entire system. In this paper, we present an adaptive testing tool called pTest to stress test a slave system and to detect the synchronization anomalies of concurrent software in master-slave systems on embedded multicore processors. We use a probabilistic finite-state automaton (PFA) to model the test patterns for stress testing and show how a PFA can be applied to pTest in practice.",2009,0, 4390,Generation of compact test sets with high defect coverage,"Multi-detect (N-detect) testing suffers from the drawback that its test length grows linearly with N. We present a new method to generate compact test sets that provide high defect coverage. The proposed technique makes judicious use of a new pattern-quality metric based on the concept of output deviations. We select the most effective patterns from a large N-detect pattern repository, and guarantee a small test set as well as complete stuck-at coverage. Simulation results for benchmark circuits show that with a compact, 1-detect stuck-at test set, the proposed method provides considerably higher transition-fault coverage and coverage ramp-up compared to another recently-published method. Moreover, in all cases, the proposed method either outperforms or is as effective as the competing approach in terms of bridging-fault coverage and the surrogate BCE+ metric. In many cases, higher transition-fault coverage is obtained than much larger N-detect test sets for several values of N.
Finally, our results provide the insight that, instead of using N-detect testing with as large N as possible, it is more efficient to combine the output deviations metric with multi-detect testing to get high-quality, compact test sets.",2009,0, 4391,Software Sensor-Based STATCOM Control Under Unbalanced Conditions,"A new control strategy for static compensators (STATCOMs) operating under unbalanced voltages and currents is presented in this paper. The proposed strategy adopts a state observer (software sensor) to estimate ac voltages at the STATCOM connection point. This way, physical voltage sensors are not needed and the hardware gets simplified, among other advantages. Using the superposition principle, a controller is designed to independently control both positive and negative sequence currents, eliminating the risk of overcurrents during unbalanced conditions and improving the power quality at the STATCOM connection bus. Two operating modes are proposed for the computation of the reference currents, depending on whether the objective is to compensate unbalanced load currents or regulate bus voltages. The proposed observer allows positive and negative sequences to be estimated in a fraction of the fundamental cycle, avoiding the delay often introduced by filter-based methods. Overall, the STATCOM performance is improved under unbalanced conditions, according to simulation results presented in the paper.",2009,0, 4392,Risk-Aware SLA Brokering Using WS-Agreement,"Service level agreements (SLAs) are facilitators for widening the commercial uptake of grid technology. They provide explicit statements of expectation and obligation between service consumers and providers. However, without the ability to assess the probability that an SLA might fail, commercial uptake will be restricted, since neither party will be willing to agree. Therefore, risk assessment mechanisms are critical to increase confidence in grid technology usage within the commercial sector. This paper presents an SLA brokering mechanism including risk assessment techniques which evaluate the probability of SLA failure. WS-agreement and risk metrics are used to facilitate SLA creation between service consumers and providers within a typical grid resource usage scenario.",2009,0, 4393,Quantifying software reliability and readiness,"As the industry moves to more mature software processes (e.g., CMMI) there is increased need to adopt more rigorous, sophisticated (i.e., quantitative) metrics. While quantitative product readiness criteria are often used for business cases and related areas, software readiness is often assessed more subjectively & qualitatively. Quite often there is no explicit linkage to original performance and reliability requirements for the software. The criteria are primarily process-oriented (versus product oriented) and/or subjective. Such an approach to deciding software readiness increases the risk of poor field performance and unhappy customers. Unfortunately, creating meaningful and useful quantitative in-process metrics for software development has been notoriously difficult. This paper describes novel and quantitative software readiness criteria to support objective and effective decision-making at product shipment. The method organizes and streamlines existing quality and reliability data into a simple metric and visualizations that are applicable across products and releases. 
The methodology amalgamates two schools of thought in quantitative terms: product and process parameters that have been adequately represented to formalize the software readiness index. Parameters from all aspects of the software development life cycle (e.g., requirements, project management & resources, development & testing, audits & assessments, stability and reliability, and technical documentation) that could impact the readiness index are considered.",2009,0, 4394,Accuracy improvement of multi-stage change-point detection scheme by weighting alerts based on false-positive rate,"One promising approach for large-scale simultaneous events (e.g., DDoS attacks and worm epidemics) is to use a multi-stage change-point detection scheme. The scheme adopts two-stage detection. In the first stage, local detectors (LDs), which are deployed on each monitored subnet, detect a change point in a monitored metric such as the outgoing traffic rate. If an LD detects a change point, it sends an alert to a global detector (GD). In the second stage, the GD checks whether the proportion of LDs that send alerts simultaneously is greater than or equal to a threshold value. If so, it judges that large-scale simultaneous events are occurring. In previous studies of the multi-stage change-point detection scheme, it is assumed that the weight of each alert is identical. Under this assumption, the false-positive rate of the scheme tends to be high when some LDs send false-positive alerts frequently. In this paper, we weight alerts based on the false-positive rate of each LD in order to decrease the false-positive rate of the multi-stage change-point detection scheme. In our scheme, the GD infers the false-positive rate of each LD and gives lower weight to LDs with higher false-positive rates. Simulation results show that our proposed scheme can achieve a lower false-positive rate than the scheme without alert weighting under the constraint that the detection rate must be 1.0.",2009,0, 4395,Static Detection of Un-Trusted Variables in PHP Web Applications,"As Web applications support more and more of our daily activities, it is important to improve their reliability and security. The content which users input to a Web application's server side is named un-trusted content. Un-trusted content has a significant impact on the reliability and security of Web applications, so detecting un-trusted variables in server-side programs is important for improving the quality of Web applications. Previous methods perform poorly on weakly typed and untyped server-side programs. To address this issue, this paper proposes a new technique for detecting un-trusted variables in PHP web applications (PHP is a weakly typed server-side language). The technique is based upon a two-phase static analysis algorithm. In the first phase, we extract modules from the Web application. Un-trusted variables are then detected from the modules in the second phase. An implementation of the proposed technique, DUVP, is also presented in the paper, and it has been successfully applied to detect un-trusted variables in a large-scale PHP web application.",2009,0, 4396,Study on the Quality Prediction Model of Software Development,"Based on the fundamental quality data provided by the software development centre and in accordance with principle analysis, the project quality standard was analysed and predicted. The three stages were as follows: project estimation, project budget and project propagation.
The three stages correlated the differential value of workload, changes of project scope and quality standards with the influences behind the quality standards. The results showed that the model had a tremendous effect on the defect elimination rate and the rate of problem occurrences [defect density] during the prediction process in software development.",2009,0, 4397,Power quality analysis of traction supply systems with high speed train,"The characteristics of the harmonic currents of high speed trains are significantly different from those of DC-driven locomotives. The rectifier of CRH5 electric multiple units (EMUs) in China is researched in this paper. The principle of the four-quadrant converter and its predictive current control strategy are studied. The formula for the fundamental and harmonic currents of the parallel two-level four-quadrant converter is given under different power and grid voltage conditions based on the double Fourier transform method, and the distribution characteristics of the harmonic currents are analyzed. Simulation software based on Matlab/Simulink is developed for harmonic analysis of CRH5 EMUs. With different traction power, braking power and traction grid voltage, the harmonic currents injected by the EMUs into the traction grid are studied and compared.",2009,0, 4398,An efficient parallel approach to Random Sample Matching (pRANSAM),"This paper introduces a parallelized variant of the Random Sample Matching (RANSAM) approach, which is a very time and memory efficient enhancement of the common Random Sample Consensus (RANSAC). RANSAM exploits the theory of the birthday attack whose mathematical background is known from cryptography. The RANSAM technique can be applied to various fields of application such as mobile robotics, computer vision, and medical robotics. Since standard computers feature multi-core processors nowadays, a considerable speedup can be obtained by distributing selected subtasks of RANSAM among the available cores. First of all this paper addresses the parallelization of the RANSAM approach. Several important characteristics are derived from a probabilistic point of view. Moreover, we apply a fuzzy criterion to compute the matching quality, which is an important step towards real-time capability. The algorithm has been implemented for Windows and for the QNX RTOS. In an experimental section the performance of both implementations is compared and our theoretical results are validated.",2009,0, 4399,Research of micro integrated navigation system based on MIMU,"According to the characteristics of the application environment and the MIMU itself, a new Kalman filter was designed. The shallow integration mode was adopted to improve fault tolerance and reliability, and considering the low-precision MIMU, the mathematical model of the integrated navigation system was reasonably simplified. An adaptive Kalman filter was designed by judging the average value and variance of the residual, and the weights of the filter were adaptively adjusted according to the external environment. Thus it can avoid divergence and enhance system robustness. The simulation test showed that attitude angle error, velocity error and position error were well estimated, and a vehicle test illustrated that the design was feasible.",2009,0, 4400,A Proposal for Stable Semantic Metrics Based on Evolving Ontologies,"In this paper, we propose a set of semantic cohesion metrics for ontology measurement, which can be used to assess the quality of evolving ontologies in the context of the dynamic and changing Web.
We argue that these metrics are stable and can be used to measure ontology semantics rather than ontology structures. Measuring ontology quality in terms of the semantic inconsistencies caused by ontology evolution is the main concern of this paper. The proposed semantic cohesion metrics are theoretically and empirically validated.",2009,0, 4401,Realization of Wavelet Soft Threshold De-noising Technology Based on Visual Instrument,"The electric power network is injected with a large amount of harmonic current because of the widespread use of nonlinear loads, which does great harm to electricity consumption. In order to prevent harmonic currents from affecting the safety of the system's operation, we should know how much harmonic content the distorted wave contains and take corresponding measures to suppress or compensate it. However, because a lot of noise is present, the detection result is inaccurate; by using a multi-resolution wavelet method, we obtain more accurate network voltage and current values on which subsequent harmonic detection can be carried out. Using MATLAB simulation software combined with LabVIEW, wavelet de-noising performs better at filtering high frequency and noise signals than the traditional Butterworth low-pass filter. Through harmonic detection simulation, the result is shown to be accurate by the THD% calculation: the difference between the standard value and the measured value is very small, with a THD% measurement error of 0.01%. Wavelet soft threshold de-noising technology can also be applied to other monitors, such as three-phase unbalance factor monitors, frequency tracking monitors, and fundamental wave monitors.",2009,0, 4402,Aerosol Lidar Intercomparison in the Framework of SPALINET—The Spanish Lidar Network: Methodology and Results,"A group of eight Spanish lidars was formed in order to extend the European Aerosol Research Lidar Network-Advanced Sustainable Observation System (EARLINET-ASOS) project. This study presents intercomparisons at the hardware and software levels. Results of the system intercomparisons are based on range-square-corrected signals in cases where the lidars viewed the same atmospheres. Comparisons were also made for aerosol backscatter coefficients at 1064 nm (2 systems) and 532 nm (all systems), and for extinction coefficients at 532 nm (2 systems). In total, three field campaigns were carried out between 2006 and 2007. Comparisons were limited to the highest layer found before the free troposphere, i.e., either the atmospheric boundary layer or the aerosol layer just above it. Some groups did not pass the quality assurance criterion on the first attempt. Following modification and improvement to these systems, all systems met the quality criterion. The backscatter algorithm intercomparison consisted of processing lidar signal profiles simulated for two types of atmospheric conditions. Three stages with increasing knowledge of the input parameters were considered. The results showed that all algorithms work well when all inputs are known. They also showed the necessity to perform, when possible, additional measurements to attain better estimation of the lidar ratio, which is the most critical unknown in the elastic lidar inversion.",2009,0, 4403,A cross-input adaptive framework for GPU program optimizations,"Recent years have seen a trend in using graphics processing units (GPU) as accelerators for general-purpose computing. The inexpensive, single-chip, massively parallel architecture of the GPU has evidently brought factors of speedup to many numerical applications.
However, the development of a high-quality GPU application is challenging, due to the large optimization space and the complex, unpredictable effects of optimizations on GPU program performance. Recently, several studies have attempted to use empirical search to help the optimization. Although those studies have shown promising results, one important factor in the optimization, program inputs, has remained unexplored. In this work, we initiate the exploration of this new dimension. By conducting a series of measurements, we find that the ability to adapt to program inputs is important for some applications to achieve their best performance on the GPU. In light of these findings, we develop an input-adaptive optimization framework, namely G-ADAPT, to address this influence by constructing cross-input predictive models for automatically predicting the (near-)optimal configurations for an arbitrary input to a GPU program. The results demonstrate the promise of the framework in serving as a tool to alleviate the productivity bottleneck in GPU programming.",2009,0, 4404,Crash fault detection in celerating environments,"Failure detectors are a service that provides (approximate) information about process crashes in a distributed system. The well-known ""eventually perfect"" failure detector, ◊P, has been implemented in partially synchronous systems with unknown upper bounds on message delay and relative process speeds. However, previous implementations have overlooked an important subtlety with respect to measuring the passage of time in ""celerating"" environments, in which absolute process speeds can continually increase or decrease while maintaining bounds on relative process speeds. Existing implementations either use action clocks, which fail in accelerating environments, or use real-time clocks, which fail in decelerating environments. We propose the use of bichronal clocks, which are a composition of action clocks and real-time clocks. Our solution can be readily adopted to make existing implementations of ◊P robust to process celeration, which can result from hardware upgrades, server overloads, denial-of-service attacks, and other system volatilities.",2009,0, 4405,Evaluating the performance and intrusiveness of virtual machines for desktop grid computing,"We experimentally evaluate the performance overhead of the virtual environments VMware Player, QEMU, VirtualPC and VirtualBox on a dual-core machine. Firstly, we assess the performance of a Linux guest OS running on a virtual machine by separately benchmarking the CPU, file I/O and the network bandwidth. These values are compared to the performance achieved when applications are run on a Linux OS directly over the physical machine. Secondly, we measure the impact that a virtual machine running a volunteer @home project worker causes on a host OS. Results show that the performance attainable on virtual machines depends simultaneously on the virtual machine software and on the application type, with CPU-bound applications much less impacted than IO-bound ones. Additionally, the performance impact on the host OS caused by a virtual machine using all the virtual CPU ranges from 10% to 35%, depending on the virtual environment.",2009,0, 4406,Predicting cache needs and cache sensitivity for applications in cloud computing on CMP servers with configurable caches,"QoS criteria in cloud computing require guarantees about application runtimes, even if CMP servers are shared among multiple parallel or serial applications.
The performance of computation-intensive applications depends significantly on memory performance and especially cache performance. Recent trends are toward configurable caches that can dynamically partition the cache among cores. Proper cache partitioning should then consider the applications' different cache needs and their sensitivity towards insufficient cache space. We present a simple, yet effective and therefore practically feasible black-box model that describes application performance as a function of allocated cache size and only needs three descriptive parameters. Learning these parameters can therefore be done with very few sample points. We demonstrate with the SPEC benchmarks that the model adequately describes application behavior and that curve fitting can accomplish very high accuracy, with a mean relative error of 2.8% and a maximum relative error of 17%.",2009,0, 4407,Generalized Tardiness Quantile Metric: Distributed DVS for Soft Real-Time Web Clusters,"Performing QoS (Quality of Service) control in large computing systems requires an online metric that is representative of the real state of the system. The Tardiness Quantile Metric (TQM) introduced earlier allows control of QoS by efficiently measuring how close to the specified QoS the system is, assuming specific distributions. In this paper we generalize this idea and propose the Generalized Tardiness Quantile Metric (GTQM). By using an online convergent sequential process, defined from a Markov chain, we derive quantile estimations that do not depend on the shape of the workload probability distribution. We then use GTQM to keep QoS controlled in a fine-grained manner, saving energy in soft real-time Web clusters. To evaluate the new metric, we show practical results in a real Web cluster running Linux, Apache, and MySQL, with our QoS control and for both a deterministic workload and an e-commerce workload. The results show that the GTQM method has excellent workload prediction capabilities, which immediately translates into more accurate QoS control, allowing for slower speeds and larger energy savings than the state-of-the-art in soft real-time Web cluster systems.",2009,0, 4408,Machine Vision for Automated Inspection of Surgical Instruments,"In order to improve the measuring speed and accuracy of surgical instrument inspection, a computer-aided measuring and analysis method is highly needed. The purpose of this study was to provide surgical instrument manufacturers with a surgical and microsurgical instrument inspection system, so as to ensure that surgical instruments coming from assembly lines are manufactured correctly. Software and hardware components were designed and developed into a machine vision inspection system. Different surgical instruments such as scissors, forceps, clamps and microsurgical instruments were inspected; the system accuracy reached 0.001 mm. The results showed that the detection system was effective and could perform the vast number of measurements of surgical instruments required for mass-produced devices.",2009,0, 4409,FTTH Network Management Software Tool: SANTAD ver 2.0,"This paper focuses on developing a fiber-to-the-home (FTTH) network management software tool named Smart Access Network Testing, Analyzing and Database (SANTAD) ver 2.0 based on Visual Basic for transmission surveillance and fiber fault identification.
SANTAD will be installed with the optical line terminal (OLT) at the central office (CO) to allow the network service providers and field engineers to detect a fiber fault, locate the failure, and monitor the network performance downstream from the CO towards customer residential locations. SANTAD is able to display the status of each optical network unit (ONU) connected line on a single computer screen with the capability to configure the attenuation and detect the failure simultaneously. The failure information will be delivered to the field engineers for prompt action, while the failed line will be diverted to a protection line to ensure that traffic flows continuously. The database system enables the history of the network scanning process to be analyzed and studied by the engineers.",2009,0, 4410,Autonomous SPY©: Intelligent Web proxy caching detection using neurocomputing and Particle Swarm Optimization,"The demand for Internet content rose dramatically. Servers became more and more powerful and the bandwidth of end user connections and backbones grew constantly. Nevertheless users often experience poor performance when they access Web sites or download files. Reasons for such problems are often performance problems which occur directly on the servers (e.g. poor performance of server-side applications or during flash crowds), problems concerning the network infrastructure (e.g. long geographical distances, network overloads, etc.) and a surprising fact is that many people tend to access the same piece of information repeatedly. Web caching has been recognized as an effective scheme to alleviate the service bottleneck, minimize the user access latency and reduce the network traffic. Consequently, in this paper we discuss an alternative way to tackle these problems with an implementation of the autonomous SPY tool. This tool is capable of deciding autonomously whether or not to cache the objects in a document based on the behavior of users' activities (number of object hits, script size of objects, and time to receive objects) in Internet based electronic services (e-services) for enhancing Web access.",2009,0, 4411,Application of Service Oriented Architecture to emulation of onboard processing satellite systems,"With increased service demands on remote sensing and communication satellites, researchers are considering equipping the next generation of satellite constellations with onboard computer systems capable of general purpose computing in space. This project focuses on the design of an emulator capable of using a local area network to emulate both the passing of data between satellites and to ground-based users and the execution of data processing applications. Because satellite constellations perform a wide variety of services, the emulator design incorporates Service Oriented Architecture (SOA) to allow the flexibility of running a variety of different software applications and to easily support new, as-yet undesigned applications. This emulator uses the Hypercast package of Java network interfaces and a modular programming structure for passing information and files between simulated satellite and user objects. The data routing is controlled through the use of the commercial software Satellite Toolkit (STK), which calculates all communication capability and quality information for any satellite constellation. The system is capable of passing any necessary data allowing for the design of a universal interface for how applications receive information and return results.
Currently, this capability is tested using image capture/passing and image classification software, both of which have proved to be successful applications on a Service Oriented Architecture system.",2009,0, 4412,Notice of Violation of IEEE Publication Principles
Evaluation of GP Model for Software Reliability,"Notice of Violation of IEEE Publication Principles

""Evaluation of GP Model for Software Reliability,""
by S. Paramasivam, and M. Kumaran,
in the 2009 International Conference on Signal Processing Systems, May 2009

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

""A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data,""
by W. Afzal, R. Torkar,
in the Third International Conference on Software Engineering Advances, 2008. ICSEA '08, pp.407-414, October 2008

A number of software reliability growth models (SRGMs) have been proposed in the literature. Due to several reasons, such as violation of models' assumptions and complexity of models, the practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count.",2009,0, 4413,Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software,"Software engineering is continuously facing the challenges of growing complexity of software packages and increased level of data on defects and drawbacks from the software production process. This makes a clarion call for inventions and methods which can enable more reusable, reliable, easily maintainable and high quality software systems with deeper control of the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process. Implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. The process involving early detection of potential problems, productivity evaluation and evaluating external quality factors such as reusability, maintainability, defect proneness and complexity is of utmost importance. Here we discuss the application of CK metrics and an estimation model to predict the external quality parameters for optimizing the design process and production process for desired levels of quality. Estimation of defect proneness in object-oriented systems at the design level is developed using a novel methodology in which models of the relationship between CK metrics and a defect-proneness index are obtained. A multifunctional estimation approach captures the correlation between CK metrics and the defect proneness level of software modules.",2009,0, 4414,Graph-Based Task Replication for Workflow Applications,"The Grid is a heterogeneous and dynamic environment which enables distributed computation. This makes it a technology prone to failures. Some related work uses replication to overcome failures in a set of independent tasks, and in workflow applications, but does not consider possible resource limitations when scheduling the replicas. In this paper, we focus on the use of task replication techniques for workflow applications, trying to achieve not only tolerance to the possible failures in an execution, but also to speed up the computation without requiring the user to implement an application-level checkpoint, which may be a difficult task depending on the application. Moreover, we also study what to do when there are not enough resources for replicating all running tasks. We establish different priorities of replication depending on the graph of the workflow application, giving more priority to tasks with a higher output degree. We have implemented our proposed policy in the GRID superscalar system, and we have run fastDNAml as an experiment to show that our objectives are reached.
Finally, we have identified and studied a problem which may arise due to the use of replication in workflow applications: the replication wait time.",2009,0, 4415,Evaluating Provider Reliability in Grid Resource Brokering,"If Grid computing is to experience widespread commercial adoption, then incorporating risk assessment and management techniques is essential, both during negotiation between service provider and service requester and during run-time. This paper focuses on the role of a resource broker in this context. Specifically, an approach to evaluating the reliability of risk information received from resource providers is presented, using historical data to provide a statistical estimate of the average integrity of their risk assessments, with respect to systematic overestimation or underestimation of the probability of failure. Simulation results are presented, indicating the effectiveness of this approach.",2009,0, 4416,Effective multi-frame motion estimation method for H.264,"In order to achieve a high coding gain, the H.264 video coding standard allows one or more reference frames. However, this greatly increases the computational burden in proportion to the number of the searched reference frames. Therefore, we propose a method to reduce the complexity by selecting the proper reference frames. The proposed algorithm uses the optimal reference frame information in both the P16×16 mode and the adjacent blocks, thus decreasing any unnecessary searching process in the inter modes except for P16×16. Experimental results show that the proposed method considerably saves coding time with negligible quality and bit-rate difference. It can also be adopted with any of the existing motion estimation algorithms to obtain additional performance improvement.",2009,0, 4417,Indoor monitoring of respiratory distress triggering factors using a wireless sensing network and a smart phone,"A Bluetooth-enabled wireless sensing network was designed and implemented for continuous monitoring of indoor humidity and temperature conditions as well as to detect pollutant gases and vapors. The novelty of the work is related to the development of embedded software using Java2ME technology for a smart phone that implements a user-friendly HMI. Two mobile software modules assure sensor node data reading through a Bluetooth connection, primary data processing, data storage and alarm generation according to imposed thresholds for air quality parameters. Additional .NET developed software for a Notebook PC platform makes it possible to remotely configure the mobile application and to receive the data logged in the mobile phone. Using the implemented distributed measurement system, including the smart phone, an intelligent assessment of air conditions for risk factor reduction of asthma or chronic obstructive pulmonary disease is carried out. Several experimental results are also included.",2009,0, 4418,Detecting Software Faults in Distributed Systems,"We are concerned with the problem of detecting faults in distributed software, rapidly and accurately. We assume that the software is characterized by events or attributes, which determine operational modes; some of these modes may be identified as failures. We assume that these events are known and that their probabilistic structure, in their chronological evolution, is also known, for a finite set of different operational modes. We propose and analyze a sequential algorithm that detects changes in operational modes rapidly and reliably.
Furthermore, a threshold operational parameter of the algorithm effectively controls the induced tradeoff between speed, correct detection and false detection.",2009,0, 4419,Real-Time Designing Software for the Sewing Path Design on the Broidery Industry,The software with the user graphic interface has been developed for the sewing path design in the embroidery industry. This new version of the software is an advanced version of the last one and incorporates many advanced new functions. The main purpose of the software is to create an easy and painless environment for the designer to create their own artistic sewing pattern with a suitable sewing path. The resulting sewing path will be automatically transformed into a file with the absolute or relative coordinate value of each stitch point on the sewing path. The coordinate file will be fed into the embroidery machine for the recreation of the design pattern on the surface of the product. Commercial graphic software does not provide a suitable function to transform each stitch point into its respective coordinate values. This drawback results in the transformation process being done by hand with the designer measuring each stitch point one by one. Most designed sewing patterns will involve hundreds of stitch points. It is a tedious and error-prone process that takes most of the design time away from the designer. This software provides immediate help for the designer with the painful coordinate transformation process. This leaves the designer free to focus only on the artistic pattern design.,2009,0, 4420,Using the Number of Faults to Improve Fault-Proneness Prediction of the Probability Models,"The existing fault-proneness prediction methods are based on unsampling, and the training dataset does not contain information on the number of faults of each module and the fault distributions among these modules. In this paper, we propose an oversampling method using the number of faults to improve fault-proneness prediction. Our method uses the information on the number of faults in the training dataset to support better prediction of fault-proneness. Our test illustrates that the difference between the predictions of oversampling and unsampling is statistically significant and our method can improve the prediction of two probability models, i.e. logistic regression and naive Bayes with kernel estimators.",2009,0, 4421,Reliability Analysis in the Early Development of Real-Time Reactive Systems,"The increasing trend toward complex software systems has highlighted the need to incorporate quality requirements earlier in the development process. Reliability is one of the important quality indicators of such systems. This paper proposes a reliability analysis approach to measure reliability in the early development of real-time reactive systems (RTRS). The goal is to provide decision support and detect the first signs of low or decreasing reliability as the system design evolves. The analysis is conducted in a formal development environment for RTRS, formalized mathematically and illustrated using a train-gate-controller case study.",2009,0, 4422,Structuring Cognitive Information for Software Complexity Measurement,"Cognitive complexity of software has emerged as an interesting research area recently with the growing studies in cognitive informatics disciplines.
Cognitive complexity measurement attempts to quantify the software from the perspective of the difficulty for the human brain to process and comprehend the software, in order to enable more precise prediction of the critical information about testability, reliability, and maintainability, as well as the effort spent in the software project. Cognitive informatics theories suggest that cognitive complexity of software depends on fundamental factors such as inputs, outputs, loops/branches structure, and number of operators and operands. Analysis in this paper shows a significant flaw of current cognitive complexity measures: they quantify the factors without considering the dependencies among them. We therefore propose a new method to solve this problem by structuring the factors. The proposed measure was evaluated comparatively to existing metrics, and also proven by satisfying all nine of Weyuker's properties.",2009,0, 4423,Testing of High Resolution ADCs Using Lower Resolution DACs via Iterative Transfer Function Estimation,"Linearity testing of high resolution analog-to-digital converters (ADCs) requires test instrumentation that has high precision digital-to-analog conversion (DAC) capability. Further, a large number of samples need to be collected for linearity testing of high resolution ADCs (18-24 bit) to guarantee test quality. In this paper a novel fast linearity testing approach is proposed for testing high resolution ADCs using a low precision DAC and a potentiometer. A polynomial fit of the transfer function of the ADC is generated using measurements made at intermediate code points. The test setup and analysis procedure makes no assumption about the linearity of the lower precision DAC or the potentiometer used to generate the ADC test stimulus. A least squares based polynomial fitting approach is used to characterize the transfer function of the ADC. The computed transfer function is then used to estimate the Integral Non-Linearity (INL) and the Differential Non-Linearity (DNL) of the system accurately. Software simulations and hardware experiments are performed to validate the proposed methodology.",2009,0, 4424,PID-Based Feature Weight Learning and Its Application in Intrusion Detection,"In case-based reasoning (CBR), cases are generally represented by features. Different features have different importance, which is often described by weights. So how to adaptively learn the weights of different features is a key issue in CBR, which directly impacts the quality and performance of case extraction. Currently, in most practical CBR systems, the feature weights are given by domain experts subjectively. In this paper, we propose a PID operator-based feature weight learning method based on the fundamental theory of the control system. The PID-based feature weighting method is a self-adaptive method, which utilizes a similar neural network architecture to construct the case base. By designing three kinds of adjusting operators (proportional, integral and derivative (PID)), each with different properties (reactive, prudent and sensitive), we can adjust the feature weights from different points of view, such as the current adjustment results, the history results or the last two results.
In order to evaluate the effectiveness of the method, an experiment on network anomaly detection is conducted and the experimental results show that all three operators are effective and can converge the intrusion detection system to a stable state in relatively few iterations.",2009,0, 4425,Ensemble Similarity Measures for Clustering Terms,"Clustering semantically related terms is crucial for many applications such as document categorization and word sense disambiguation. However, automatically identifying semantically similar terms is challenging. We present a novel approach for automatically determining the degree of relatedness between terms to facilitate their subsequent clustering. Using the analogy of ensemble classifiers in machine learning, we combine multiple techniques like contextual similarity and semantic relatedness to boost the accuracy of our computations. A new method, based on Yarowsky's word sense disambiguation approach, to generate high-quality topic signatures for contextual similarity computations, is presented. A technique to measure semantic relatedness between multi-word terms, based on the work of Hirst and St. Onge is also proposed. Experimental evaluation reveals that our method outperforms similar related works. We also investigate the effects of assigning different importance levels to the different similarity measures based on the corpus characteristics.",2009,0, 4426,Quadruple bandwidth true time delay printed microwave lens beam former for ultra wideband multifunctional phased array applications,"A 9 × 13 printed microwave lens beam former was investigated numerically using MoM, and good performance was predicted across the quadruple band 6 GHz - 25 GHz. TTD behavior across the entire band was confirmed by array factor comparisons, with good beam quality demonstrated. Future work on this lens will involve a power efficiency study and prototype measurement validation.",2009,0, 4427,Performance analysis of workflow instances,"In workflow systems, time performance of workflow instances is key to business management and scheduling. Firstly, the concept of active transition was proposed and its performance equivalent model was analyzed under resource-limited circumstances. Then, the concept of active pattern was proposed and time performances of four basic active patterns were deduced. Finally, an example was given to demonstrate how to use the above results to estimate time performance of workflow instances. The proposed method can be used to estimate time performance of workflow instances at any time point and build an efficient performance analysis algorithm.",2009,0, 4428,Multiobjective Evolutionary Optimization Algorithm for Cognitive Radio Networks,"Under cognitive radio (CR), the Quality of Service (QoS) suffers from many dimensions or metrics of communication quality for improving spectrum utilization. To investigate this issue, this paper develops a methodology based on the multiobjective optimization model with genetic algorithms (GAs). The influence of evolving a radio defined by a chromosome is identified. The Multiobjective Cognitive Radio (MOCR) algorithm from genetically manipulating the chromosomes is proposed. Using an adaptive component as an example, the bounds for the maximum benefit are predicted by a proposed model that considers the Pareto front. To find a set of parameters that optimize the radio for the user's current needs, several solutions are presented.
Simulation results show that MOCR is able to find a comparatively better spread of compromise solutions.",2009,0, 4429,Toward a Framework for Evaluating Heterogeneous Architecture Styles,"Evaluating architectures and choosing the correct one is a critical issue in the software engineering domain, given the rapid extension of architecture-driven designs. In the first years of defining architecture styles, some special quality attributes were introduced as their basic attributes. Later, by utilizing them in practice, some results were obtained confirming some of these attributes; meanwhile, some others were not observed. As the software architecture construction process is dependent on and addressed by both usage conditions and quality attributes, in this paper a framework has been proposed to provide an environment and a platform that can cover evaluation of architecture styles with a technique that not only exploits both qualitative and quantitative information but also considers users' needs precisely and with high quality. Moreover, we define a classification and notation in order to describe heterogeneous architectures. It provides us with the ability to evaluate heterogeneous architecture styles of a software system.",2009,0, 4430,Using Virtualization to Improve Software Rejuvenation,"In this paper, we present an approach for software rejuvenation based on automated self-healing techniques that can be easily applied to off-the-shelf application servers. Software aging and transient failures are detected through continuous monitoring of system data and performability metrics of the application server. If some anomalous behavior is identified, the system triggers an automatic rejuvenation action. This self-healing scheme is meant to disrupt the running service for a minimal amount of time, achieving zero downtime in most cases. In our scheme, we exploit the usage of virtualization to optimize the self-recovery actions. The techniques described in this paper have been tested with a set of open-source Linux tools and the XEN virtualization middleware. We conducted an experimental study with two application benchmarks (Tomcat/Axis and TPC-W). Our results demonstrate that virtualization can be extremely helpful for failover and software rejuvenation in the occurrence of transient failures and software aging.",2009,0, 4431,Probabilistic fault diagnosis for IT services in noisy and dynamic environments,"Modern society has come to rely heavily on IT services. To improve the quality of IT services it is important to quickly and accurately detect and diagnose their faults, which are usually detected as the disruption of a set of dependent logical services affected by the failed IT resources. The task, depending on observed symptoms and knowledge about IT services, is always disturbed by noise and dynamic changes in the managed environments. We present a tool for the analysis of IT service faults which, given a set of failed end-to-end services, discovers the underlying resources in a faulty state. We demonstrate empirically that it applies in noisy and dynamically changing environments with bounded errors and high efficiency. We compare our algorithm with two prior approaches, Shrink and Max coverage, in two well-known types of network topologies.
Experimental results show that our algorithm improves the overall performance.",2009,0, 4432,Heteroscedastic models to track relationships between management metrics,"Modern software systems expose management metrics to help track their health. Recently, it was demonstrated that correlations among these metrics allow faults to be detected and their causes localized. In particular, linear regression models have been used to capture metric correlations. We show that for many pairs of correlated metrics in software systems, such as those based on Java Enterprise Edition (JavaEE), the variance of the predicted variable is not constant. This behaviour violates the assumptions of linear regression, and we show that these models may produce inaccurate results. In this paper, leveraging insight from the system behaviour, we employ an efficient variant of linear regression to capture the non-constant variance. We show that this variant captures metric correlations, while taking the changing residual variance into consideration. We explore potential causes underlying this behaviour, and we construct and validate our models using a realistic multi-tier enterprise application. Using a set of 50 fault-injection experiments, we show that we can detect all faults without any false alarm.",2009,0, 4433,Monitoring probabilistic SLAs in Web service orchestrations,"Web services are software applications that are published over the Web, and can be searched and invoked by other programs. New Web services can be formed by composing elementary services; such composite services are called Web service orchestrations. Quality of service (QoS) issues for Web service orchestrations deeply differ from corresponding QoS issues in network management. In an open world of Web services, service level agreements (SLAs) play an important role. They are contracts defining the obligations and rights between the provider of a Web service and a client with respect to the services' function and quality. In a previous work we have advocated using soft contracts of a probabilistic nature for the QoS part of contracts. Soft contracts have no hard bounds on QoS parameters, but rather probability distributions for them. An essential component of SLA management is the continuous monitoring of the performance of called Web services, to check for violation of the agreed SLA. In this paper we propose a statistical technique for QoS contract run time monitoring. Our technique is compatible with the use of soft probabilistic contracts.",2009,0, 4434,Queuing model based end-to-end performance evaluation for MPLS Virtual Private Networks,"Monitoring the end-to-end Quality-of-Service (QoS) is an important task for service providers' Operation Support System (OSS), because it is the fundamental requirement for QoS provisioning. However, it is in fact challenging, and there are few efficient approaches to address it. In this paper, for multi-protocol label switching (MPLS) virtual private networks (VPNs), we propose a queuing model based end-to-end performance evaluation scheme for OSS to monitor the end-to-end delay, which is one of the important QoS metrics. By means of a queuing model, we deduce the relationship between the end-to-end delay and the information available in the Management Information Base (MIB) of routers, and then we present the evaluation scheme which avoids the costly per-packet measurement. The complexity of the proposed scheme is much lower than that of existing schemes.
Extensive simulation results show that the proposed scheme can efficiently evaluate the end-to-end performance metrics (the estimation error is nearly 10%).",2009,0, 4435,An approach to measurement based Quality of Service control for communications networks,"This paper presents a purely empirical approach to estimating the effective bandwidth of aggregated traffic flows independent of traffic model assumptions. The approach is shown to be robust when used in a variety of traffic scenarios such as both elastic and streaming traffic flows of varying degrees of aggregation. The method then forms the basis of two quality of service related traffic performance optimisation strategies. The paper presents a cost efficient approach to supplying suitably accurate demand matrix input for QoS related network planning and a QoS provisioning, revenue maximising admission control algorithm for an IPTV services network. This paper summarises these approaches and discusses the major benefits of an appropriately accurate effective bandwidth estimation algorithm.",2009,0, 4436,Reasoning on Scientific Workflows,"Scientific workflows describe the scientific process from experimental design, data capture, integration, processing, and analysis that leads to scientific discovery. Laboratory Information Management Systems (LIMS) coordinate the management of wet lab tasks, samples, and instruments and allow reasoning on business-like parameters such as ordering (e.g., invoicing) and organization (automation and optimization) whereas workflow systems support the design of workflows in-silico for their execution. We present an approach that supports reasoning on scientific workflows that mix wet and digital tasks. Indeed, experiments are often first designed and simulated with digital resources in order to predict the quality of the result or to identify the parameters suitable for the expected outcome. ProtocolDB allows the design of scientific workflows that may combine wet and digital tasks and provides the framework for prediction and reasoning on performance, quality, and cost.",2009,0, 4437,Queuing Theoretic and Evolutionary Deployment Optimization with Probabilistic SLAs for Service Oriented Clouds,"This paper focuses on service deployment optimization in cloud computing environments. In a cloud, each service in an application is deployed as one or more service instances. Different service instances operate at different quality of service (QoS) levels. In order to satisfy given service level agreements (SLAs) as end-to-end QoS requirements of an application, the application is required to optimize its deployment configuration of service instances. E3/Q is a multiobjective genetic algorithm to solve this problem. By leveraging queuing theory, E3/Q estimates the performance of an application and allows for defining SLAs in a probabilistic manner. Simulation results demonstrate that E3/Q efficiently obtains deployment configurations that satisfy given SLAs.",2009,0, 4438,Gossip-Based Workload Prediction and Process Model for Composite Workflow Service,"In this paper, we propose to predict the workloads of the service components within the composite workflow based on the communication of the queue condition of service nodes. With this information, we actively discard the requests that have a high probability of being dropped in the later stages of the workflow. The benefit of our approach is the efficient saving of limited system resources as well as the SLA-satisfying system performance.
We present mechanisms for four basic workflow patterns and evaluate our mechanism through simulation. The experimental results show that our mechanism can help to successfully serve more requests than the normal mechanism. In addition, our mechanism can maintain a more stable response time than the normal one.",2009,0, 4439,A defect prediction model for software based on service oriented architecture using EXPERT COCOMO,"Software that adopts service oriented architecture has specific features in project scope, lifecycle and technical methodology, and therefore demands a specific defect management process to ensure quality. By leveraging recent best practices in projects that adopt service oriented architecture, this paper presents a defect prediction model for software based on service oriented architecture, and discusses the defect management process based on the presented model.",2009,0, 4440,The research on a new fault wave recording device in generator-transformer units,"This paper presents a new kind of distributed fault recorder including the design of the system structure and the hardware and software design of the recorder. The recorder adopts an NI CompactRIO series programmable automation controller (PAC) for the hardware and virtual instrument technology for the software. The network communication based on TCP/IP between the client and the server is adopted in the power plant. Moreover, an improved frequency tracking algorithm is presented in the monitoring of the electric quantity to improve the detection precision and the processing speed. The detection and operation results show that it has improved the performance greatly and realized authenticity, integrity and reliability and so on.",2009,0, 4441,A novel QoS modeling approach for soft real-time systems with performance guarantees,"This paper introduces a systematic approach for modeling QoS requirements of soft real-time systems with stochastic responsiveness guarantees. While the deadline miss ratio and its proposed extensions have been considered for evaluating firm real-time systems, this work brings out their limitations for assessing the performance of emerging computer services operating over communication infrastructures with non-deterministic dynamics. This work explains how delay frequencies and delay lengths can both be represented in a single quantitatively meaningful measure for performance evaluation of soft real-time constrained computer applications. It also explores the presented approach in the design of scheduling strategies which can ground novel business models for QoS-enabled service-oriented systems.",2009,0, 4442,Usability Evaluation Driven by Cooperative Software Description Framework,"The usability problems of CSCW systems mainly come from two kinds of limitations. On the higher level, they lack sufficient simulation of the use-context influenced by social, political and organizational features; on the lower level, they lack suitable measurement and description of basic collaborative behavior occurring in common cooperative activities. However, the traditional task-based user interface models and discount usability inspection methods cannot address the above problems. Therefore, an overall framework for describing CSCW systems was proposed in this paper from the perspective of usability evaluation.
This framework covers all aspects in understanding the use-context, so it is very helpful for guiding the low-cost usability walkthrough technique in the early stages of the software development cycle to detect usability problems.",2009,0, 4443,Optimizing a highly fault tolerant software RAID for many core systems,We present a parallel software driver for a RAID architecture to detect and correct corrupted disk blocks in addition to tolerating disk failures. The necessary computations demand parallel execution to avoid the processor being the bottleneck for a RAID with high bandwidth. The driver employs the processing power of multicore and manycore systems. We report on the performance of a prototype implementation on a quadcore processor that indicates linear speedup and promises good scalability on larger machines. We use reordering of I/O requests to ensure balance between CPU and disk load.,2009,0, 4444,Built-in aging monitoring for safety-critical applications,"Complex electronic systems for safety or mission-critical applications (automotive, space) must operate for many years in harsh environments. Reliability issues are worsening with device scaling down, while performance and quality requirements are increasing. One of the key reliability issues is to monitor long-term performance degradation due to aging in such harsh environments. For safe operation, or for preventive maintenance, it is desirable that such monitoring may be performed on chip. On-line built-in aging sensors (activated from time to time) can be an adequate solution for this problem. The purpose of this paper is to present a novel methodology for electronic systems aging monitoring, and to introduce a new architecture for an aging sensor. Aging monitoring is carried out by observing the degrading timing response of the digital system. The proposed solution takes into account power supply voltage and temperature variations and allows several levels of failure prediction. Simulation results are presented that ascertain the usefulness of the proposed methodology.",2009,0, 4445,"Quality Indicators on Global Software Development Projects: Does ""Getting to Know You"" Really Matter?","In Spring 2008, five student teams were put into competition to develop software for a Cambodian client. Each extended team comprised students distributed across a minimum of three locations, drawn from the US, India, Thailand and Cambodia. This paper describes a couple of exercises conducted with students to examine their basic awareness of the countries of their collaborators and competitors, and to assess their knowledge of their own extended team members during the course of the project. The results from these exercises are examined in conjunction with the high-level communication patterns exhibited by the participating teams and provisional findings are drawn with respect to quality, as measured through a final product selection process. Initial implications for practice are discussed.",2009,0, 4446,A Method of Software Reliability Test Based on Relative Reliability in Object-Oriented Programming,"This paper presents an approach to using the relative reliability test and operation paths' reliability prediction to adjust the test allocation of software reliability testing in object-oriented programming, which is based on the original operational profile. In this new method, software reliability testing is not only based on the operational profiles but also guided by the relative reliability prediction results of operation paths.
The adjustable range is decided by the original operation running rate and the operation independence factor, which can be obtained by a neural network learning algorithm and differs from software to software.",2009,0, 4447,Interpreting a Successful Testing Process: Risk and Actual Coverage,"Testing is inherently incomplete; no test suite will ever be able to test all possible usage scenarios of a system. It is therefore vital to assess the implication of a system passing a test suite. This paper quantifies that implication by means of two distinct, but related, measures: the risk quantifies the confidence in a system after it passes a test suite, i.e., the number of faults still expected to be present (weighted by their severity); the actual coverage quantifies the extent to which faults have been shown absent, i.e., the fraction of possible faults that has been covered. We provide evaluation algorithms that calculate these metrics for a given test suite, as well as optimisation algorithms that yield the best test suite for a given optimisation criterion.",2009,0, 4448,A Review of Uncertainty Methods Employed in Water Quality Modeling,"The aquatic environment is a complicated system with uncertainty. The study of uncertainty in water quality modeling can improve people's understanding of the nature of the aquatic environment and it is important to the development of water quality modeling. The typology and sources of uncertainty are presented to improve the understanding and characterization of the uncertainty in the water quality modeling process. According to the different sources, a comprehensive review of different uncertainty methods is made. The integration of different uncertainty methods, mathematical theories and algorithms is an important trend of water quality modeling research. The research on uncertainty is an essential part of water quality modeling. It is suggested that uncertainty should be investigated systemically within the entire process of water quality modeling.",2009,0, 4449,"Congestion location detection: Methodology, algorithm, and performance","We address the following question in this study: Can a network application detect not only the occurrence, but also the location of congestion? Answering this question will not only help the diagnosis of network failures and the monitoring of server QoS, but also help developers to engineer transport protocols with more desirable congestion avoidance behavior. The paper answers this question through new analytic results on the two underlying technical difficulties: 1) synchronization effects of loss and delay in TCP, and 2) distributed hypothesis testing using only local loss and delay data. We present a practical congestion location detection (CLD) algorithm that effectively allows an end host to distributively detect whether congestion happens in the local access link or in more remote links. We validate the effectiveness of the CLD algorithm with extensive experiments.",2009,0, 4450,Admission control for roadside unit access in Intelligent Transportation Systems,"Roadside units can provide a variety of potential services for passing-by vehicles in future intelligent transportation systems. Since each vehicle has a limited time period when passing by a roadside unit, it is important to predict whether a service can be finished in time. In this paper, we focus on admission control problems, which are important especially when a roadside unit is in or close to overloaded conditions.
Traditional admission control schemes mainly concern long-term flows, such as VoIP and multimedia services. They are not applicable to highly mobile vehicular environments. Our analysis finds that it is not necessarily accurate to use the deadline to evaluate the risk of whether a flow can be finished in time. Instead, we introduce a potential metric to more accurately predict the total data size that can be transmitted to/from a vehicle. Based on this new concept, we formulate the admission control task as a linear programming problem and then propose a lexicographically maxmin algorithm to solve the problem. Simulation results demonstrate that our scheme can efficiently make admission decisions for incoming transmission requests and effectively avoid system overload.",2009,0, 4451,Experiments results and large scale measurement data for web services performance assessment,"Service provisioning is a challenging research area for the design and implementation of autonomic service-oriented software systems. It includes automated QoS management for such systems and their applications. Monitoring and measurement are two key features of QoS management. They are addressed in this paper as elements of a main step in the provisioning of self-healing web services. In a previous work, we defined and implemented a generic architecture applicable for different services within different business activities. Our approach is based on meta-level communications defined as extensions of the SOAP envelope of the exchanged messages, and implemented within handlers provided by existing web service containers. Using the web services technology, we implemented a complete prototype of a service-oriented Conference Management System (CMS). We exercised our monitoring and measurement architecture using the implemented application and successfully assessed the scalability of our approach on the French grid5000 testbed. In this paper, experimental results are analyzed and concluding remarks are given.",2009,0, 4452,Oil Contamination Monitoring Based on Dielectric Constant Measurement,"Fault information of machinery equipment is often contained in the process of lubricating oil wearing off. Meanwhile, the dielectric constant of the oil changes accordingly during the process. The principle of online oil contamination monitoring based on dielectric constant measurement is proposed. The measurement system is also developed, which includes a capacitance sensor, a small-capacitance detecting circuit and monitoring and analysis software. Experiments are carried out on lubricating oil polluted by different contaminants. The results show that changes in the lubricating oil's relative dielectric constant can be detected effectively and properly distinguished using the developed measurement system, which can be used to determine the proper oil-replacement period and to perform fault prediction for the machinery equipment.",2009,0, 4453,Research on the Gear Fault Diagnosis Using Order Envelope Spectrum,"The speed-up and speed-down of the gearbox are non-stationary processes and the vibration signal cannot be processed by traditional processing methods. In order to process the non-stationary vibration signals such as speed-up or speed-down signals effectively, the order envelope analysis technique is presented. This new method combines the order tracking technique with envelope spectrum analysis. Firstly, the vibration signal is sampled at constant time increments and software is then used to resample the data at constant angle increments.
Therefore, the time domain non-stationary signal is changed into an angle domain stationary signal. In the end, the resampled signals are processed by envelope spectrum analysis. The experimental results show that order envelope spectrum analysis can effectively diagnose gear crack faults.",2009,0, 4454,Vibration Controller of Marine Electromagnetic Vibrator,"The marine electromagnetic vibrator is a non-explosive seismic source for high resolution marine seismic exploration, which uses a sublevel sweep mode. The vibration controller provides a sublevel chirp signal according to the marine geological conditions. The chirp signal is sent to the exciter through a power amplifier. The exciter is coupled with the sea water by a shaking base plate, and the elastic wave is sent into the sea. The vibration controller is the signal source of the marine vibrator, so the signal quality influences the whole performance of the seismic exploration system. This paper discusses the hardware and software of the marine electromagnetic vibrator controller.",2009,0, 4455,A method for selecting and ranking quality metrics for optimization of biometric recognition systems,"In the field of biometrics, evaluation of the quality of biometric samples has a number of important applications. The main applications include (1) rejecting poor quality images during acquisition, (2) using quality as an enhancement metric, and (3) applying it as a weighting factor in fusion schemes. Since a biometric-based recognition system relies on measures of performance such as matching scores and recognition probability of error, it becomes intuitive that the metrics evaluating biometric sample quality have to be linked to the recognition performance of the system. The goal of this work is to design a method for evaluating and ranking various quality metrics applied to biometric images or signals based on their ability to predict recognition performance of a biometric recognition system. The proposed method involves: (1) a preprocessing algorithm operating on pairs of quality scores and generating relative scores, (2) an adaptive multivariate mapping relating quality scores and measures of recognition performance and (3) a ranking algorithm that selects the best combinations of quality measures. The performance of the method is demonstrated on face and iris biometric data.",2009,0, 4456,Global and local quality measures for NIR iris video,"In the field of iris-based recognition, evaluation of the quality of images has a number of important applications. These include image acquisition, enhancement, and data fusion. Iris image quality metrics designed for these applications are used as figures of merit to quantify degradations or improvements in iris images due to various image processing operations. This paper elaborates on the factors and introduces new global and local factors that can be used to evaluate iris video and image quality. The main contributions of the paper are as follows. (1) A fast global quality evaluation procedure for selecting the best frames from a video or an image sequence is introduced. (2) A number of new local quality measures for the iris biometrics are introduced. The performance of the individual quality measures is carefully analyzed.
Since the performance of iris recognition systems is evaluated in terms of the distributions of matching scores and recognition probability of error, a good iris image quality metric is also expected to have its performance linked to the recognition performance of the biometric recognition system.",2009,0, 4457,Characterization of CdTe Detectors for Quantitative X-ray Spectroscopy,"Silicon diodes have traditionally been the detectors of choice for quantitative X-ray spectroscopy. Their response has been very well characterized and existing software algorithms process the spectra for accurate, quantitative analysis. But Si diodes have limited sensitivity at energies above 30 keV, while recent regulations require measurement of heavy metals such as lead and mercury, with K X-ray emissions well above 30 keV. Measuring the K lines is advantageous to reduce the interference of the L lines of these elements with each other and with the K lines of light elements. CdTe has much higher stopping power than Si, making it attractive for measuring higher energies, but the quality and reproducibility of its spectra have limited its use in the energy range of characteristic X-rays, <100 keV. CdTe detectors have now been optimized for energies <100 keV, yielding electronic noise of 500 eV for a 5 mm × 5 mm × 0.75 mm device. This provides adequate energy resolution for distinguishing characteristic X-ray peaks >30 keV. The response function of these CdTe detectors differs in important ways from that of Si detectors, requiring changes to the X-ray analytical software used to process the spectra. This paper will characterize the response of the latest generation of high resolution CdTe detectors at the energies of characteristic X-rays. It will present effects important for X-ray analysis, including the peak shape arising from hole tailing, escape peaks, spectral background, linearity, and stability. Finally, this paper will present spectral processing algorithms optimized for CdTe, including peak shape and escape peak algorithms.",2009,0, 4458,Measurement of worst-case power delivery noise and construction of worst case current for graphics core simulation,"Worst case graphics core power delivery noise is a major indicator of graphics chip performance. The design of a good graphics core power delivery network (PDN) is technically difficult because it is not easy to predict a worst case current stimulus during the pre-silicon design stage. Many times, the worst case power delivery noise is observed when graphics benchmark software is run during post-silicon validation. At times like this, it is too late to rectify the power delivery noise issue unless many extra capacitor placeholders are placed during the early design stage. To intelligently optimize the graphics core power delivery network design and determine the right amount of decoupling capacitors, this paper suggests an approach that sets up a working platform to capture the worst case power delivery noise and later reconstructs the worst case power delivery current using Thevenin's Theorem. The measurement is based on an actual gaming application instead of engineering a special stimulus generated through millions of logic test vectors.
This approach is practical, direct and quick, and does not need huge computing resources or technically skilled logic designers to design algorithms to build the stimulus.",2009,0, 4459,Black-box and gray-box components as elements for performance prediction in telecommunications system,"In order to enable quality impact prediction for newly added functionality to the existing telecommunications system, we introduce new modeling elements to the common development process. In the paper we describe a black-box and grey-box instrumentation and modeling method, used for performance prediction of a distributed system. To verify the basic concept of the method, a prototype system was built. The prototype consists of the existing proprietary telephone exchange system and new, open-source components that form an extension of the system. The paper presents the conceptual architecture of the prototype and analyzes the basic performance impact of introducing open-source software into the existing system. Instrumentation and modeling parameters are discussed, as well as the measurement environment. Initial comparison of predicted and experimentally measured values validates current work on the method.",2009,0, 4460,Workflow Model Performance Analysis Concerning Instance Dwelling Times Distribution,"Instance dwelling times (IDT), which consist of waiting times and handling times in a workflow model, are a key performance analysis goal. In a workflow model the instances which act as customers and the resources which act as servers form a queuing network. A multidimensional workflow net (MWF-net) includes multiple timing workflow nets (TWF-nets) and the organization and resource information. This paper uses queuing theory and MWF-net to discuss the mean value and probability distribution density function (PDDF) of IDT. An example is used to show that the proposed method can be effectively utilized in practice.",2009,0, 4461,A Low Latency Handoff Scheme Based on the Location and Movement Pattern,"Providing a seamless handoff and quality of service (QoS) guarantees is one of the key issues in real-time services. Several IPv6 mobility management schemes have been proposed and can provide uninterrupted service. However, these schemes either have synchronization issues or introduce signaling overhead and a high packet loss rate. This paper presents a scheme that reduces the handoff latency based on the location and movement pattern of a Mobile Node (MN). An MN can detect its movement actively and forecast the handoff, which alleviates the communication cost between the MN and its Access Router (AR). Furthermore, by setting the dynamic interval in an appropriate range, the MN's burden can also be alleviated. Finally, by performance evaluation using both theoretical analysis and computer simulations, we show that the proposed scheme can lower the handoff latency efficiently.",2009,0, 4462,"Fuzzy Evaluation on Software Maintainability Based on Membership Degree Transformation New Algorithm M(1,2,3)","Software maintainability means the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. For different software maintenance organizations or personnel, there are different concerns on software maintainability. If any one of the factors affecting maintainability has not been completely resolved, or is unqualified, maintainability will be affected.
Therefore, in the evaluation process of software maintainability, there is a lot of uncertainty and ambiguity, so it is reasonable and scientific to apply the fuzzy comprehensive evaluation method for software maintainability fuzzy evaluation. The core of fuzzy evaluation is membership degree transformation. But the transformation methods should be questioned, because redundant data in index membership degree is also used to compute object membership degree, which is not useful for object classification. The new algorithm is: using data mining technology based on entropy to mine knowledge information about object classification hidden in every index, affirm the relationship of object classification and index membership, eliminate the redundant data in index membership for object classification by defining distinguishable weight and extract valid values to compute object membership. The new algorithm of membership degree transformation includes three calculation steps which can be summarized as “effective, comparison and composition”, which is denoted as M(1, 2, 3). The paper applies the new algorithm in the fuzzy evaluation of software maintainability.",2009,0, 4463,A Method for Selecting ERP System Based on Fuzzy Set Theory and Analytical Hierarchy Process,"An enterprise resource planning (ERP) system can significantly improve the future competitiveness and performance of a company, but it is also a critical investment and has a high failure probability. Thus, selecting an appropriate ERP system is very important. A method for selecting an ERP system is proposed based on fuzzy set theory and the analytical hierarchy process (AHP). The evaluation criteria system, including vendor brand, quality, price and service, is constructed, a multi-criteria decision-making model is formulated and the solution process is introduced in detail, that is, AHP is used to identify the weights of the criteria and fuzzy set theory is used to deal with the fuzzification of the criteria. An application example of ERP system selection demonstrates the feasibility of the proposed method.",2009,0, 4464,Low Cost Mechanism for QoS-Aware Re-Planning of Composite Service,"To meet the business and user requirements, it is crucial to manage the quality of service (QoS) of composite Web services. Due to the dynamic nature of Web services, the runtime behavior of Web services is likely to differ from the initially estimated one. Therefore, a re-planning step to adapt the composite service to the QoS-violating services, while ensuring the global constraints and preferences, will be needed. In this paper, we propose a low cost mechanism for re-planning. Such a mechanism is supported by an offline re-planning execution environment and a performance prediction based low cost policy. The experiments show better performance of the mechanism.",2009,0, 4465,Comparative Assessment of Fingerprint Sample Quality Measures Based on Minutiae-Based Matching Performance,"Fingerprint sample quality is one of the major factors influencing the matching performance of fingerprint recognition systems. The error rates of fingerprint recognition systems can be decreased significantly by removing poor quality fingerprints. The purpose of this paper is to assess the effectiveness of individual sample quality measures on the performance of minutiae-based fingerprint recognition algorithms. Initially, the authors examined the various factors that influenced the matching performance of the minutiae-based fingerprint recognition algorithms.
Then, the existing measures for fingerprint sample quality were studied and the more effective quality measures were selected and compared with two image quality software packages, (NFIQ from NIST, and QualityCheck from Aware Inc.) in terms of matching performance of a commercial fingerprint matcher (Verifinger 5.0 from Neurotechnologija). The experimental results over various fingerprint verification competition (FVC) datasets show that even a single sample quality measure can enhance the matching performance effectively.",2009,0, 4466,Prediction Models for BPMN Usability and Maintainability,"The measurement of a business process in the early stages of the lifecycle, such as the design and modelling stages, could reduce costs and effort in future maintenance tasks. In this paper we present a set of measures for assessing the structural complexity of business processes models at a conceptual level. The aim is to obtain useful information about process maintenance and to estimate the quality of the process model in the early stages. Empirical validation of the measures was carried out along with a linear regression analysis aimed at estimating process model quality in terms of modifiability and understandability.",2009,0, 4467,Evaluation of Prioritization in Performance Models of DTP Systems,"Modern IT systems serve many different business processes on a shared infrastructure in parallel. The automatic request execution on the numerous interconnected components, hosted on heterogeneous hardware resources, is coordinated in distributed transaction processing (DTP) systems. While pre-defined quality-of-service metrics must be met, IT providers have to deal with a highly dynamic environment concerning workload structure and overall demand when provisioning their systems. Adaptive prioritization is a way to react to short-term demand variances. Performance models can be applied to predict the impacts of prioritization strategies on the overall performance of the system. In this paper we describe the workload characteristics and particularities of two real-world DTP systems and evaluate the effects of prioritization concerning supported overall load and resulting end-to-end performance measures.",2009,0, 4468,Integrity-Checking Framework: An In-situ Testing and Validation Framework for Wireless Sensor and Actuator Networks,"Wireless sensor and actuator network applications require several levels of testing during their development. Although software algorithms can be tested through simulations and syntax checking, it is difficult to predict or test for problems that may occur once the wireless sensor and actuator has been deployed. The requirement for testing is not however limited to the development phase. During the lifecycle of the system, faults, upgrades, retasking, etc. lead to further needs for system validation. In this paper we review the state-of-the-art techniques for testing wireless sensor and actuator applications and propose the integrity-checking framework. The framework provides in-situ full lifecycle testing and validation of wireless sensor and actuator applications by performing an ""integrity check"", during which the sensor inputs and actuator responses are emulated within the physical wireless sensor and actuator.
This enables application-level testing by feeding controlled information to the sensor inputs, while data processing, communication, aggregation and decision making continue as normal across the physical wireless sensor and actuator.",2009,0, 4469,Quantitative Assessment for Organisational Security & Dependability,"There are numerous metrics proposed to assess security and dependability of technical systems (e.g., number of defects per thousand lines of code). Unfortunately, most of these metrics are too low-level, and lack on capturing high-level system abstractions required for organisation analysis. The analysis essentially enables the organisation to detect and eliminate possible threats by system re-organisations or re-configurations. In other words, it is necessary to assess security and dependability of organisational structures next to implementations and architectures of systems. This paper focuses on metrics suitable for assessing security and dependability aspects of a socio-technical system and supporting decision making in designing processes. We also highlight how these metrics can help in making the system more effective in providing security and dependability by applying socio-technical solutions (i.e., organisation design patterns).",2009,0, 4470,Reliable Transactional Web Service Composition Using Refinement Method,"Web services composition is a good way to construct complex Web software. However, Web services composition is prone to fail due to the unstable Web services execution. Thus, it is necessary to deal with the abstract hierarchical modeling of multi-partner business process. Therefore, this paper proposes a refinement method for failure compensation process of transaction mechanism, constructs failure service composition compensation model with the help of paired Petri net and builds a services composition compensation model. It discusses the refinement model and aggregated QoS estimation of the five common aggregation compensation constructs. It takes the classical traveling reservation business process as an example to show the influence on composite business process brought by the aggregated QoS metrics in different failure points, and analyzes the influence of reputation in different failure rate.",2009,0, 4471,Fault Detection of Satellite Network Based on MNMP,"The fault detection of satellite network is required to collect data of managed objects via network management protocol. Therefore, it is a premise of fault management of satellite network to choose a network management protocol designed according to the feature of satellite network. Due to the uniqueness of satellite network, the current network management protocol and standard, such as simple network management protocol (SNMP) and CMIP, are both directly unsuitable for the applying of fault detection of satellite network. By analysis of multiplex network management protocol (MNMP), this paper focuses on the discussion of the MNMP application in the fault detection of satellite network. Besides, in order to investigate the performance of fault detection based on MNMP, this paper also gives a performance test on several means of data acquisition of MNMP, and compares as well as analyze it with SNMP.",2009,0, 4472,An empirical investigation of filter attribute selection techniques for software quality classification,"Attribute selection is an important activity in data preprocessing for software quality modeling and other data mining problems. 
The software quality models have been used to improve the fault detection process. Finding faulty components in a software system during early stages of software development process can lead to a more reliable final product and can reduce development and maintenance costs. It has been shown in some studies that prediction accuracy of the models improves when irrelevant and redundant features are removed from the original data set. In this study, we investigated four filter attribute selection techniques, automatic hybrid search (AHS), rough sets (RS), Kolmogorov-Smirnov (KS) and probabilistic search (PS) and conducted the experiments by using them on a very large telecommunications software system. In order to evaluate their classification performance on the smaller subsets of attributes selected using different approaches, we built several classification models using five different classifiers. The empirical results demonstrated that by applying an attribution selection approach we can build classification models with an accuracy comparable to that built with a complete set of attributes. Moreover, the smaller subset of attributes has less than 15 percent of the complete set of attributes. Therefore, the metrics collection, model calibration, model validation, and model evaluation times of future software development efforts of similar systems can be significantly reduced. In addition, we demonstrated that our recently proposed attribute selection technique, KS, outperformed the other three attribute selection techniques.",2009,0, 4473,Poster abstract: Exploiting the LQI variance for rapid channel quality assessment,"Communicating over a reliable radio channel is vital for an efficient resource usage in sensor networks: a bad radio channel can lead to poor application performance and higher energy consumption. Previous research has shown that the LQI mean value is a good estimator of the link quality. Nevertheless, due to its high variance, many packets are needed to obtain a reliable estimation. Based on experimental results, we show instead that the LQI variance is not a limitation. We show that the variance of the LQI can be used as a metric for a rapid channel quality assessment. Our initial results indicate that identifying good channels using the LQI variance requires an order of magnitude fewer packets than when using the mean LQI.",2009,0, 4474,Humanoid Intelligent Control System of Plastic Corrugated Board Synchronous Shearing,"The synchronous shearing control of the plastic corrugated board is one of the important production processes. This process has a significant impact on the product quality. Through the analysis of its synchronization process, combined with technological requirements, this study puts forward a design method of humanoid intelligent control system. Adopting microprocessor system C8051F310, this study illustrates the design methods of the controller from both hardware and software. The practical use shows that the controller meets the high precision, steady performance, precise shearing, and the requirements of verticality and flatness. Currently, it has been successfully applied to a corrugated board plant of a plastic factory and has obtained high profits for enterprises.",2009,0, 4475,Risky Module Estimation in Safety-Critical Software,"Software used in safety-critical system must have high dependability. Software testing and V&V (Verification and Validation) activities are very important for assuring high software quality. 
If we can predict the risky modules in safety-critical software, testing activities and regulation activities can be applied to them more intensively. In this paper, we classify the estimated risk classes which can be used for deep testing and V&V. We predict the risk class for each module using support vector machines. We can consider that the modules classified to risk class 5 or 4 are more risky than others relatively. For all classification error rates, we expect that the results can be useful and practical for software testing, V&V, and activities for regulatory reviews. In the future works, to improve the practicality, we will have to investigate other machine learning algorithms and datasets.",2009,0, 4476,Prototype Implementation of a Goal-Based Software Health Management Service,"The FAILSAFE project is developing concepts and prototype implementations for software health management in mission-critical real-time embedded systems. The project unites features of the industry standard ARINC 653 Avionics Application Software Standard Interface and JPL's Mission Data System (MDS) technology. The ARINC 653 standard establishes requirements for the services provided by partitioned real-time operating systems. The MDS technology provides a state analysis method, canonical architecture, and software framework that facilitates the design and implementation of software-intensive complex systems. We use the MDS technology to provide the health management function for an ARINC 653 application implementation. In particular, we focus on showing how this combination enables reasoning about and recovering from application software problems. Our prototype application software mimics the space shuttle orbiter's abort control sequencer software task, which provides safety-related functions to manage vehicle performance during launch aborts. We turned this task into a goal-based function that, when working in concert with the software health manager, aims to work around software and hardware problems in order to maximize abort performance results. In order to make it a compelling demonstration for current aerospace initiatives, we additionally imposed on our prototype a number of requirements derived from NASA's Constellation Program. Lastly, the ARINC 653 standard imposes a number of requirements on the system integrator for developing the requisite error handler process. Under ARINC 653, the health monitoring (HM) service is invoked by an application calling the application error service or by the operating system or hardware detecting a fault. It is these HM and error process details that we implement with the MDS technology, showing how a state-analytic approach is appropriate for identifying fault determination details, and showing how the framework supports acting upon state estimation and control features in order to achieve safety-related goals. We describe herein the requirements, design, and implementation of our software health manager and the software under control. We provide details of the analysis and design for the phase II prototype, and describe future directions for the remainder of phase II and the new topics we plan to address in phase III.",2009,0, 4477,Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning,"Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components.
However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We observed good performance from both an existing ABFT method for matrix multiplication and a novel ABFT method for exponentiation. These techniques bring us a step closer to ""rad-hard"" machine learning algorithms.",2009,0, 4478,Synthesizing hardware from sketches,"This paper proposes to adapt sketching, a software synthesis technique, to hardware development. In sketching, the designer develops an incomplete hardware description, providing the ""insight"" into the design. The synthesizer completes the design to match an executable specification. This style of synthesis liberates the designer from tedious and error-prone details-such as timing delays, wiring in combinational circuits and initialization of lookup tables-while allowing him to control low-level aspects of the design. The main benefit will be a reduction of the time-to-market without impairing system performance.",2009,0, 4479,Predicting Performance of Multi-Agent systems during feasibility study,"Agent oriented software engineering (AOSE) is a software paradigm that has grasped the attention of researchers/developers for the last few years. As a result, many different methods have been introduced to enable researchers/developers to develop multi agent systems. However, performance, a non-functional attribute, has not been given that much importance for producing quality software. Performance issues must be considered throughout software project development. Predicting performance early in the life cycle during feasibility study is not considered for predicting performance. In this paper, we consider the data collected (technical and environmental factors) during feasibility study of Multi-Agent software development to predict performance. We derive an algorithm to predict the performance metrics and simulate the results using a case study on scheduling the use of runways on an airport.",2009,0, 4480,Estimation of software projects effort based on function point,"With the development of software industry, software estimation and measurement is catching much more concern. Scale increasing in applications and a variety of programming languages using at the same time, manual measurement based on the LOC (line of code) cannot meet the estimating requirements. The emergence of function point resolves these difficult issues. In order to obtain the software effort, we propose an estimating method for software effort based on function point. It helps to estimate software effort more accurately without considering the languages or developing environment you choose. Firstly, use actual project records to obtain the linear relation between function point and software effort.
Then determine the parameters of linear equations by maximum likelihood estimating method. Finally, you can get the effort of the project according to this equation with the function point given. After obtaining the software effort, project manager can arrange the project progress, control the cost and ensure the quality more accurately.",2009,0, 4481,An Approach to Measuring Software Development Risk Based on Information Entropy,"Software development risk always influences the success of software project, even determine an enterprise surviving or perishing. Consequently, there are some significant meaning for software company and software engineering field to measure effectively the risk. Whereas, the measurement is very difficult because software is a sort of logic product. There are many methods to measure the risk at the present time, but lack of quantificational measurement methods, and these measurements are all localization and they cannot consider well the risk factors and the effect. In this paper, we bring forward a quantificational approach to measuring the software development risk based on information entropy. It involves both the probabilities of the risk factors and the loss, it is an effective synthesis method to measuring software development risk in practice.",2009,0, 4482,Towards Improved Assessment of Bone Fracture Risk,"Summary form only given. The mechanical competence of a bone depends on its density, its geometry and its internal trabecular microarchitecture. The gold standard to determine bone competence is an experimental, mechanical test. Direct mechanical testing is a straight-forward procedure, but is limited by its destructiveness. For the clinician, the prediction of bone quality for individual patients is, so far, more or less restricted to the quantitative analysis of bone density alone. Finite element (FE) analysis of bone can be used as a tool to non-destructively assess bone competence. FE analysis is a computational technique; it is the most widely used method in engineering for structural analysis. With FE analysis it is possible to perform a 'virtual experiment', i.e. the simulation of a mechanical test in great detail and with high precision. What is needed for that are, first, in vivo imaging capabilities to assess bone structure with adequate resolution, and second, appropriate software to solve the image-based FE models. Both requirements have seen a tremendous development over the last years. The last decade has seen the commercial introduction and proliferation of non-destructive microstructural imaging systems such as desktop micro-computed tomography (μCT), which allow easy and relatively inexpensive access to the 3D microarchitecture of bone. Furthermore, the introduction of new computational techniques has allowed to solve the increasingly large FE models, that represent bone in more and more detail. With the recent advent of microstructural in vivo patient imaging systems, these methodologies have reached a level that it is now becoming possible to accurately assess bone strength in humans. Although most applications are still in an experimental setting, it has been clearly demonstrated that it is possible to use these techniques in a clinical setting.
The high level of automation, the continuing increase in computational power, and above all the improved predictive capacity over predictions based on bone mass, make clear that there is great potential in the clinical arena for in vivo FE analyses. Ideally, the development of in vivo imaging systems with microstructural resolution better than 50 μm would allow measurement of patients at different time points and at different anatomical sites. Unfortunately, such systems are not yet available, but the resolution at peripheral sites has reached a level (80 μm) that allows elucidation of individual microstructural bone elements. Whether a resolution of 50 μm in vivo will be reached using conventional CT technology remains to be seen as the required doses may be too high. With respect to these dose considerations, MRI may have considerable potential for future clinical applications to overcome some of the limitations with X-ray CT. With the advent of new clinical MRI systems with higher field strengths, and the introduction of fast parallel-imaging acquisition techniques, higher resolutions in MRI will be possible with comparable image quality and without the adverse effects of ionizing radiation. With these patient scanners, it will be possible to monitor changes in the microarchitectural aspects of bone quality in vivo. In combination with FE analysis it will also allow to predict the mechanical competence of whole bones in the course of age- and disease-related bone loss and osteoporosis. We expect these findings to improve our understanding of the influence of densitometric, morphological but also loading factors in the etiology of spontaneous fractures of the hip, the spine, and the radius. Eventually, this improved understanding may lead to more successful approaches in the prevention of age- and disease-related fractures.",2009,0, 4483,Experimental Analysis of Different Metrics (Object-Oriented and Structural) of Software,"In this paper we first investigate the relationships between existing object-oriented metrics (coupling, cohesion) and procedure-oriented metrics (Line of code, Cyclomatic complexity and knot metric) measure the probability of error detection in system classes during testing and is to propose an investigation and analysis strategy to make these kinds of studies more reusable and comparable, a problem which is persistent in the quality measurement. The metrics are first defined and then explained using practical applications. Finally, a review of the empirical study concerning chosen and coupling metrics and subset of these measures that provide sufficient information is given and metrics providing overlapping information are expelled from the set. The paper defines a new set of operational measures for the conceptual coupling of classes, which are theoretically valid and empirically studied. In this paper, we show that these metrics capture new dimensions in coupling measurement, compared to existing structural metrics.",2009,0, 4484,Filtering Spam in Social Tagging System with Dynamic Behavior Analysis,"Spam in social tagging systems introduced by some malicious participants has become a serious problem for its global popularizing. Some studies which can be deduced to static user data analysis have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms' performances are not good enough.
In this paper, we proposed a novel method based on analysis of dynamic user behavior data for the notion that users' behaviors in social tagging system can reflect the quality of tags more accurately. Through modeling the different categories of participants' behaviors, we extract tag-associated actions which can be used to estimate whether tag is spam, and then present our algorithm that can filter the tag spam in the results of social search. The experiment results show that our method indeed outperforms the existing methods based on static data and effectively defends against the tag spam in various spam attacks.",2009,0, 4485,Prioritization of Scenarios Based on UML Activity Diagrams,"Increased size and complexity of software requires better methods for different activities in the software development lifecycle. Quality assurance of software is primarily done by means of testing, an activity that faces constraints of both time and resources. Hence, there is need to test effectively within the constraints in order to maximize throughput i.e. rate of fault detection, coverage, etc. Test case prioritization involves techniques aimed at finding the best prioritized test suite. In this paper, we propose a prioritization technique based on UML activity diagrams. The constructs of an activity diagram are used to prioritize scenarios. Preliminary results obtained on a case-study indicate that the technique is effective in extracting the critical scenarios from the activity diagram.",2009,0, 4486,Quality of Service Composition and Adaptability of Software Architectures,"Quality of service adaptability refers to the ability of components/services to adapt in run-time the quality exhibited. A composition study from a quality point of view would investigate how these adaptable elements could be combined to meet system's quality requirements. Enclosing quality properties with architectural models has been typically used to improve system understanding. Nevertheless these properties along with some supplementary information about quality adaptation would allow us to carry out a composition study during the design phase and even to predict some features of the adaptability behavior of the system. Existing modeling languages and tools lack enough mechanisms to cope with adaptability, e.g. to describe system elements that may offer/require several quality levels. This paper shows how we reuse existing modeling languages and tools, combine them and create new ones to tackle the problem of quality of service adaptability and composition. The final goal of this work is to evaluate architectural models to predict system's QoS behavior before it is implemented.",2009,0, 4487,Online Self-Healing Support for Embedded Systems,"In this paper, online system-level self-healing support is presented for embedded systems. Different from off-line log analysis methods used by conventional intrusion detection systems, our research focuses on analyzing runtime kernel data structures hence perform self-diagnosis and self-healing. Inside the infrastructure, self-diagnosis and self-healing solutions have been implemented based on several selected critical kernel data structures. They can fully represent current system status and are also closely related with system resources. At runtime once any system inconsistency has been detected, predefined recovery functions are invoked.
Our prototype is developed based on a lightweight virtual machine monitor, above on which the monitored Linux kernel, runtime detection and recovery services run simultaneously. The proposed infrastructure requires few modifications to current Linux kernel source code, thus it can be easily adopted into existing embedded systems. It is also fully software-based without introducing any specific hardware, therefore it is cost-efficient. The evaluation experiment results indicate that our prototype system can correctly detect inconsistent kernel data structures caused by security attacks with acceptable penalty to system performance.",2009,0, 4488,Improving Software-Quality Predictions With Data Sampling and Boosting,"Software-quality data sets tend to fall victim to the class-imbalance problem that plagues so many other application domains. The majority of faults in a software system, particularly high-assurance systems, usually lie in a very small percentage of the software modules. This imbalance between the number of fault-prone (fp) and non-fp (nfp) modules can have a severely negative impact on a data-mining technique's ability to differentiate between the two. This paper addresses the class-imbalance problem as it pertains to the domain of software-quality prediction. We present a comprehensive empirical study examining two different methodologies, data sampling and boosting, for improving the performance of decision-tree models designed to identify fp software modules. This paper applies five data-sampling techniques and boosting to 15 software-quality data sets of different sizes and levels of imbalance. Nearly 50 000 models were built for the experiments contained in this paper. Our results show that while data-sampling techniques are very effective in improving the performance of such models, boosting almost always outperforms even the best data-sampling techniques. This significant result, which, to our knowledge, has not been previously reported, has important consequences for practitioners developing software-quality classification models.",2009,0, 4489,Fault diagnosis and failure prognosis for engineering systems: A global perspective,"Engineering systems, such as aircraft, industrial processes, manufacturing systems, transportation systems, electrical and electronic systems, etc., are becoming more complex and are subjected to failure modes that impact adversely their reliability, availability, safety and maintainability. Such critical assets are required to be available when needed, and maintained on the basis of their current condition rather than on the basis of scheduled or breakdown maintenance practices. Moreover, on-line, real-time fault diagnosis and prognosis can assist the operator to avoid catastrophic events. Recent advances in Condition-Based Maintenance and Prognostics and Health Management (CBM/PHM) have prompted the development of new and innovative algorithms for fault, or incipient failure, diagnosis and failure prognosis aimed at improving the performance of critical systems. This paper introduces an integrated systems-based framework (architecture) for diagnosis and prognosis that is generic and applicable to a variety of engineering systems. 
The enabling technologies are based on suitable health monitoring hardware and software, data processing methods that focus on extracting features or condition indicators from raw data via data mining and sensor fusion tools, accurate diagnostic and prognostic algorithms that borrow from Bayesian estimation theory, and specifically particle filtering, fatigue or degradation modeling, and real-time measurements to declare a fault with prescribed confidence and given false alarm rate while predicting accurately and precisely the remaining useful life of the failing component/system. Potential benefits to industry include reduced maintenance costs, improved equipment uptime and safety. The approach is illustrated with examples from the aircraft and industrial domains.",2009,0, 4490,Fault detection of Air Intake Systems of SI gasoline engines using mean value and within cycle models,"This paper addresses the detection of faults in Air Intake Systems (AIS) of SI gasoline engines based on realtime measurements. It presents comparison of two classes of models for fault detection, namely those using a Mean Value Engine Model (MVEM) involving variables averaged over cycles and Within Cycle Crank-angle-based Model (WCCM) involving instantaneous values of variables changing with crank angle. Numerical simulation results of intake manifold leak and mass air flow sensor gain faults, obtained using the industry standard software called AMESimTM, have been used to demonstrate the fault detection capabilities of individual approaches. Based on these results it is clear that the method using WCCM has a higher fault detection sensitivity compared to one that uses MVEM, albeit at the expense of increased computational and modeling complexity.",2009,0, 4491,Dealing with Driver Failures in the Storage Stack,"This work augments MINIX 3's failure-resilience mechanisms with novel disk-driver recovery strategies and guaranteed file-system data integrity. We propose a flexible filter-driver framework that operates transparently to both the file system and the disk driver and enforces different protection strategies. The filter uses checksumming and mirroring in order to achieve end-to-end integrity and provide hard guarantees for detection of silent data corruption and recovery of lost data. In addition, the filter uses semantic information about the driver's working in order to verify correct operation and proactively replace the driver if an anomaly is detected. We evaluated our design through a series of experiments on a prototype implementation: application-level benchmarks show modest performance overhead of 0-28% and software-implemented fault-injection (SWIFI) testing demonstrates the filter's ability to detect and transparently recover from both data-integrity problems and driver-protocol violations.",2009,0, 4492,Mapping Web-Based Applications Failures to Faults,"In a world where software system is a daily need, system dependability has been a focus of interest. Nowadays, Web-based systems are more and more used and there is a lack of works about software fault representativeness for this platform. This work presents a field data study to analyze software faults found in real Java Web-based software during system test phase, performed by an independent software verification & validation team. 
The preliminary results allow us to conclude that previous classification partially fits Java Web-based software systems and must be extended to allow specific Web resources like JSP pages.",2009,0, 4493,Study of the relationship of bug consistency with respect to performance of spectra metrics,"In practice, manual debugging to locate bugs is a daunting and time-consuming task. By using software fault localization, we can reduce this time substantially. The technique of software fault localization can be performed using execution profiles of the software under several test inputs. Such profiles, known as program spectra, consist of the coverage of correct and incorrect executions statement from a given test suite. We have performed a systematic evaluation of several metrics that make use of measurement obtained from program spectra on Siemens test suite. In this paper, we discuss how the effectiveness of various metrics degrade in determining buggy statements as the bug consistency (error detection accuracy, qe) of a statement approaches zero. Bug consistency of a statement refers to the ratio of the number of failed tests executing the statement over the total number of tests executing the statement. We proposed effect(M) as to measure the effectiveness of these metrics as qe value varies. We also demonstrate that the qe (previously not considered as a metric), is just as effective as some of the metrics proposed. We also formally prove that qe is identical to the metric that Tarantula system uses for bug localization.",2009,0, 4494,Prioritizing test cases for resource constraint environments using historical test case performance data,"Regression testing has been widely used to assure the acquirement of appropriate quality through several versions of a software program. Regression testing, however, is too expensive in that, it requires many test case executions and a large number of test cases. To provide the missing flexibility, researchers introduced prioritization techniques. The aim in this paper has been to prioritize test cases during software regression test. To achieve this, a new equation is presented. The proposed equation considers historical effectiveness of the test cases in fault detection, each test case's execution history in regression test and finally the last priority assigned to the test case. The results of applying the proposed equation to compute the priority of regression test cases for two benchmarks, known as Siemens suite and Space program, demonstrate the relatively faster fault detection in resource and time constrained environments.",2009,0, 4495,Modeling and reconstruction of the statistical data from a system electronic application for administrative management in IPN through the filter of Kalman,"On the Management of Human Capital at the National Polytechnic Institute are developing information systems that seek improvements in administrative procedures, ensuring the acquisition complete of date, reliable and opportune, using technology available at the WEB. This aims to provide a quality service to more than 24,800 employees. In this case we propose the application forms via the Internet, because these employees carry data and digital documents. 
However, to provide the best service and sizing the server, it is necessary to analyze the accesses to the database and rebuild the system; for this is necessary to realize measurements using a free software tool and to analyze it as a system with parameter variations in time represented by an ARMA model using the Kalman filter as parameter estimator with good results.",2009,0, 4496,Vehicle suspensions performance testing system based on virtual instrument,"On the base of the vibration mechanical model and the motion equation of automobile suspension built. Analyze and bring forward the performance evaluation index of the suspension, and put forward the new testing method of the suspension performance. Using the virtual instrument technology, we develop the automobile suspension performance testing system. It adopts the PC-DAQ scheme. On the base of the original excitation suspension performance testing table we equip the table with some necessary sensors, the personal computer and the virtual software. Then the automobile suspension performance testing system is built. It can measure the absorptivity, the vibration frequency, the phase difference and the corresponding vibration waveform and so on. It also provides the foundation for the integrated estimation of the suspension performance and fault diagnosis. By testing cars in practice, the academic model is proved accurate and feasible and the testing system is exact and credible.",2009,0, 4497,Design of embedded laser beam measurement system,"In view of disadvantages lying in traditional laser beam quality measurement, an evaluation system was developed which can be used to detect laser beam quality automatically, M2 factor was used to evaluate the laser beam quality which was suggested by ISO. This system can be used in the laboratory and industry field, which collected image data of the laser speckle by image sampling system, stored image data and transmitted them to the embedded processor ARM11 under the control of FPGA, which is the abbreviation of field programmable gate array and DSP which is the abbreviation of digital signal processor, then run the image processing application software, executed laser parameter calculating algorithm, imitated laser beam transmitting hyperbolic curve in space, calculated the value of M2. Results showed that this system had advantages such as small size, light scale, low cost, simple operation, easy use, high measurement precision and so on, which had better application and popularize value.",2009,0, 4498,Comparing apples and oranges: Subjective quality assessment of streamed video with different types of distortion,"Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used to estimate the relative quality differences, but they typically give reliable results only if the compared videos contain similar type of quality distortion. However, video compression typically produces different kinds of visual artifacts than transmission errors. In this paper, we propose a novel subjective quality assessment method that is suitable for comparing different types of quality distortions. The proposed method has been used to evaluate how well PSNR estimates the relative subjective quality levels for content with different types of quality distortions. 
Our conclusion is that PSNR is not a reliable metric for assessing the co-impact of compression artifacts and transmission errors on the subjective quality.",2009,0, 4499,Towards a computational quality model for IP-based audio,"This paper proposes a methodology for the development of a computational quality model for IP-based audio. The project mainly focuses on two questions: 1) choice of a quality measurement scale in the computational model and mapping of this scale with existing audio quality measurement metrics, and 2) quantification of encoding and packet loss impairments (only two quality degradation parameters are considered; others will be discussed in future work). The objective PEAQ algorithm in the SEAQ software implementation is used for the parameter quantification and the model accuracy estimation.",2009,0, 4500,Modelling of partial discharge activity in different spherical cavity sizes and locations within a dielectric insulation material,"The pattern of partial discharge (PD) occurrence at a defect site within a solid dielectric material is influenced by the conditions of the defect site. This is because the defect conditions, mainly its size and location determine the electric field distributions at the defect site which influence the patterns of PD occurrence. A model for a spherical cavity within a homogeneous dielectric material has been developed by using Finite Element Analysis (FEA) software. The model is used to study the influence of different conditions of the cavity on the electric field distribution in the cavity and the PD activity. In addition, experimental measurements of PD in spherical cavities of different size within a dielectric material have been undertaken. The obtained results show that PD activity depends on the size of the cavity within the dielectric material.",2009,0, 4501,Adaptive Control Framework for Software Components: Case-Based Reasoning Approach,"The proposed architecture for an adaptive software system is based on a multi-threaded programming model. It includes two basic logic components: a database for selected case gathering and a decision making sub-system. Using CBR-methods, a control procedure for the decision-making sub-system is worked out, which uses a database for selected case gathering. A respective CBR algorithm determines the number of threads needed for guaranteed predicted performance (measured in ms), and reliability (defined in %).",2009,0, 4502,Towards a Unified Behavioral Model for Component-Based and Service-Oriented Systems,"There is no clear distinction between service-oriented systems (SOS) and component-based systems (CBS). However, there are several characteristics that could let one consider SOS as a step further from CBS. In this paper, we discuss the general features of CBS and SOS, while accounting for behavioral modeling in the language called REMES. First, we present REMES in the context of CBS modeling, and then we show how it can become suitable for SOS. We also discuss the relation between our model and the current state of the art.",2009,0, 4503,Test Selection Prioritization Strategy,"A wide divergence is observed in projects between test activities planned in the test plan and the actual tests that can be executed. Estimates for test execution computed during the planning are inaccurate without test design. The actual time and resources available are usually less than planned. 
Assuming that time and resources cannot be changed, a dynamic selection of tests for execution that maximizes quality is required.",2009,0, 4504,Predicting Change Impact in Object-Oriented Applications with Bayesian Networks,"This study has to be considered as another step towards the proposal of assessment/predictive models in software quality. We consider in this work, that a probabilistic model using Bayesian nets constitutes an interesting alternative to non-probabilistic models suggested in the literature. Thus, we propose in this paper a probabilistic approach using Bayesian networks to analyze and predict change impact in object-oriented systems. An impact model is built and probabilities are assigned to network nodes. Data obtained from a real system are exploited to empirically study causality hypotheses between some software internal attributes and change impact. Several scenarios are executed on the network, and the obtained results confirm that coupling is a good indicator of change impact.",2009,0, 4505,Applicability of Software Reliability Growth Modeling in the Quality Assurance Phase of a Large Business Software Vendor,Software reliability growth modeling aims to use data on experienced failures for prediction of future quality levels. We analyze the applicability of this approach in the context of business software that is still in the quality assurance phase and has not been released to the customer yet. We find that the approach is not applicable in our research setup that is characterized by an agile process organization. We identify qualitative reasons why the models' assumptions are violated in our industrial case study and conduct a quantitative analysis whether data availability challenges can be addressed by mapping issue tracking data as the only available data source to the required system under test execution time. The derived metrics support managers in their decision whether their processes are suited for software reliability growth modeling.,2009,0, 4506,Improvements for 802.11-Based Location Fingerprinting Systems,"Position estimation with 802.11 and location fingerprinting has been a topic in research for quite some time already. But there are still some unaddressed issues that are the reason why such systems are not widely used. First, the positioning accuracy still leaves space for improvements. Second, users generally have no information about the quality of the estimated position. Especially in cases where the positioning error is large, the user's trust in the system suffers if he is not notified. Third, most systems that rely on a location fingerprinting approach need a time and effort consuming setup phase in which the training data has to be collected and processed. In this paper, we give an overview of existing possibilities to improve location fingerprinting systems under each of these three aspects. Several alternative solutions for each problem are presented. Finally, a discussion gives an overall picture of the usability of the solutions when considering the system as a whole.",2009,0, 4507,Research and Application on Automatic Network Topology Discovery in ITSM System,"Automatic network topology discovery is important for network management and network analysis in ITSM system. Issues like network environment configuration, performance testing and fault detecting all require accurate information of the network topology. 
According to the requirement of the ITSM system, in this paper we propose an efficient algorithm that is based on SNMP for discovering Layer 3 and Layer 2 topology of the network. The proposed algorithm only requires SNMP to be enabled on routers and managed switches. For Layer 3 topology discovery, this paper proposes a shared subnet based approach to analyze connections between routers. In addition, the algorithm uses a STP based method to discover Layer 2 topology, which does not require the completeness of switch's AFT information. The algorithm has been implemented and tested in several enterprise-level networks, and the results demonstrate that it discovers the network topology accurately and efficiently.",2009,0, 4508,Research of a Service Monitoring System Based on SIP in Hybrid Network,"In IPv4 and IPv6 network, there are especially shortages of Application Performance Monitor system, which can help system owners to monitor and diagnostic system performance and provide warning message. In the paper, we have designed and implemented a SIP service monitoring scheme (SSMS) that includes a Service Detection System (SDS) and Real-time Alert System (RAS). The Service Detection System simulates the way of applied service access to proceed the quality testing in order to increase the credibility, while the real-time alert system utilizes the way of VoIP to inform the service provider and to confirm the alarm message.",2009,0, 4509,A Service Self-Optimization Algorithm based on Autonomic Computing,"Under the intrusion or abnormal attack, how to autonomously supply undergraded service to users is the ultimate goal of network security technology. Firstly, combined with martingale difference principle, a service self optimization algorithm based on autonomic computing-S2OAC is proposed. Secondly, according to the prior self optimizing knowledge and parameter information of inner environment, S2OAC searches the convergence trend of self optimizing function and executes the dynamic self optimization, aiming at minimum the optimization mode rate and maximum the service performance. Thirdly, set of the best optimization mode is updated and prediction model is renewed, which will implement the static self optimization and improve the accuracy of self optimization prediction. At last, the simulation results validate the efficiency and superiority of S2OAC.",2009,0, 4510,On the integration of protein contact map predictions,"Protein structure prediction is a key topic in computational structural proteomics. The hypothesis that protein biological functions are implied by their three-dimensional structure makes the protein tertiary structure prediction a relevant problem to be solved. Predicting the tertiary structure of a protein by using its residue sequence is called the protein folding problem. Recently, novel approaches to the solution of this problem have been found and many of them use contact maps as a guide during the prediction process. Contact map structures are bidimensional objects which represent some of the structural information of a protein. Many approaches and bioinformatics tools for contact map prediction have been presented during the past years, having different performances for different protein families.
In this work we present a novel approach based on the integration of contact map predictions in order to improve the quality of the predicted contact map with a consensus-based algorithm.",2009,0, 4511,Next-Generation Power Information System,"Power monitoring systems (power quality monitors, digital fault recorders, digital relays, advanced controllers, etc.) continue to get more powerful and provide a growing array of benefits to overall power system operation and performance evaluation. Permanent monitoring systems are used to track ongoing performance, to watch for conditions that could require attention, and to provide information for utility and customer personnel when there is a problem to investigate. An important development area for these monitoring systems is the implementation of intelligent systems that can automatically evaluate disturbances and conditions to make conclusions about the cause of a problem or even predict problems before they occur. This paper describes the development of a Next Generation Power Information System that will provide an open platform for implementing advanced monitoring system applications that involve integration with many different data sources. The work builds on many years of experience with software for management and analysis of large power quality monitoring systems.",2009,0, 4512,Quantifying the Potential of Program Analysis Peripherals,"Tools such as multi-threaded data race detectors, memory bounds checkers, dynamic type analyzers, data flight recorders, and various performance profilers are becoming increasingly vital aids to software developers. Rather than performing all the instrumentation and analysis on the main processor, we exploit the fact that increasingly high-throughput board level interconnect is available on many systems, a fact we use to off-load analysis to an off-chip accelerator. We characterize the potential of such a system to both accelerate existing software development tools and enable a new class of heavyweight tools. There are many non-trivial technical issues in taking such an approach that may not appear in simulation, and to flush them out we have developed a prototype system that maps a DMA based analysis engine, sitting on a PCI-mounted FPGA, into the Valgrind instrumentation framework. With our novel instrumentation methods, we demonstrate that program analysis speedups of 29% to 440% could be achieved today with strictly off-the-shelf components on some of the state-of-the-art tools, and we carefully quantify the bottlenecks to illuminate several new opportunities for further architectural innovation.",2009,0, 4513,A Domain Specific Language in Dependability Analysis,"Domain specific languages gain increasing popularity as they substantially leverage software development by bridging the gap between technical and business area. After a domain framework is produced, experts gain an effective vehicle for assessing quality and performance of a system in the business-specific context. We consider the domain to be dependability of multi-agent system (MAS), for which a key requirement is an efficient verification of a topology model of a power system. As a result, we come up with a reliability evaluation solution offering a significant rise in the level of abstraction towards MAS utilized for purposes of a power system topology verification. By means of the mentioned solution safety engineers are enabled to perform analysis while the design is still incomplete. 
A new DSL is developed in XText in order to specify a structure of the system together with dependability extensions, which are further translated into dynamic fault trees using model to model transformations. The Eclipse Ecore becomes a common denominator, in which both metamodels' abstract syntax trees are defined. Finally, an expert is offered with two ways of defining a model: through abstract and textual concrete syntax, both of which are checked for consistency using object constraint language.",2009,0, 4514,A Model Based Framework for Specifying and Executing Fault Injection Experiments,"Dependability is a fundamental property of computer systems operating in critical environment. The measurement of dependability (and thus the assessment of the solutions applied to improve dependability) typically relies on controlled fault injection experiments that are able to reveal the behavior of the system in case of faults (to test error handling and fault tolerance) or extreme input conditions (to assess robustness of system components). In our paper we present an Eclipse-based fault injection framework that provides a model-based approach and a graphical user interface to specify both the fault injection experiments and the run-time monitoring of the results. It automatically implements the modifications that are required for fault injection and monitoring using the Javassist technology, this way it supports the dependability assessment and robustness testing of software components written in Java.",2009,0, 4515,A Comparison of Structural Testing Strategies Based on Subdomain Testing and Random Testing,"Both partition testing and random testing methods are commonly followed practice towards selection of test cases. For partition testing, the program's input domain is divided into subsets, called subdomains, and one or more representatives from each subdomain are selected to test the program. In random testing test cases are selected from the entire program's input domain randomly. The main aim of the paper is to compare the fault-detecting ability of partition testing and random testing methods. The results of comparing the effectiveness of partition testing and random testing may be surprising to many people. Even when partition testing is better than random testing at finding faults, the difference in effectiveness is marginal. Using some effectiveness metrics for testing and some partitioning schemes this paper investigates formal conditions for partition testing to be better than random testing and vice versa.",2009,0, 4516,Scrum and CMMI Going from Good to Great,"Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. Systematic institutionalized Scrum across all projects and used data driven tools like story process efficiency to surface product backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level.
Our experience shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota level performance - 4 times the productivity and 12 times the quality of waterfall teams.",2009,0, 4517,Study on application of three-dimensional laser scanning technology in forestry resources inventory,"Terrestrial laser scanners, as efficient tools, have opened a wide range of application fields within a rather short period of time. Beyond interactive measurement in 3D point clouds, techniques for the automatic detection of objects and the determination of geometric parameters form a high priority research issue. The quality of 3D point clouds generated by laser scanners and the automation potential make terrestrial laser scanning also an interesting tool for forest inventory and management. The paper will first review current laser scanner systems from a technological point of view and discuss different scanner technologies and system parameters regarding their suitability for forestry applications. In the second part of the paper, results of a pilot study on the applicability of terrestrial laser scanners in forest inventory tasks will be presented. The study concentrates on the automatic detection of trees and the subsequent determination of tree height and diameter at breast height. Reliability and precision of techniques for automatic point cloud processing were analysed based on scans of a test region in Harbin Experimental Forest Farm. In the pilot study, which represents an early stage of software development, more than 95% of the trees in a test region could be detected correctly. Tree heights could be determined with a precision of 80 cm, and breast height diameters could be determined with a precision of less than 1.5 cm.",2009,0, 4518,Efficient fault-prone software modules selection based on complexity metrics,"In order to improve software reliability early, this paper proposes an efficient algorithm to select fault-prone software modules. Based on software modules' complexity metrics, the algorithm uses a modified cascaded-correlation algorithm as a neural network classifier to select the fault-prone software modules. Finally, by analyzing the algorithm's application in the project MAP, the paper shows the advantage of the algorithm.",2009,0, 4519,The research on fault equivalent analysis method in testability experiment validation,"With the existing fault injection techniques, many faults that can fully expose testability design defects cannot be injected. To solve this problem, a method of fault equivalent analysis is proposed. By this means, some characteristics are extracted from the faults that are unable to be injected, and ""yield analysis"" or ""yielded analysis"" is performed. Then the minimal cut sets of atom faults are obtained and selected, which are finally equivalent to the atom fault sequence. Applications show that the method not only solves the problem that many faults are not able to be injected, but also ensures the effect of testability experiment validation.",2009,0, 4520,Software task processing with dependent modules and some measures,"This paper proposes a new model by combining an infinite-server queueing model for a multi-task processing software system with a perfect debugging model based on a Markov process with two types of faults suggested by Lee, Nam and Park.
We apply this model for module and integration testing in the testing process. Also, we compute the expected number of tasks whose processes can be completed and investigate the task completion probability under the proposed model. We interpret the meaning of the interaction between modules in software composed of dependent modules.",2009,0, 4521,Software FMEA approach based on failure modes database,"A classification method of software failure modes based on the software IPO process is presented. Then two databases, called the general failure modes database (GFMD) and the special failure modes database (SFMD), are proposed based on this classification method. Furthermore, a new approach to software FMEA which is based on GFMD and SFMD is presented. This approach, which makes the analysis process of FMEA more operable and the failure modes obtained from analysis more comprehensive, improves the efficiency of Software FMEA. Meanwhile, GFMD and SFMD also offer a platform for the analyzers to accumulate and share their experience. The case study shows that the approach presented in this paper is effective in practice.",2009,0, 4522,An approach of software quality prediction based on relationship analysis and prediction model,"By predicting the quality of the software that will be formed in the early stage of development, faults brought in at the phase of design will be found out early in order not to leave them in the software product. Furthermore, it will be easy for designers to adopt appropriate plans based on specific expectations of the target software. However, the traditional prediction models have the following shortcomings: 1) the relationship between attributes and metrics cannot be expressed effectively; 2) they lack the ability to process data both qualitatively and quantitatively; 3) they are not appropriate for cases with incomplete information. In this paper, a model built based on a fuzzy neural network is proved to be good at quality prediction of object-oriented software.",2009,0, 4523,Three types of fault coverage in multi-state systems,"Fault-tolerance is an essential architectural attribute for achieving high reliability in many critical applications of digital systems. Automatic fault and error handling mechanisms play a crucial role in implementing fault tolerance because an uncovered (undetected) fault may lead to a system or a subsystem failure even when adequate redundancy exists. Examples of this effect can be found in computing systems, electrical power distribution networks, pipelines carrying dangerous materials, etc. Because an uncovered fault may lead to overall system failure, an excessive level of redundancy may even reduce the system reliability. We consider three types of coverage models: 1. element level coverage, where the fault coverage probability of an element does not depend on the states of other elements; 2. multi-fault coverage, where the effectiveness of recovery mechanisms depends on the coexistence of multiple faults in a group of elements that collectively participate in detecting and recovering the faults in that group; 3. performance dependent coverage, where the effectiveness of recovery mechanisms in a group depends on the entire performance level of this group. The paper presents a modification of the generalized reliability block diagram (RBD) method for evaluating reliability and performance indices of complex multi-state series-parallel systems with all these types of fault coverage.
The suggested method based on a universal generating function technique allows the system performance distribution to be obtained using a straightforward recursive procedure.",2009,0, 4524,An end-to-end approach for the automatic derivation of application-aware error detectors,"Critical Variable Recomputation (CVR) based error detection provides high coverage for data critical to an application while reducing the performance overhead associated with detecting benign errors. However, when implemented exclusively in software, the performance penalty associated with CVR based detection is unsuitably high. This paper addresses this limitation by providing a hybrid hardware/software tool chain which allows for the design of efficient error detectors while minimizing additional hardware. Detection mechanisms are automatically derived during compilation and mapped onto hardware where they are executed in parallel with the original task at runtime. When tested using an FPGA platform, results show that our approach incurs an area overhead of 53% while increasing execution time by 27% on average.",2009,0, 4525,Automatic fault detection and diagnosis in complex software systems by information-theoretic monitoring,"Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In this paper we use normalized mutual information as a similarity measure to identify clusters of correlated metrics, without knowing the specific form. We show how we can apply the Wilcoxon rank-sum test to identify anomalous behaviour. We present two diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, and SigScore, which incorporates knowledge of component dependencies. We evaluate our mechanisms in the context of a complex enterprise application. Through fault injection experiments, we show that we can detect 17 out of 22 faults without any false positives. We diagnose the faulty component in the top five anomaly scores 7 times out of 17 using SigScore, which is 40% better than when system structure is ignored.",2009,0, 4526,Improving students' hardware and software skills by providing unrestricted access to state of the art design tools and hardware systems,"The technology and CAD tools employed by industry to design digital hardware evolve quickly and continuously. Well prepared engineers, who are able to produce actual designs and adapt themselves to the global world, are in demand. Educational programs must keep pace with technologies in common use in order to produce graduates who are competitive in the marketplace. Studies conducted at two different universities, Rose Hulman Institute of Technology and Washington State University measure changes in student performance when all students have unlimited access to state of the art design tools and hardware systems. Data are collected from surveys, exams, and course assignments. Quantitative data are analyzed by comparison to historical data gathered from student groups that did not have unlimited access to hardware systems, and qualitative data are used to determine the subjective quality of each student's experience. 
Outcomes include: assessing whether the overall learning process is improved; whether students have a better knowledge of modern technologies and design methods; whether their comprehension of founding concepts has improved or faltered.",2009,0, 4527,Relative evaluation of partition algorithms for complex networks,"Complex network partitioning consists in identifying denser groups of nodes. This popular research topic has applications in many fields such as biology, social sciences and physics. This has led to many different partition algorithms, most of them based on Newman's modularity measure, which estimates the quality of a partition. Until now, these algorithms were tested only on a few real networks or unrealistic artificial ones. In this work, we use the more realistic generative model developed by Lancichinetti et al. to compare seven algorithms: Edge-betweenness, Eigenvector, Fast Greedy, Label Propagation, Markov Clustering, Spinglass and Walktrap. We used normalized mutual information (NMI) to assess their performances. Our results show Spinglass and Walktrap are above the others in terms of quality, while Markov Clustering and Edge-Betweenness also achieve good performance. Additionally, we compared NMI and modularity and observed they are not necessarily related: some algorithms produce better partitions while getting lower modularity.",2009,0, 4528,Seam Reconstruct: Dynamic scene stitching with Large exposure difference,"Panoramic stitching of static and dynamic scenes is a very important and challenging research area. For static scenes, a number of approaches have been proposed so far. The result produced by these approaches provides great similarity between the resultant panorama and input images with almost zero seam visibility. However, for dynamic scenes, we have found that existing approaches are unable to overcome the challenges introduced due to the simultaneous presence of (A) Large exposure difference and (B) Moving Objects. In this paper we propose a Seam-Reconstruction technique which overcomes these limitations. The Seam-Reconstruction technique is a two-step approach. The first step resolves the position of moving objects and prevents ghosting due to parallax by building a reference panorama with the help of an optimal seam evaluation technique, while the second step removes the remaining differences along the seam with the help of Poisson's equation. Experimental results show that we were able to stitch a dynamic scene containing large exposure differences while maintaining quality as well as photometric and geometric similarity.",2009,0, 4529,Matching schemas of heterogeneous relational databases,"Schema matching is a basic problem in many database application domains, such as data integration. The problem of schema matching can be formulated as follows: ""given two schemas, Si and Sj, find the most plausible correspondences between the elements of Si and Sj, exploiting all available information, such as the schemas, instance data, and auxiliary sources"". Given the rapidly increasing number of data sources to integrate and due to database heterogeneities, manually identifying schema matches is a tedious, time consuming, error-prone, and therefore expensive process. As systems become able to handle more complex databases and applications, their schemas become large, further increasing the number of matches to be performed. Thus, automating this process, so that it becomes faster and less labor-intensive, has been one of the main tasks in data integration.
However, it is not possible to determine fully automatically the different correspondences between schemas, primarily because of the differing and often not explicated or documented semantics of the schemas. Several solutions to the issues of schema matching have been proposed. Nevertheless, these solutions are still limited, as they do not explore most of the available information related to schemas and thus affect the result of integration. This paper presents an approach for matching schemas of heterogeneous relational databases that utilizes most of the information related to schemas, which indirectly explores the implicit semantics of the schemas and further improves the results of the integration.",2009,0, 4530,Assessing easiness with Froglingo,"Expressive power is well established as a dimension for measuring the quality of a computer language. Easiness is another dimension. It is the main stream of development in programming languages and database management. However, there has not been a method to precisely measure the easiness of a computer language. This article discusses easiness. Provided that a data model is easier than a programming language in representing the semantics of the given data model, this article concludes that Froglingo is the easiest in database application development and maintenance.",2009,0, 4531,Design and data processing of a real-time power quality monitoring instrument,"Power quality (PQ) monitoring is an important issue for electric utilities and many industrial power customers. This paper presents a new distributed monitoring instrument based on Digital Signal Processing (DSP) and virtual measuring technology. The instrument is composed of an EOC module, a Feeder Control Unit (FCU) and a Supervision Unit (SU). The EOC module is used to implement the data acquisition and high speed data transmission of busbar voltage; the FCU performs data acquisition and processing of feeders; the SU, based on virtual measuring technology, is used to further analyze and process power quality parameters and achieves data storage and management. Wavelet transformation, which is implemented in the SU, detects transient power quality disturbances, while digital filtering, windowing, the software fixed-frequency sampling method, linear interpolation and Fast Fourier Transformation (FFT) are realized by the DSP. Therefore, the monitoring instrument not only implements real-time, comprehensive, high-precision monitoring and management of all power quality parameters, but also helps confirm the source of the disturbance, improve the quality of power supply and increase the stability performance of the power system.",2009,0, 4532,Research and design of IUR76-II test system for infrared flame detectors,"The IUR76-II performance test system for infrared flame detectors is based on the National Standard of P.R.C, which is named 'performance requirements and test methods for point infrared flame detectors' (GB15631-1995). The architecture of the new test system includes an infrared point source, optical track, modulator, optical filter device, 4-dimension holder, optical simulators, data collector, and computer. The common test consists of response threshold measurement, response time measurement, direction measurement, surrounding optical interference test, and power on test. In the new test system, a SiC infrared source is used to replace the fire of the combusting gasified gasoline, which is used in the national standard.
The stability of the SiC infrared source is smaller than 0.1%, which is much better than 20% (the stability of the combusting gasified gasoline). The optical filter device is equipped with a stepper motor. Four bands of infrared waves can pass through the optical filter device: a 4.2 μm~4.7 μm wide band and 4.32 μm, 4.45 μm, and 4.58 μm narrow bands. The data collector and computer software are used in the new test system to analyze measurement results and obtain quality reports for the DUT. Based on the experimental procedure and results, the new test system is proved to be safer, more convenient, and more precise than the system proposed by the national standard. This new test system for infrared flame detectors is applied in testing the point infrared flame detectors equipped in industrial or civilian buildings.",2009,0, 4533,The design of online system for multi-range measuring sediment concentration,"According to actual sediment testing needs, an online system is designed to measure sediment concentration. In this system, two testing methods will be selected for different sediment ranges. When the water contains low sediment, the system will choose photoelectric-based detection methods to measure. It has a low measurement range, but has a high accuracy. When the water contains high sediment, the system will choose capacitive differential pressure detection methods to measure. On the contrary, it has a lower precision, but has a higher measurement range. This paper particularly introduces the photoelectric testing technology, the principle of the capacitive differential pressure sensor, the PLC hardware design, sensor inclination correction and the monitoring software design. The system can overcome the effects of different environments, and work steadily. This system is more suitable for measuring sediment concentration in reservoir dredging, water quality management, slurry treatment, etc.",2009,0, 4534,Research on harmonic current distribution of Beijing's rail traffic power supply system,"This paper will discuss the monitoring and data analysis of nonlinear load power quality by analyzing the electrical railway, which is a typical nonlinear load in Beijing. Firstly, this paper will introduce a method of designing the software and hardware of a highly accurate and large-capacity energy quality data recorder, which is capable of accurately recording voltage, electric current waveforms, and other data on site for a long time and can restore the original data with an upper computer in the lab. Secondly, this paper will analyze the data collected by the energy quality data recorder from the Beijing Xiazhuang transformer substation, which is responsible for supplying electricity to the Daqin railway and is a typical nonlinear electrical railway load. On the one hand, this paper presents the complete data of fundamental current and harmonic current in a typical working day (as shown in graphs) and analyzes the correlation between fundamental current and main harmonic current. On the other hand, this paper also analyzes the amplitude of main harmonic current distribution with statistical methods and establishes an autoregressive (AR) model to illustrate the distribution of main harmonic current.
From the above analysis, we can obtain the key data for evaluating the impact generated by this transformer substation on the grid's electrical energy quality.",2009,0, 4535,Design of a remote-monitoring system of MOA based on GPRS,"By measuring the resistive current of Metal-oxide Arresters (MOA) with on-line monitoring technology, we can understand the performance condition of the MOA at any time without unnecessary power-cut-off overhaul. Thus we can detect abnormal phenomena and hidden accidents of the MOA in time, take measures in advance to prevent an accident from getting worse and avoid the economic loss resulting from the accident. To overcome the defect of this transmission method, a new method for long-distance monitoring of metal-oxide arresters is presented. The monitoring system consists of an on-spot collection module, a GPRS transmission module and a long-distance monitoring center. The hardware and software design of the TMS320F2812 microprocessor for the data collection and processing module is introduced. Data reports can be generated in this system. The state of the MOA can be inspected conveniently and accurately. The electric power system can run in a reliable state.",2009,0, 4536,Research on location of single-phase earth fault based on pulse injection method in distribution network,"A method of fault location based on pulse signal injection is proposed in this paper, which is independent of factors such as system operating mode, topology, neutral grounding and random faults; moreover, the site of signal injection and the width and period of the pulse are flexible and adjustable. In this paper, the design scheme of the software and hardware for the signal source is proposed, a high-voltage pulse generator is implemented by adopting the C8051F310 MCU, and the signal detector is designed based on the principle of electromagnetic induction.",2009,0, 4537,Real-time power quality monitoring and analysis platform of rural power network based on virtual instrument,"A new design proposal for a real-time power quality monitoring and analysis platform based on virtual instrument technology is proposed that aims at the key issues affecting the quality of the rural power network. From steady state to transient state, this article discusses the typical algorithms of harmonic analysis, voltage deviation and unbalance in three-phase systems, voltage swells and dips, voltage fluctuation, transient oscillation and the theory of power quality disturbance detection based on Hilbert phase-shifting. The software design of the power quality monitoring and analysis platform is completed based on NI (National Instruments) LabVIEW, which includes the visual design of the monitoring and analysis algorithms, graphical display of the calculations, and analysis and display of the results; as an auxiliary, we also use the FLUKE standard power source. The actual results show that the test results of our platform are accurate, the interfaces are friendly and the performance is steady and practical.",2009,0, 4538,Research and design of IUR76-I test system for ultraviolet flame detectors,"The IUR76-I performance test system for ultraviolet (UV) flame detectors is based on the National Standard of P.R.C, which is named 'performance requirements and test methods for point ultraviolet flame detectors' (GB12791-199Gamma). The architecture of the new test system includes an ultraviolet point source, optical track, modulator, optical filter device, 4-dimension holder, optical simulators, data collector, and computer.
The common test consists of response threshold measurement, response characteristic curve measurement, direction measurement, surrounding optical interference test, and power on test. In the new test system, a deuterium lamp is used as the ultraviolet point source to replace the fire of the combusting gasified gasoline, which is used in the national standard. The stability of the deuterium lamp is smaller than 0.05%, which is much better than 20% (the stability of the combusting gasified gasoline). The optical filter device is equipped with a stepper motor. Four bands of ultraviolet waves can pass through the optical filter device: a 200 nm~280 nm wide band and 210 nm, 254 nm, and 270 nm narrow bands. The data collector and computer software are used in the new test system to analyze measurement results and obtain quality reports for the DUT. Based on the experimental procedure and results, the new test system is proved to be more convenient, safer, and more precise than the system proposed by the national standard. This new test system for ultraviolet flame detectors is applied in testing the point ultraviolet flame detectors equipped in industrial or civilian buildings.",2009,0, 4539,Phase-based voltage dip classification using a new classification Algorithm,"This paper is concerned with the development and implementation of an improved algorithm for dip classification. It can determine the type of dip based on the difference in phase angle between the measured voltages using Zinari's Algorithm. The improved algorithm overcomes the shortcomings of existing conventional dip classification methods which are based solely on the duration and magnitude of the voltage deviation. A software program was developed using Matlab/Simulink and the performance of the algorithm was checked over the full range of phase shift and dip magnitude. Moreover, the algorithm was implemented in a laboratory setup, and various dip types were simulated and then classified using the improved algorithm.",2009,0, 4540,Assessment of flicker impact of fluctuating loads prior to connection,"With the acceptance of IEEE Std. 1453, many utility companies in North America are facing problems in applying the recommended concepts. These problems center around difficulties in predicting flicker produced by specific customers or loads before they are connected. The discussions in this paper are intended to serve as an overview of some of the methods that are commonly used, mostly outside North America, to perform pre-connection flicker assessments of fluctuating loads.",2009,0, 4541,"Investigation of residential customer safety-line, neutral and earth integrity","Reverse polarity and neutral failures can produce potentially dangerous voltage levels within an electrical consumer's premises. While earthing at the consumer's premises is normally good during the installation, it may degrade over time. Existing conventional electromechanical energy meters do not detect such conditions at the consumer premises. Hence, an accurate detection of conditions such as reverse polarity, earthing and neutral failure and degradation is essential for safe and reliable operation of a household electrical system. It is highly desirable that a protection system is designed such that it should detect such conditions accurately and it should not be oversensitive, as this could lead to an unnecessarily unacceptable high level of ""nuisance"" operation.
In addition, such a solution should be reliable, economical and easily adoptable into existing premises without any major modification to the installation. This paper is intended to derive various necessary indices to detect neutral and earthing failure or degradation and reverse polarity conditions at the electrical consumer's premises. The simulation is carried out with the MATLAB®-SIMULINK® software with the SimPowerSystems™ toolbox. These indices can be integrated into a smart meter or similar device to accurately detect earthing and neutral failure or degradation and reverse polarity conditions at consumer premises.",2009,0, 4542,An Adaptive mimic filter-based algorithm for the detections of CT saturations,"An adaptive mimic filter-based algorithm for detecting the saturation of current transformers (CTs) has been presented and implemented in this paper. First, an adaptive method was developed for obtaining the line impedance of a digital mimic filter. The variations of the obtained line impedance were then used to detect the CT saturations. By using the proposed algorithm, the saturation period of a current waveform can be accurately detected. This paper finally utilized the MATLAB/SIMULINK software and a DSP-based environment to verify the proposed algorithm. Test results show the proposed algorithm can accurately detect the CT saturations.",2009,0, 4543,Geometrical approach on masked gross errors for power systems state estimation,"In this paper, a geometrically based index, called the undetectability index (UI), that quantifies the inability of the traditional normalized residue test to detect single gross errors is proposed. It is shown that the error in measurements with high UI is not reflected in their residues. This masking effect is due to the ""proximity"" of a measurement to the range of the Jacobian matrix associated with the power system measurement set. A critical measurement is the limit case of a measurement with high UI, that is, it belongs to the range of the Jacobian matrix, has an infinite UI index, and its error is totally masked and cannot be detected in the normalized residue test at all. The set of measurements with high UI contains the critical measurements and, in general, the leverage points; however, there exist measurements with high UI that are neither critical nor leverage points and whose errors are masked by the normalized residue test. In other words, the proposed index presents a more comprehensive picture of the problem of single gross error detection in power system state estimation than critical measurements and leverage points. The index calculation is very simple and is performed using routines already available in the existing state estimation software. Two small examples are presented to show the way the index works to assess the quality of measurement sets in terms of single gross error detection. The IEEE-14 bus system is used to show the efficiency of the proposed index to identify measurements whose errors are masked by the estimation processing.",2009,0, 4544,Hierarchical fault detection and diagnosis for unmanned ground vehicles,"This paper presents a fault detection and diagnosis (FDD) method for unmanned ground vehicles (UGVs) operating in multi-agent systems. The hierarchical FDD method consisting of three layered software agents is proposed: Decentralized FDD (DFDD), centralized FDD (CFDD), and supervisory FDD (SFDD).
Whereas the DFDD is based on modular characteristics of sensors, actuators, and controllers connected or embedded to a single DSP, the CFDD is to analyze the performance of vehicle control system and/or compare information between different DSPs or connected modules. The SFDD is designed to monitor the performance of UGV and compare it with local goal transmitted from a ground station via wireless communications. Then, all software agents for DFDD, CFDD, and SFDD interact with each other to detect a fault and diagnose its characteristics. Finally, the proposed method will be validated experimentally via hardware-in-the-loop simulations.",2009,0, 4545,Enterprise architecture dependent application evaluations,"Chief information officers (CIO) are faced with the increasing complexity of application landscapes. The task of specifying future architectures depends on an exact assessment of the current state. When CIOs have to evaluate the whole landscape or certain applications of it, they are facing an enormous challenge, since there is no common methodology for that purpose. Within this paper we address this task and present an approach for enterprise architecture dependent application evaluations. We outline why it is important not only to assess single applications by classical software metrics in order to get an indication of their quality, but also to regard the overall context of an enterprise application. To round up this contribution we present a brief example from a project with one of our industrial partners.",2009,0, 4546,Robust Color Image Enhancement of Digitized Books,"We present a spatially variant framework for correcting uneven illumination and color cast, problems commonly associated with digitized books. The core of our method is a color image segmentation algorithm based on watershed transform and noise thresholding. We propose a threshold estimation method using the histogram of the gradient magnitude. The critical parameters of the segmentation are further constrained by statistical analysis of multiple book pages conducted in a bootstrap pass. We demonstrate the consistent performance of the proposed method on books with wide-ranging content and image quality.",2009,0, 4547,Chlorophyll measurement from Landsat TM imagery,"Water quality is an important factor for human health and quality of life. This has been recognized many years ago. Remote sensing can be used for various purposes. Environmental monitoring through the method of traditional ship sampling is time consuming and requires a high survey cost. This study uses an empirical model, based on actual water quality of chlorophyll measurements from the Penang Strait, Malaysia to predict chlorophyll based on optical properties of satellite digital imagery. The feasibility of using remote sensing technique for estimating the concentration of chlorophyll using Landsat satellite imagery in Penang Island, Malaysia was investigated in this study. The objective of this study is to evaluate the feasibility of using Landsat TM image to provide useful data for the chlorophyll mapping studies. The chlorophyll measurements were collected simultaneously with the satellite image acquisition through a field work. The in-situ locations were determined using a handheld Global Positioning Systems (GPS). The surface reflectance values were retrieved using ATCOR2 in the PCI Geomatica 10.1.3 image processing software. 
And then the digital numbers for each band corresponding to the sea-truth locations were extracted and then converted into radiance values and reflectance values. The reflectance values were used for calibration of the water quality algorithm. The efficiency of the proposed algorithm was investigated based on the observations of correlation coefficient (R) and root-mean-square deviations (RMS) with the sea-truth data. Finally the chlorophyll map was color-coded and geometrically corrected for visual interpretation. This study shows that the Landsat satellite imagery has the potential to supply useful data for chlorophyll studies by using the developed algorithm. This study indicates that the chlorophyll mapping can be carried out using remote sensing technique by using Landsat imagery and the previously developed algorithm over Penang, Malaysia.",2009,0, 4548,The data management system for the VENUS and NEPTUNE cabled observatories,"The VENUS (http://venus.uvic.ca/) and NEPTUNE Canada (http://neptunecanada.ca/) cabled ocean observatories have been envisioned from their inception as underwater extensions of the Internet. Having sensors connected to a network opens up tremendous opportunities, from realtime access to sensor measurements to the interactive control of remote assets. Moreover, with a software system in control, data from all sensors can be managed, archived and made available to the wider community. Finally, a data management system can also offer features that are unique to centrally managed instrumentation: thanks to communication standards, autonomous event detection and reaction becomes a very powerful possibility. This talk will summarize the salient features of DMAS (Data Management and Archiving System) that have been implemented over the past four years as well as the planned features to be delivered soon. DMAS consists of two main components: the Data Acquisition Framework (DAF) takes care of the interaction with instruments in terms of control, monitoring as well as data acquisition and storage. The framework also contains operation control tools. The user interaction features include data search and retrieval, data distribution. Current developments in the Web 2.0 area will provide a complete research environment where users will have the ability to work and interact on-line with colleagues, process and visualize data, establish observation schedules and pre-program autonomous event detection and reaction. We call this environment ""Oceans 2.0"". The architecture of DMAS is focused on limiting the amount of data loss and maximizing up time. It is a service oriented architecture, making use of some of the more advanced tools available. The code is written in Java and makes use of an enterprise service bus and of the publish-and-subscribe paradigm. The paper also describes the management approaches that were adopted to build DMAS. Good code quality, the continuous support of VENUS and the need for flexibility in dealing with rapidly changing requirements were the drivers behind the adoption of in-house development, the Agile development methodology and the creation of three teams. The development team deals with requirement collection, analysis, design, coding and unit testing; the QA/QC team performs software tests and regression in a production-like environment. The systems/operations team focuses on the hardware and system software preparation and support, receives and installs new code releases after QA approval and monitors systems operations.
We are vying to perform new code releases twice a month. The method has worked well during the four years of the system development phase and has allowed for a constant support of VENUS. DMAS will be completed well under budget.",2009,0, 4549,Automated Refactoring Suggestions Using the Results of Code Analysis Tools,"Static analysis tools are used for the detection of errors and other problems in source code. The detected problems related to the internal structure of software can be removed by source code transformations called refactorings. To automate such source code transformations, refactoring tools are available. In modern integrated development environments, there is a gap between the static analysis tools and the refactoring tools. This paper presents an automated approach for the improvement of the internal quality of software by using the results of code analysis tools to call a refactoring tool to remove detected problems. The approach is generic, thus allowing the combination of arbitrary tools. As a proof of concept, this approach is implemented as a plug-in for the integrated development environment Eclipse.",2009,0, 4550,Quality of Code Can Be Planned and Automatically Controlled,"Quality of code is an important and critical health indicator of any software development project. However, due to the complexity and ambiguousness of calculating this indicator, it is rarely used in commercial contracts. As programmers are much more motivated with respect to the delivery of functionality than the quality of code beneath it, they often produce low quality code, which leads to post-delivery and maintenance problems. The proposed mechanism eliminates this lack of attention to quality of code. The results achieved after the implementation of the mechanism are more motivated programmers, higher project sponsor confidence and a predicted quality of code.",2009,0, 4551,Static Estimation of Test Coverage,"Test coverage is an important indicator for unit test quality. Tools such as Clover compute coverage by first instrumenting the code with logging functionality, and then logging which parts are executed during unit test runs. Since computation of test coverage is a dynamic analysis, it presupposes a working installation of the software. In the context of software quality assessment by an independent third party, a working installation is often not available. The evaluator may not have access to the required libraries or hardware platform. The installation procedure may not be automated or documented. In this paper, we propose a technique for estimating test coverage at method level through static analysis only. The technique uses slicing of static call graphs to estimate the dynamic test coverage. We explain the technique and its implementation. We validate the results of the static estimation by statistical comparison to values obtained through dynamic analysis using Clover. We found high correlation between static coverage estimation and real coverage at system level but closer analysis on package and class level reveals opportunities for further improvement.",2009,0, 4552,ZX: A Novel Service Selection Solution Based on Selector-Side QoS Measuring,"This paper proposes a service selection solution named ZX, which is based on the selector-side QoS measuring results.
In ZX, we identify the QoS attributes which could be measured in selector's side, and provide possible solutions to gather the QoS information, and we introduce a new service selecting method, which concerns normal-the-better QoS attributes, by implementing the nearest neighbor algorithm.",2009,0, 4553,Optimized Multipinhole Design for Mouse Imaging,"The aim of this study was to enhance high-sensitivity imaging of a limited field of view in mice using multipinhole collimators on a dual head clinical gamma camera. A fast analytical method was used to predict the contrast-to-noise ratio (CNR) in many points of a homogeneous cylinder for a large number of pinhole collimator designs with modest overlap. The design providing the best overall CNR, a configuration with 7 pinholes, was selected. Next, the pinhole pattern was made slightly irregular to reduce multiplexing artifacts. Two identical, but mirrored 7-pinhole plates were manufactured. In addition, the calibration procedure was refined to cope with small deviations of the camera from circular motion. First, the new plates were tested by reconstructing a simulated homogeneous cylinder measurement. Second, a Jaszczak phantom filled with 37 MBq 99mTc was imaged on a dual head gamma camera, equipped with the new pinhole collimators. The image quality before and after refined calibration was compared for both heads, reconstructed separately and together. Next, 20 short scans of the same phantom were performed with single and multipinhole collimation to investigate the noise improvement of the new design. Finally, two normal mice were scanned using the new multipinhole designs to illustrate the reachable image quality of abdomen and thyroid imaging. The simulation study indicated that the irregular patterns suppress most multiplexing artifacts. Using body support information strongly reduces the remaining multiplexing artifacts. Refined calibration improved the spatial resolution. Depending on the location in the phantom, the CNR increased with a factor of 1 to 2.5 using the new instead of a single pinhole design. The first proof of principle scans and reconstructions were successful, allowing the release of the new plates and software for preclinical studies in mice.",2009,0,4184 4554,Optimal Radial Basis Function Neural Network power transformer differential protection,"This paper presents a new algorithm for protection of power transformer by using optimal radial basis function neural network (ORBFNN). ORBFNN based technique is applied by amalgamating the conventional differential protection scheme of power transformer and internal faults are precisely discriminated from inrush condition. The proposed method neither depend on any threshold nor the presence of harmonic contain in differential current. The RBFNN is designed by using particle swarm optimization (PSO) technique. The proposed RBFNN model has faster learning and detecting capability than the conventional neural networks. A comparison in the performance of the proposed ORBFNN and more commonly reported feed forward back propagation neural network (FFBPNN), in literature, is made. The simulations of different faults, over-excitation, and switching conditions on three different power transformers are performed by using PSCAD/EMTDC software and presented algorithm is evaluated by using MATLAB. 
The test results show that the new algorithm is quick and accurate.",2009,0, 4555,Application of Discrete Wavelet Transform for differential protection of power transformers,"This paper presents a novel formulation for differential protection of three-phase transformers. The discrete wavelet transform (DWT) is employed to extract transitory features of transformer three-phase differential currents to detect internal faulty conditions. The performance of the proposed algorithm is evaluated through simulation of faulty and non-faulty test cases on a power transformer using ATP/EMTP software. The optimal mother wavelet selection includes performance analysis of different mother wavelets and of the number of resolution levels. In order to test the formulation's performance, the proposed method was implemented in the MatLab® environment. Simulated comparative test results with a percentage differential protection with harmonic restraint formulation show that the proposed technique improves the discrimination performance. Simulated test cases of magnetizing inrush and close-by external faults are also presented in order to test the performance of the proposed method in extreme conditions.",2009,0, 4556,Security Protocol Testing Using Attack Trees,"In this paper we present an attack injection approach for security protocol testing aiming at vulnerability detection. We use an attack tree model to describe known attacks and derive injection test scenarios to test the security properties of the protocol under evaluation. The test scenarios are converted to a specific fault injector script after performing some transformations. The attacker is emulated using a fault injector. This model-based approach facilitates the reusability and maintainability of the generated injection attacks as well as the generation of fault injector scripts. The approach is applied to an existing mobile security protocol. We performed experiments with truncation and DoS attacks; results show good precision and efficiency in the injection method.",2009,0, 4557,A Resource Management System for Fault Tolerance in Grid Computing,"In grid computing, resource management and fault tolerance services are important issues. The availability of the selected resources for job execution is a primary factor that determines the computing performance. The failure occurrence of resources in grid computing is higher than in traditional parallel computing. Since the failure of resources affects job execution fatally, a fault tolerance service is essential in computational grids. And grid services are often expected to meet some minimum levels of quality of service (QoS) for desirable operation. However, the Globus toolkit does not provide a fault tolerance service that supports fault detection and management services and satisfies QoS requirements. Thus this paper proposes a fault tolerance service to satisfy QoS requirements in computational grids. In order to provide the fault tolerance service and satisfy QoS requirements, we expand the definition of failure to include process failure, processor failure, and network failure. And we propose a resource scheduling service, a fault detection service and a fault management service, and show implementation and experiment results.",2009,0, 4558,Quantitative Modeling of Communication Cost for Global Service Delivery,"IT service providers are increasingly utilizing globally distributed resources to drive down costs, reduce risk through diversification and gain access to a larger talent pool.
However, fostering effective collaboration among geographically distributed resources is a difficult challenge. In this paper, we present our initial attempt to quantify the increased overhead in leveraging distributed resources as one of the project costs. We associate this overhead cost measurement with metrics that measure communication quality, such as reduction in productivity and communication delay. These metrics can in turn be computed as functions of underlying project parameters. To achieve this goal, we first build a project communication model (PCM) to categorize different types of collaborative communication. We then represent communication efficiency and changes in resource availability in terms of information theoretic concepts such as reduced channel capacity, information encoding efficiency and channel availability. This analysis is used to help determine the cost associated with team formation and task distribution during the project planning phase.",2009,0, 4559,Monitoring of Complex Applications Execution in Distributed Dependable Systems,"The execution of applications in dependable system requires a high level of instrumentation for automatic control. We present in this paper a monitoring solution for complex application execution. The monitoring solution is dynamic, offering real-time information about systems and applications. The complex applications are described using workflows. We show that the management process for application execution is improved using monitoring information. The environment is represented by distributed dependable systems that offer a flexible support for complex application execution. Our experimental results highlight the performance of the proposed monitoring tool, the MonALISA framework.",2009,0, 4560,Taguchi Smaller-the-Best Software Quality Metrics,"Taguchi-based software metrics (numerical software measurements) define software quality in terms of ""loss imparted to society"" after a software product is delivered to the end user. Previous work has focused on nominal-the-best and larger-the-best loss functions for estimating the loss to society. This paper focuses on the smaller-the-best loss function for estimating the loss to society. The smaller-the-best loss function is for a property whose value should be as small as possible. It is shown that the smaller-the-best loss function can be obtained from the nominal-the-best loss function by substituting zero for t, the target value in the nominal-the-best loss function. This derivation is illustrated and a case study utilizing the derivation is presented. Taguchi-based metrics are in line with six-sigma statistical quality control techniques that have been used with software products for several years. Experiences with Taguchi-based metrics have shown good correlation with existing popularly known software quality metrics.",2009,0, 4561,Machine Vision Based Image Analysis for the Estimation of Pear External Quality,"The research on real time fruit quality detection with machine vision is an attractive and prospective subject for improving marketing competition and post harvesting value-added processing technology of fruit products. However, the farm products with different varieties and different quality have caused tremendous losses in economy due to lacking the post-harvest inspecting standards and measures in China. 
In view of the existing situation of fruit quality detection and the broad application prospects of machine vision in quality evaluation of agricultural products in China, methods to detect the external quality of pears by machine vision were researched in this work. It aims at solving problems such as fast processing of the large amount of image information, processing capability and increasing the precision of detection, etc. The research is supported by the Lab Windows/CVI software of NI Company. The system can be used for fruit grading by the external qualities of size, shape, color and surface defects. Some fundamental theories of machine vision based on virtual instrumentation were investigated and developed in this work. It is testified that machine vision is an alternative to unreliable manual sorting of fruits.",2009,0, 4562,Reliability Modeling and Analysis of Safety-Critical Manufacture System,"There are working, fail-safe and fail-dangerous states in safety-critical manufacture systems. This paper presents three typical safety-critical manufacture system architecture models: series, parallel and series-parallel systems, whose components' lifetime distributions are of general form. Also, the reliability related indices, such as the probabilities that the system is in these states and the mean times for the system to fail safe and fail dangerous, are derived respectively. And the relationships among the obtained indices of the three system architecture models are analyzed. Finally, some numerical examples are employed to elaborate the results obtained in this paper. The derived index formulas are new results obtained without component lifetime distribution assumptions, which have significant meaning for evaluating manufacture system reliability and improving manufacture system safety design.",2009,0, 4563,Distributed control plane architecture of next generation IP routers,"In this paper, we present our research aiming at building a petabit router model for next generation networks. Considering the increasing traffic requirements on the Internet, current gigabit and terabit speed routers will soon not be able to meet user demand. One of the promising trends of router evolution is to build next generation routers with enhanced memory capacity and computing resources, distributed across a very high speed switching fabric. The main limitation of the current routing and signaling software modules, traditionally designed in a centralized manner, is that they do not scale in order to fully exploit such an advanced distributed hardware architecture. This paper discusses an implementation of a control plane for next generation routers integrating several protocol dedicated distributed architectures, aiming at increasing the scalability and resiliency. The proposed architecture distributes the processing functions on router cards, i.e., on both control and line cards. Therefore, it reduces the bottlenecks and improves both the overall performance and the resiliency in the presence of faults. Scalability is estimated with respect to the CPU utilization and memory requirements.",2009,0, 4564,An efficient hardware-software approach to network fault tolerance with InfiniBand,"In the last decade or so, clusters have observed a tremendous rise in popularity due to excellent price to performance ratio. A variety of Interconnects have been proposed during this period, with InfiniBand leading the way due to its high performance and open standard.
Increasing size of the InfiniBand clusters has reduced the mean time between failures of various components of these clusters tremendously. In this paper, we specifically focus on the network component failure and propose a hybrid hardware-software approach to handling network faults. The hybrid approach leverages the user-transparent network fault detection and recovery using Automatic Path Migration (APM), and the software approach is used in the wake of APM failure. Using Global Arrays as the programming model, we implement this approach with Aggregate Remote Memory Copy Interface (ARMCI), the runtime system of Global Arrays. We evaluate our approach using various benchmarks (siosi7, pentane, h2o7 and siosi3) with NWChem, a very popular ab initio quantum chemistry application. Using the proposed approach, the applications run to completion without restart on emulated network faults and acceptable overhead for benchmarks executing for a longer period of time.",2009,0, 4565,Reliability-aware scalability models for high performance computing,"Scalability models are powerful analytical tools for evaluating and predicting the performance of parallel applications. Unfortunately, existing scalability models do not quantify failure impact and therefore cannot accurately account for application performance in the presence of failures. In this study, we extend two well-known models, namely Amdahl's law and Gustafson's law, by considering the impact of failures and the effect of fault tolerance techniques on applications. The derived reliability-aware models can be used to predict application scalability in failure-present environments and evaluate fault tolerance techniques. Trace-based simulations via real failure logs demonstrate that the newly developed models provide a better understanding of application performance and scalability in the presence of failures.",2009,0, 4566,MITHRA: Multiple data independent tasks on a heterogeneous resource architecture,"With the advent of high-performance COTS clusters, there is a need for a simple, scalable and fault-tolerant parallel programming and execution paradigm. In this paper, we show that the popular MapReduce programming model can be utilized to solve many interesting scientific simulation problems with much higher performance than regular cluster computers by leveraging GPGPU accelerators in cluster nodes. We use the Massive Unordered Distributed (MUD) formalism and establish a one-to-one correspondence between it and general Monte Carlo simulation methods. Our architecture, MITHRA, leverages NVIDIA CUDA technology along with Apache Hadoop to produce scalable performance gains using the MapReduce programming model. The evaluation of our proposed architecture using the Black Scholes option pricing model shows that a MITHRA cluster of 4 GPUs can outperform a regular cluster of 62 nodes, achieving a speedup of about 254 times in our testbed, while providing scalable near linear performance with additional nodes.",2009,0, 4567,Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms,"Automatically verifying the identity of a person by means of biometrics (e.g., face and fingerprint) is an important application in our day-to-day activities such as accessing banking services and security control in airports. To increase the system reliability, several biometric devices are often used. Such a combined system is known as a multimodal biometric system. 
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint, and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons. While multimodal biometrics is a well-investigated subject in the literature, there exists no benchmark for a fusion algorithm comparison. Working towards this goal, we designed two sets of experiments: quality-dependent and cost-sensitive evaluation. The quality-dependent evaluation aims at assessing how well fusion algorithms can perform under changing quality of raw biometric images principally due to change of devices. The cost-sensitive evaluation, on the other hand, investigates how well a fusion algorithm can perform given restricted computation and in the presence of software and hardware failures, resulting in errors such as failure-to-acquire and failure-to-match. Since multiple capturing devices are available, a fusion algorithm should be able to handle this nonideal but nevertheless realistic scenario. In both evaluations, each fusion algorithm is provided with scores from each biometric comparison subsystem as well as the quality measures of both the template and the query data. The response to the call of the evaluation campaign proved very encouraging, with the submission of 22 fusion systems. To the best of our knowledge, this campaign is the first attempt to benchmark quality-based multimodal fusion algorithms. In the presence of changing image quality, which may be due to a change of acquisition devices and/or device capturing configurations, we observe that the top performing fusion algorithms are those that exploit automatically derived quality measurements. Our evaluation also suggests that while using all the available biometric sensors can definitely increase the fusion performance, this comes at the expense of increased cost in terms of acquisition time, computation time, the physical cost of hardware, and its maintenance cost. As demonstrated in our experiments, a promising solution which minimizes the composite cost is sequential fusion, where a fusion algorithm sequentially uses match scores until a desired confidence is reached, or until all the match scores are exhausted, before outputting the final combined score.",2009,0, 4568,A Concept of System Usability Assessment: System Attentiveness as the Measure of Quality,"The goal of this paper is to present novel metrics for system usability assessment and quality assessment (SQA). Proposed metrics should provide means of capturing overall system interference with regular daily routines and habits of system users, referred to as ""attentive interference"". We argue that assuring the system is attentive proves essential when trying to mitigate risks related to system rejection by the intended users.",2009,0, 4569,Performance of FMIPv6-based cross-layer handover for supporting mobile VoIP in WiMAX networks,"This paper presents validation and evaluation of mobile VoIP support over WiMAX networks using the FMIPv6-based cross layer handover scheme. A software module has been implemented for the FMIPv6-based handoff scheme. The handoff delay components are formulated. To evaluate its support of mobile VoIP, we carefully assess the handoff delay, the total delay, and the R factor which is a representation of the voice user satisfaction degree.
Simulation results show that the cross-layering handoff scheme, as compared with the non-cross-layer scheme, successfully decreases layer-3 handoff delay by almost 50%, and is therefore thriving to support mobile VoIP services. We believe this is the first performance evaluation work for the FMIPv6-based cross-layer scheme, and hence an important work for the WiMAX research community.",2009,0, 4570,Exploiting scientific workflows for large-scale gene expression data analysis,"Microarrays are state-of-the-art technologies for the measurement of expression of thousands of genes in a single experiment. The treatment of these data is typically performed with a wide range of tools, but the understanding of complex biological systems by means of gene expression usually requires integrating different types of data from multiple sources and different services and tools. Many efforts are being developed in the new area of scientific workflows in order to create a technology that links both data and tools to create workflows that can easily be used by researchers. Currently, technologies in this area are not yet mature, making their use by researchers arduous. In this paper we present an architecture that helps researchers to perform large-scale gene expression data analysis with cutting edge technologies. The main underlying idea is to automate and rearrange the activities involved in gene expression data analysis, in order to free the user from superfluous technological details and tedious and error-prone tasks.",2009,0, 4571,The effect of granularity level on software defect prediction,"Application of defect predictors in software development helps the managers to allocate their resources such as time and effort more efficiently and cost effectively to test certain sections of the code. In this research, we have used naive Bayes classifier (NBC) to construct our defect prediction framework. Our proposed framework uses the hierarchical structure information about the source code of the software product, to perform defect prediction at a functional method level and source file level. We have applied our model on SoftLAB and Eclipse datasets. We have measured the performance of our proposed model and applied cost benefit analysis. Our results reveal that source file level defect prediction improves the verification effort, while decreasing the defect prediction performance in all datasets.",2009,0, 4572,Negotiation based advance reservation priority grid scheduler with a penal clause for execution failures,"The utility computing in a grid demands much more adaptability and dynamism when certain levels of commitments are to be complied with. Users with distinct priorities are categorized on the basis of the types of organizations or applications they belong to and can submit multiple jobs with varying specific needs. The interests of consumers and the resource service providers must be equally watched upon, with a focus on commercial aspects too. At the same time, the quality of the service must be ascertained to a committed level, thus enforcing an agreement. The authors propose an algorithm for a negotiation based scheduler that dynamically analyses and assesses the incoming jobs in terms of priorities and requirements, reserves them to resources after negotiations and match-making between the resource providers and the users. The jobs thus reserved are allocated resources for the future. The performance evaluation of the scheduler for various parameters was done through simulations.
The results were found to be optimal after incorporating advance reservation with dynamic priority control over job selection, involving the impact of situations confronting resource failures introducing economic policies and penalties.",2009,0, 4573,An outlier detection algorithm based on object-oriented metrics thresholds,"Detection of outliers in software measurement datasets is a critical issue that affects the performance of software fault prediction models built based on these datasets. Two necessary components of fault prediction models, software metrics and fault data, are collected from the software projects developed with object-oriented programming paradigm. We proposed an outlier detection algorithm based on these kinds of metrics thresholds. We used Random Forests machine learning classifier on two software measurement datasets collected from jEdit open-source text editor project and experiments revealed that our outlier detection approach improves the performance of fault predictors based on Random Forests classifier.",2009,0, 4574,Hand tracking and trajectory analysis for physical rehabilitation,"In this work we present a framework for physical rehabilitation, which is based on hand tracking. One particular requirement in physical rehabilitation is the capability of the patient to correctly reproduce a specific path, following an example provided by the medical staff. Currently, these assignments are typically performed manually, and a nurse or doctor, who supervises the correctness of the movement, constantly assists the patient throughout the whole rehabilitation process. With the proposed system, our aim is to provide medical institutions and patients with a low-cost and portable instrument to automatically assess the rehabilitation improvements. To evaluate the performance of the exercise, and to determine the distance between the trial and the reference path, we adopted the dynamic time warping (DTW) and the longest common sub-sequence (LCSS) as discriminating metrics. Trajectories and numerical values are then stored to track the history of the patient and appraise the improvements of the rehabilitation process over time. Thanks to the tests conducted with real patients, it has been possible to evaluate the quality of the proposed tool, in terms of both graphical interface and functionalities.",2009,0, 4575,Preserving Cohesive Structures for Tool-Based Modularity Reengineering,"The quality of software systems heavily depends on their structure, which affects maintainability and readability. However, the ability of humans to cope with the complexity of large software systems is limited. To support reengineering large software systems, software clustering techniques that maximize module cohesion and minimize inter-modular coupling have been developed. The main drawback of these approaches is that they might pull apart elements that were thoughtfully placed together. This paper describes how strongly connected component analysis, dominance analysis, and intra-modular similarity clustering can be applied to identify and to preserve cohesive structures in order to improve the result of reengineering. The use of the proposed method allows a significant reduction of the number of component movements. As a result, the probability of false component movements is reduced. 
The proposed approach is illustrated by statistics and examples from 18 open source Java projects.",2009,0, 4576,Filtering System Metrics for Minimal Correlation-Based Self-Monitoring,"Self-adaptive and self-organizing systems must be self-monitoring. Recent research has shown that self-monitoring can be enabled by using correlations between monitoring variables (metrics). However, computer systems often make a very large number of metrics available for collection. Collecting them all not only reduces system performance, but also creates other overheads related to communication, storage, and processing. In order to control the overhead, it is necessary to limit collection to a subset of the available metrics. Manual selection of metrics requires a good understanding of system internals, which can be difficult given the size and complexity of modern computer systems. In this paper, assuming no knowledge of metric semantics or importance and no advance availability of fault data, we investigate automated methods for selecting a subset of available metrics in the context of correlation-based monitoring. Our goal is to collect fewer metrics while maintaining the ability to detect errors. We propose several metric selection methods that require no information beside correlations. We compare these methods on the basis of fault coverage. We show that our minimum spanning tree-based selection performs best, detecting on average 66% of faults detectable by full monitoring (i.e., using all considered metrics) with only 30% of the metrics.",2009,0, 4577,UML Specification and Correction of Object-Oriented Anti-patterns,"Nowadays, the detection and correction of software defects has become a very hard task for software engineers. Most importantly, the lack of standard specifications of these software defects along with the lack of tools for their detection, correction and verification forces developers to perform manual modifications; resulting not only in mistakes, but also in costs of time and resources. The work presented here is a study of the specification and correction of a particular type of software defect: Object-Oriented anti-patterns. More specifically, we define a UML based specification of anti-patterns and establish design transformations for their correction. Through this work, we expect to open up the possibility to automate the detection and correction of these kinds of software defects.",2009,0, 4578,Scenario-Based Genetic Synthesis of Software Architecture,"Software architecture design can be regarded as finding an optimal combination of known general solutions and architectural knowledge with respect to given requirements. Based on previous work on synthesizing software architecture using genetic algorithms, we propose a refined fitness function for assessing software architecture in genetic synthesis, taking into account the specific anticipated needs of the software system under design. Inspired by real life architecture evaluation methods, the refined fitness function employs scenarios, specific situations possibly occurring during the lifetime of the system and requiring certain modifiability properties of the system. Empirical studies based on two example systems suggest that using this kind of fitness function significantly improves the quality of the resulting architecture.",2009,0, 4579,MUD receiver capacity optimization for wireless ad hoc networks,"In general the performance of ad hoc networks is limited by mobility, half-duplex operation, and possible collisions. 
One way to overcome these bottlenecks is to apply a multiuser detection (MUD) reception that can significantly increase the network throughput and improve quality of service. Recent technological advances allow implementation of a multi-user detection receiver in one software-defined radio chip, thus making it feasible to consider this technology for ad hoc networks. Nevertheless, the MUD receiver power consumption and complexity grow exponentially with its capacity, defined as the maximum number of CDMA signals, originated at different nodes, which can be received simultaneously. Therefore the receiver capacity should be optimized to provide a reasonable trade-off between the network performance and the MUD receiver cost and power consumption. We address this issue by presenting an approximate analysis of the network throughput as a function of MUD receiver capacity and other network parameters such as node density and offered traffic. The numerical analysis illustrates the gains in network throughput and performance with increasing receiver capacity. Based on this analysis we propose a framework for MUD receiver capacity optimization based on performance and cost utility functions.",2009,0, 4580,PRR-PRR Dynamic Relocation,"Partial bitstream relocation (PBR) on FPGAs has been gaining attention in recent years as a potentially promising technique to scale parallelism of accelerator architectures at run time, enhance fault tolerance, etc. PBR techniques to date have focused on reading inactive bitstreams stored in memory, on-chip or off-chip, whose contents are generated for a specific partial reconfiguration region (PRR) and modified on demand for configuration into a PRR at a different location. As an alternative, we propose a PRR-PRR relocation technique to generate source and destination addresses, read the bitstream from an active PRR (source) in a non-intrusive manner, and write it to the destination PRR. We describe two options of realizing this on Xilinx Virtex 4 FPGAs: (a) a hardware-based accelerated relocation circuit (ARC) and (b) a software solution executed on Microblaze. A comparative performance analysis to highlight the speed-up obtained using ARC is presented. For real test cases, the performance of our implementations is compared to the estimated performances of two state of the art methods.",2009,0, 4581,Study on Coordinated Distributed Scheduling in WiMAX Mesh Network,"The IEEE802.16 (WiMAX) standard specifies the air interface of broadband wireless access systems. Mesh topology is supported as an optional structure. In the mesh network, coordinated distributed scheduling is an important scheduling mode. Such a mode permits traffic to be routed through other subscriber stations (SS) and to occur directly between SSs. The Xmt holdoff exponent is an important parameter in this mode. How to set it has a great impact on the whole scheduling performance. In this paper, a dynamic setting scheme based on classification is proposed and compared in performance with the static scheme with fixed values. Simulation results show the proposed scheme can increase throughput and can provide different quality of service (QoS) levels for multimedia services.",2009,0, 4582,Research of Software Defect Prediction Model Based on Gray Theory,"The software testing process is an important stage in the software life cycle to improve software quality. An amount of software data and information can be obtained.
Based on analyzing the source and the type of software, this paper describes several uses of the defect data and explains how to estimate the defect density of the software by using the software defect data collected in practical work, and how to use the GM model to predict and assess the software reliability.",2009,0, 4583,Sub-Path Congestion Control in CMT,"Using the application of block data transfer, we investigate the performance of concurrent multipath transfer using SCTP multihoming (CMT) with a congestion control policy under the scenario where a sender is constrained by the receive buffer (rbuf). We find that the existing policy has some defects in the aspect of bandwidth-aware source scheduling. Based on this, we propose a sub-path congestion-control policy for SCTP (SPCC-SCTP) with bandwidth-aware source scheduling by dividing an association into sub paths based on shared bottleneck detection, to overcome existing flaws of standard SCTP in supporting the multi-homing feature in the case of concurrent multipath transfer (CMT). The performance of SPCC-SCTP is assessed through ns-2 experiments in a simplified Diff-Serv network. Simulation results demonstrated the effectiveness of our proposed mechanism and invite further research.",2009,0, 4584,Changes and bugs – Mining and predicting development activities,"Software development results in a huge amount of data: changes to source code are recorded in version archives, bugs are reported to issue tracking systems, and communications are archived in e-mails and newsgroups. We present techniques for mining version archives and bug databases to understand and support software development. First, we introduce the concept of co-addition of method calls, which we use to identify patterns that describe how methods should be called. We use dynamic analysis to validate these patterns and identify violations. The co-addition of method calls can also detect cross-cutting changes, which are an indicator for concerns that could have been realized as aspects in aspect-oriented programming. Second, we present techniques to build models that can successfully predict the most defect-prone parts of large-scale industrial software, in our experiments Windows Server 2003. This helps managers to allocate resources for quality assurance to those parts of a system that are expected to have the most defects. The proposed measures on dependency graphs outperformed traditional complexity metrics. In addition, we found empirical evidence for a domino effect, i.e., depending on defect-prone binaries increases the chances of having defects.",2009,0, 4585,Introducing a test suite similarity metric for event sequence-based test cases,"Most of today's event driven software (EDS) systems are tested using test cases that are carefully constructed as sequences of events; they test the execution of an event in the context of its preceding events. Because sizes of these test suites can be extremely large, researchers have developed techniques, such as reduction and minimization, to obtain test suites that are ""similar"" to the original test suite, but smaller. Existing similarity metrics mostly use code coverage; they do not consider the contextual relationships between events. Consequently, reduction based on such metrics may eliminate desirable test cases. In this paper, we present a new parameterized metric, CONTeSSi(n), which uses the context of n preceding events in test cases to develop a new context-aware notion of test suite similarity for EDS.
This metric is defined and evaluated by comparing four test suites for each of four open source applications. Our results show that CONTeSSi(n) is a better indicator of the similarity of EDS test suites than existing metrics.",2009,0, 4586,Analysis of pervasive multiple-component defects in a large software system,"Certain software defects require corrective changes repeatedly in a few components of the system. One type of such defects spans multiple components of the system, and we call such defects pervasive multiple-component defects (PMCDs). In this paper, we describe an empirical study of six releases of a large legacy software system (of approx. size 20 million physical lines of code) to analyze PMCDs with respect to: (1) the complexity of fixing such defects and (2) the persistence of defect-prone components across phases and releases. The overall hypothesis in this study is that PMCDs inflict a greater negative impact than do other defects on defect-correction efficacy. Our findings show that the average number of changes required for fixing PMCDs is 20-30 times as much as the average for all defects. Also, over 80% of PMCD-contained defect-prone components still remain defect-prone in successive phases or releases. These findings support the overall hypothesis strongly. We compare our results, where possible, to those of other researchers and discuss the implications on maintenance processes and tools.",2009,0, 4587,Modeling class cohesion as mixtures of latent topics,"The paper proposes a new measure for the cohesion of classes in object-oriented software systems. It is based on the analysis of latent topics embedded in comments and identifiers in source code. The measure, named as maximal weighted entropy, utilizes the latent Dirichlet allocation technique and information entropy measures to quantitatively evaluate the cohesion of classes in software. This paper presents the principles and the technology that stand behind the proposed measure. Two case studies on a large open source software system are presented. They compare the new measure with an extensive set of existing metrics and use them to construct models that predict software faults. The case studies indicate that the novel measure captures different aspects of class cohesion compared to the existing cohesion measures and improves fault prediction for most metrics, which are combined with maximal weighted entropy.",2009,0, 4588,Assessing the impact of framework changes using component ranking,"Most of today's software applications are built on top of libraries or frameworks. Just as applications evolve, libraries and frameworks also evolve. Upgrading is straightforward when the framework changes preserve the API and behavior of the offered services. However, in most cases, major changes are introduced with the new framework release, which can have a significant impact on the application. Hence, a common question a framework user might ask is, ""Is it worth upgrading to the new framework version?"" In this paper, we study the evolution of an application and its underlying framework to understand the information we can get through a multi-version use relation analysis. We use component rank changes to measure this impact. Component rank measurement is a way of quantifying the importance of a component by its usage. As framework components are used by applications, the rankings of the components are changed. We use component ranking to identify the core components in each framework version.
We also confirm that upgrading to the new framework version has an impact on the component ranks of the entire system and the framework, and that this impact involves not only components which use the framework directly, but also other indirectly related components. Finally, we also confirm that there is a difference in the growth of use relations between application and framework.",2009,0, 4589,On predicting the time taken to correct bug reports in open source projects,"Existing studies on the maintenance of open source projects focus primarily on the analyses of the overall maintenance of the projects and less on specific categories like corrective maintenance. This paper presents results from an empirical study of bug reports from an open source project, identifies user participation in the corrective maintenance process through bug reports, and constructs a model to predict the corrective maintenance effort for the project in terms of the time taken to correct faults. Our study focuses on 72482 bug reports from over nine releases of Ubuntu, a popular Linux distribution. We present three main results: (1) 95% of the bug reports are corrected by people participating in groups of size ranging from 1 to 8 people, (2) there is a strong linear relationship (about 92%) between the number of people participating in a bug report and the time taken to correct it, (3) a linear model can be used to predict the time taken to correct bug reports.",2009,0, 4590,The Squale model – A practice-based industrial quality model,"ISO 9126 promotes a three-level model of quality (factors, criteria, and metrics) which allows one to assess quality at the top level of factors and criteria. However, it is difficult to use this model as a tool to increase software quality. In the Squale model, we add practices as an intermediate level between metrics and criteria. Practices abstract away from raw information (metrics, tool reports, audits) and provide technical guidelines to be respected. Moreover, practice marks are adjusted using formulae to suit company development habits or exigencies: for example, bad marks are stressed to point to places which need more attention. The Squale model has been developed and validated over the last couple of years in an industrial setting with Air France-KLM and PSA Peugeot-Citroen.",2009,0, 4591,Data transformation and attribute subset selection: Do they help make differences in software failure prediction?,"Data transformation and attribute subset selection have been adopted in improving software defect/failure prediction methods. However, little consensus was achieved on their effectiveness. This paper reports a comparative study on these two kinds of techniques combined with four classifiers and datasets from two projects. The results indicate that data transformation displays an unobvious influence on improving the performance, while attribute subset selection methods show distinguishably inconsistent output. Besides, consistency across releases and discrepancy between the open-source and in-house maintenance projects in the evaluation of these methods are discussed.",2009,0, 4592,Fundamental performance assessment of 2-D myocardial elastography in a phased-array configuration,"Two-dimensional myocardial elastography, an RF-based, speckle-tracking technique, uses 1-D cross-correlation and recorrelation methods in a 2-D search, and can estimate and image the 2-D transmural motion and deformation of the myocardium so as to characterize the cardiac function.
Based on a 3-D finite-element (FE) canine left-ventricular model, a theoretical framework was previously developed by our group to evaluate the estimation quality of 2-D myocardial elastography using a linear array. In this paper, an ultrasound simulation program, Field II, was used to generate the RF signals of a model of the heart in a phased-array configuration and under 3-D motion conditions, thus simulating a standard echocardiography exam. The estimation method of 2-D myocardial elastography was adapted for use with such a configuration. All elastographic displacements and strains were found to be in good agreement with the FE solutions, as indicated by the mean absolute error (MAE) between the two. The classified first and second principal strains approximated the radial and circumferential strains, respectively, in the phased-array configuration. The results at different sonographic signal-to-noise ratios (SNRs) showed that the MAEs of the axial, lateral, radial, and circumferential strains remained relatively constant when the SNR was equal to or higher than 20 dB. The MAEs of the strain estimation were not significantly affected when the acoustic attenuation was included in the simulations. A significantly reduced number of scatterers could be used to speed up the simulation, without sacrificing the estimation quality. The proposed framework can further be used to assess the estimation quality, explore the theoretical limitations and investigate the effects of various parameters in 2-D myocardial elastography under more realistic conditions.",2009,0, 4593,Image feature extraction for mobile processors,"High-quality cameras are a standard feature of mobile platforms, but the computational capabilities of mobile processors limit the applications capable of exploiting them. Emerging mobile application domains, for example mobile augmented reality (MAR), rely heavily on techniques from computer vision, requiring sophisticated analyses of images followed by higher-level processing. An important class of image analyses is the detection of sparse localized interest points. The scale invariant feature transform (SIFT), the most popular such analysis, is computationally representative of many other feature extractors. Using a novel code-generation framework, we demonstrate that a small set of optimizations produces high-performance SIFT implementations for three very different architectures: a laptop CPU (Core 2 Duo), a low-power CPU (Intel Atom), and a low-power GPU (GMA X3100). We improve the runtime of SIFT by more than 5X on our low-power architectures, enabling a low-power mobile device to extract SIFT features up to 63% as fast as the laptop CPU.",2009,0, 4594,Software component quality prediction in the legacy product development environment using Weibull and other mathematical distributions,"Software component quality has a major influence on software development project performances such as lead-time, time to market and cost. It also affects the other projects within the organization, the people assigned to the projects and the organization in general. Software development organizations must have an indication and prediction of software component quality and project performances in general. One of the software component quality prediction techniques is the statistical and probabilistic technique. The paper deals with software quality prediction techniques, different applied models in different development projects and existing fault indicators.
Three case studies are presented and evaluated for prediction of software quality in very large development projects within the AXE platform in Ericsson Nikola Tesla R&D.",2009,0, 4595,Evaluating rate-estimation for a mobility and QoS-aware network architecture,"In the near future, wireless networks will run applications with special QoS requirements. FHMIP is an effective scheme to reduce Mobile IPv6 handover disruption but it does not deal with any other specific QoS requirement. Therefore new traffic management schemes are needed in order to provide QoS guarantees to real-time applications, and this implies network mobility optimizations and congestion control support. Traffic management schemes should deal with QoS requirements during handover and should use some resource management strategy in order to achieve this. In this article a new resource management scheme for the DiffServ QoS model is proposed, to be used by access routers as an extension to the FHMIP micromobility protocol. In order to prevent QoS deterioration, access routers pre-evaluate the impact of accepting all traffic from a mobile node, previous to the handover. This pre-evaluation and post decision on whether or not to accept any, or all, of this new traffic is based on a measurement based admission control procedure. This mobility and QoS-aware network architecture, integrating a simple signaling protocol, a traffic descriptor, and exhibiting adaptive behavior, has been implemented and tested using ns-2. All measurements and decisions are based on DiffServ class-of-service aggregations, thus avoiding large flow state information maintenance. Rate estimators are essential mechanisms for the efficiency of this QoS-aware overall architecture. Therefore, in order to be able to choose the rate estimator that best fits this global architecture, two rate estimators - Time Sliding Window (TSW) and Exponential Moving Average (EMA) - have been studied and evaluated by means of ns-2 simulations in QoS-aware wireless mobility scenarios.",2009,0, 4596,RSSI-based Cross Layer Link Quality Management for Layer 3 wireless mesh networks,"We propose a framework of link quality management for wireless mesh networks. In this method, the packet error rate for each rate of each link is estimated based on the RSSI measured for received hello packets using a noise parameter, to select the optimum rate such that the link metric becomes minimum. We present a method to estimate the noise parameter and show that the proposed method is promising using an actual testbed.",2009,0, 4597,Characterization of the Effects of CW and Pulse CW Interference on the GPS Signal Quality,"In the Global Positioning System (GPS), code division multiple access (CDMA) signals are used. Because of the spectral characteristics of the CDMA signal, each particular type of interference (signals to be rejected) has a different effect on the quality of the received GPS satellite signals. In this paper, the effects of three types of interference are studied on the carrier-to-noise ratio (C/No) of the received GPS signal as an indicator of the quality of that signal: continuous wave (CW), pulse CW, and swept CW. For CW interference, it is analytically shown that the C/No of the signal can be calculated using a closed formula after the correlator in the receiver. This result is supported by calculating the C/No using the I and Q data from a software GPS receiver.
For pulsed CW, a similar analysis is performed to characterize the effect of parameters such as pulse repetition period (PRP) and also duty cycle on the received signal quality. It is specifically shown that for equal interference power levels, in the cases where the PRP is far less than the pseudorandom noise code period, the signal degradation increases with increasing duty cycle, whereas it doesn't change when the two periods are equal or the PRP is far bigger than the code period.",2009,0, 4598,Pre-determining comparative tests and utilizing signal levels to perform accurate diagnostics,Standard diagnostic schemes don't do enough analysis to focus in on the actual cause of a test failure. Often measurements will be border-line on test sequences prior to an actual test failure. These border-line measurements can be used to aid in the determination of an actual fault. An actual fault and the associated test that should detect that fault can be deceiving. The specific test in question can pass but be right on the border-line. The failure might not show up in testing until a later test is performed. This later test assumes all the prior tests passed and therefore the circuitry associated with these prior tests is good. This is not necessarily the case if some of the output measurements prior to the actual failing test were right on the border-line. What can we do? Take advantage of the order in which the faults are simulated [1]. We should structure our TPSs such that a review of preliminary tests is evaluated before the R/R component list is presented. The review of the preliminary test can be rather straightforward. We can look at signals that are within 8-10% of the lower or upper limit. We can then use an inter-related test scheme to evaluate the test(s) that can be associated with the actual failing test. This paper will use an example TPS and show how a test scheme and an evaluation scheme can be used to determine the PCOF. The paper will show actual measurements and how these measurements can be evaluated to determine the actual cause of a failure.,2009,0, 4599,"A comparison of software cost, duration, and quality for waterfall vs. iterative and incremental development: A systematic review","The objective of this study is to present a body of evidence that will assist software project managers to make informed choices about software development approaches for their projects. In particular, two broadly defined competing approaches, the traditional ""waterfall"" approach and iterative and incremental development (IID), are compared with regard to development cost and duration, and resulting product quality. The method used for this comparison is a systematic literature review. The small set of studies we located did not demonstrate any identifiable cost, duration, or quality trends, although there was some evidence suggesting the superiority of IID (in particular XP). The results of this review indicate that further empirical studies, both quantitative and qualitative, on this topic need to be undertaken. In order to effectively compare study results, the research community needs to reach a consensus on a set of comparable parameters that best assess cost, duration, and quality.",2009,0, 4600,An experiment to observe the impact of UML diagrams on the effectiveness of software requirements inspections,"Software inspections aim to find defects early in the development process and studies have found them to be effective.
However, there is almost no data available regarding the impact of UML diagram utilization in software requirements specification documents on inspection effectiveness. This paper addresses this issue by investigating whether inclusion of UML diagrams impacts the effectiveness of requirements inspection. We conducted an experiment in an academic environment with 35 subjects to empirically investigate the impact of UML diagram inclusion on requirements inspections' effectiveness and the number of reported defects. The results show that including UML diagrams in requirements specification document significantly impacts the number of reported defects, and there is no significant impact on the effectiveness of individual inspections.",2009,0, 4601,Improving CVSS-based vulnerability prioritization and response with context information,"The growing number of software security vulnerabilities is an ever-increasing challenge for organizations. As security managers in the industry have to operate within limited budgets they also have to prioritize their vulnerability responses. The Common Vulnerability Scoring System (CVSS) aids in such prioritization by providing a metric for the severity of vulnerabilities. In its most prominent application, as the severity metric in the U.S. National Vulnerability Database (NVD), CVSS scores omit information pertaining the potential exploit victims' context. Researchers and managers in the industry have long understood that the severity of vulnerabilities varies greatly among different organizational contexts. Therefore the CVSS scores provided by the NVD alone are of limited use for vulnerability prioritization in practice. Security managers could address this limitation by adding the missing context information themselves to improve the quality of their CVSS-based vulnerability prioritization. It is unclear for them, however, whether the potential improvements are worth the additional effort. We present a method that enables practitioners to estimate these improvements. Our method is of particular use to practitioners who do not have the resources to gather large amounts of empirical data, because it allows them to simulate the improvement potential using only publicly available data in the NVD and distribution models from the literature. We applied the method on a sample set of 720 vulnerability announcements from the NVD and found that adding context information significantly improved the prioritization and selection of vulnerability response process. Our findings contribute to the discourse on returns on security investment, measurement of security processes and quantitative security management.",2009,0, 4602,The impact of limited search procedures for systematic literature reviews â€?A participant-observer case study,"This study aims to compare the use of targeted manual searches with broad automated searches, and to assess the importance of grey literature and breadth of search on the outcomes of SLRs. We used a participant-observer multi-case embedded case study. Our two cases were a tertiary study of systematic literature reviews published between January 2004 and June 2007 based on a manual search of selected journals and conferences and a replication of that study based on a broad automated search. Broad searches find more papers than restricted searches, but the papers may be of poor quality. 
Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low quality papers; if publication bias is not an issue; or if they are assessing research trends in research methodologies.",2009,0, 4603,Tool supported detection and judgment of nonconformance in process execution,"In the past decades the software engineering community has proposed a large collection of software development life cycles, models, and processes. The goal of a major set of these processes is to assure that the product is finished within time and budget, and that a predefined set of functional and nonfunctional requirements (e.g. quality goals) are satisfied at delivery time. Based upon the assumption that there is a real relationship between the process applied and the characteristics of the product developed from that process, we developed a tool supported approach that uses process nonconformance detection to identify potential risks in achieving the required process characteristics. In this paper we present the approach and a feasibility study that demonstrates its use on a large-scale software development project in the aerospace domain. We demonstrate that our approach, in addition to meeting the criteria above, can be applied to a real system of reasonable size; can represent a useful and adequate set of rules of relevance in such an environment; and can detect relevant examples of process nonconformance that provide useful insight to the project manager.",2009,0, 4604,The curse of copy&paste â€?Cloning in requirements specifications,"Cloning in source code is a well known quality defect that negatively affects software maintenance. In contrast, little is known about cloning in requirements specifications. We present a study on cloning in 11 real-world requirements specifications comprising 2,500 pages. For specification clone detection, an existing code clone detection tool is adapted and its precision analyzed. The study shows that a considerable amount of cloning exists, although the large variation between specifications suggests that some authors manage to avoid cloning. Examples of frequent types of clones are given and the negative consequences of cloning, particularly the obliteration of commonalities and variations, are discussed.",2009,0, 4605,An empirical quality model for web service ontologies to support mobile devices,"As Web services and the semantic Web become more important, enabling technologies such as Web service ontologies will grow larger. The ability of mobile devices, such as cell phones and PDAs, to download and reason across them will be severely limited. Given that an agent on a mobile device only needs a subset of what is described in a Web service ontology, an ontology sub-graph can be created. In this paper, we develop a empirical software engineering approach to build a prediction model to measure the quality in terms of correct query handling of ontology sub-graphs relative to the original ontology - mean average recall of the sub-graph compared to the original ontology is used as the quality standard. Our metrics allow speedy selection of a sub-graph for use by a mobile device.",2009,0, 4606,Gauging acceptance of software metrics: Comparing perspectives of managers and developers,"Metrics efforts are often impeded by factors such as poor data quality and developer resistance. To better understand and thus to address the developer perspective in a metrics program we undertook a case study at a large multi-national corporation. 
We identified six projects, and conducted surveys of both project managers and developers. These surveys were based on the metrics acceptance model (MAM), which is a framework (i.e. a model of relationships between factors, operationalized by a survey instrument) for gauging developer opinions toward software metrics. We noticed some interesting differences between developers' and managers' perceptions of metrics. While managers were on the same page as developers when it came to factors such as ease of use of the metrics tool, they over-estimated developers' confidence to report accurate measures. Managers under-estimated developers' beliefs about the usefulness of metrics and about their fear of adverse consequences. These findings suggest that the MAM could provide useful insights to project managers to train and motivate their developers. We also found that the MAM can be an effective diagnostic tool both at an organizational and project level to identify potential impediments in metrics programs.",2009,0, 4607,Towards logistic regression models for predicting fault-prone code across software projects,"In this paper, we discuss the challenge of making logistic regression models able to predict fault-prone object-oriented classes across software projects. Several studies have obtained successful results in using design-complexity metrics for such a purpose. However, our data exploration indicates that the distribution of these metrics varies from project to project, making the task of predicting across projects difficult to achieve. As a first attempt to solve this problem, we employed simple log transformations for making design-complexity measures more comparable among projects. We found these transformations useful in projects whose data is not as spread as the data used for building the prediction model.",2009,0, 4608,Simulation of the defect removal process with queuing theory,"In this paper, we simulate the defect removal process using finite independent queues with different capacity and loading, which represent the limitation of developers and the ability differences of developers. The re-assignment strategy used in defect removal represents the cooperation between relevant developers. Experimental results based on real data show that the simulated approach can provide very useful and important information which can help project managers to estimate the duration of the whole defect removal process, the utilization of each developer and the defects remaining at a specific time.",2009,0, 4609,Reducing false alarms in software defect prediction by decision threshold optimization,"Software defect data has an imbalanced and highly skewed class distribution. The misclassification costs of the two classes are neither equal nor known. It is critical to find the optimum bound, i.e. threshold, which would best separate defective and defect-free classes in software data. We have applied decision threshold optimization on a Naïve Bayes classifier in order to find the optimum threshold for software defect data. ROC analyses show that decision threshold optimization significantly decreases false alarms (on the average by 11%) without changing probability of detection rates.",2009,0, 4610,Scope error detection and handling concerning software estimation models,"Over the last 25+ years, the software community has been searching for the best models for estimating variables of interest (e.g., cost, defects, and fault proneness).
However, little research has been done to improve the reliability of the estimates. Over the last decades, scope error and error analysis have been substantially ignored by the community. This work attempts to fill this gap in the research and enhance a common understanding within the community. Results provided in this study can eventually be used to support human judgment-based techniques and be an addition to the portfolio. The novelty of this work is that, we provide a way of detecting and handling the scope error arising from estimation models. The answer whether or not scope error will occur is a pre-condition to safe use of an estimation model. We also provide a handy procedure for dealing with outliers as to whether or not to include them in the training set for building a new version of the estimation model. The majority of the work is empirically based, applying computational intelligence techniques to some COCOMO model variations with respect to a publicly available cost estimation data set in the PROMISE repository.",2009,0, 4611,Predicting defects with program dependencies,"Software development is a complex and error-prone task. An important factor during the development of complex systems is the understanding of the dependencies that exist between different pieces of the code. In this paper, we show that for Windows Server 2003 dependency data can predict the defect-proneness of software elements. Since most dependencies of a component are already known in the design phase, our prediction models can support design decisions.",2009,0, 4612,Cognitive factors in perspective-based reading (PBR): A protocol analysis study,"The following study investigated cognitive factors involved in applying the Perspective-Based Reading (PBR) technique for defect detection in software inspections. Using the protocol analysis technique from cognitive science, the authors coded concurrent verbal reports from novice reviewers and used frequency-based analysis to consider existing research on cognition in software inspections from within a cognitive framework. The current coding scheme was able to describe over 98% of the cognitive activities reported during inspection at a level of detail capable of validating multiple hypotheses from literature. A number of threats to validity are identified for the protocol analysis method and the parameters of the current experiment. The authors conclude that protocol analysis is a useful tool for analyzing cognitively intense software engineering tasks such as software inspections.",2009,0, 4613,A detailed examination of the correlation between imports and failure-proneness of software components,"Research has provided evidence that type usage in source files is correlated with the risk of failure of software components. Previous studies that investigated the correlation between type usage and component failure assigned equal blame to all the types imported by a component with a failure history, regardless of whether a type is used in the component, or associated to its failures. A failure-prone component may use a type, but it is not always the case that the use of this type has been responsible for any of its failures. To gain more insight about the correlation between type usage and component failure, we introduce the concept of a failure-associated type to represent the imported types referenced within methods fixed due to failures. 
We conducted two studies to investigate the tradeoffs between the equal-blame approach and the failure-associated type approach. Our results indicate that few of the types or packages imported by a failure-prone component are associated with its failures - less than 25% of the type imports, and less than 55% of the packages whose usage was reported to be highly correlated with failures by the equal-blame approach, were actually correlated with failures when we looked at the failure-associated types.",2009,0, 4614,A probability-based approach for measuring external attributes of software artifacts,"The quantification of so-called external software attributes, which are the product qualities with real relevance for developers and users, has often been problematic. This paper introduces a proposal for quantifying external software attributes in a unified way. The basic idea is that external software attributes can be quantified by means of probabilities. As a consequence, external software attributes can be estimated via probabilistic models, and not directly measured via software measures. This paper discusses the reasons underlying the proposal and shows the pitfalls related to using measures for external software attributes. We also show that the theoretical bases for our approach can be found in so-called ""probability representations,"" a part of Measurement Theory that has not yet been used in Software Engineering Measurement. By taking the definition and estimation of reliability as reference, we show that other external software attributes can be defined and modeled by a probability-based approach.",2009,0, 4615,Original content extraction oriented to anti-plagiarism,"In order to reduce the impact of the inclusion of citations and references during the detection of plagiarism in academic theses, and to extract the original content, the author created three ways to extract original content and remove citations: 1) removal of normative citations by symbol features; 2) removal of tacit citations by a Bayesian method based on the minimum risk and thesis structure; 3) removal of common knowledge based on a domain public knowledge base. The research results show that during the extraction of original content, the precision decreases as the risk coefficient increases, while the recall rate increases with the risk coefficient. When the risk coefficient is 60, the whole performance achieves the optimum. Plagiarism detection after extracting the original content presents a fault rate decrease from 9.09% to 4.52%.",2009,0, 4616,Improve the Portability of J2ME Applications: An Architecture-Driven Approach,"The porting of J2ME applications is usually difficult because of diverse device features, limited device resources, and specific issues like device bugs. Therefore, achieving high efficiency in J2ME application porting can be challenging, tedious and error-prone. In this paper, we propose an architecture-driven approach to help address these issues through improving the portability of J2ME applications. It abstracts and models the features that affect porting tasks using a component model named NanoCM (nano component model). The model is described in an architecture description language named NanoADL. Several open source J2ME applications are used as the case studies, and are evaluated using metrics indicating coupling, comprehensibility and complexity.
Experimental results show that our approach effectively improves the portability of J2ME applications.",2009,0, 4617,A Passive Mode QoS Measurer for ISP,"Most network QoS measurement methods are user-oriented: measurements are triggered by network users, to obtain some performance / QoS metrics of a specified network path. Obviously these methods are not suitable for Internet service providers (ISPs), for they take ISPs too much time to finish a performance poll for the whole network. ISPs need a rapid method to measure and verify some important QoS metrics of their own network and their partners' networks. We introduce a real-time, passive mode measurement method based on a high-speed traffic monitoring and tracing mechanism. This method makes it possible for ISPs to obtain QoS metrics for most network paths within a very short term at only one measuring point. The method is easy to deploy in the backbone and works well in Gbps networks. We also introduce several new QoS metrics and corresponding estimation algorithms, which may be more useful to ISPs than current end-to-end QoS metrics. The method is implemented in a tool called RPPM.",2009,0, 4618,Integrated Evaluation Model of Software Reliability Based on Multi-information,"The software reliability prediction model based on software quality measurement (program characteristics) and the software reliability growth model based on statistical characteristics (failure data) are the main quantitative evaluation and prediction methods of software reliability. The modern credibility theory based on Bayesian estimation was introduced into the multi-information integrated modeling of software reliability evaluation and prediction. By this means, the prediction result of software reliability based on the metrics of software quality and the evaluation result of software reliability growth models based on failure data are combined reasonably via the credibility factor. Thus the extensive information related to software reliability was utilized and integrated so as to make the evaluated result of software reliability more reasonable. According to the modeling means of variance decomposition and the optimal linear non-homogeneous credibility estimator, the mathematical expression of this integrated evaluation model of software reliability based on Bayesian estimation was described in detail, and the solution of the credibility factor was illustrated. The data experiment testified to the rationality and feasibility of this multi-information integrated model.",2009,0, 4619,Description and Selection Model Based on Constraint QoS for Web Service,"Along with the rapid growth of Web services, the research of non-functional properties of Web services (QoS) has become a hotspot. The work in this paper is based on an extended service model. It puts forward a metric policy of comprehensive QoS, and a matching algorithm including two aspects: non-QoS constraints and QoS constraints. The experimental results prove that the algorithm is reliable and sensitive to constraint conditions.",2009,0, 4620,Research on Testing-Based Software Credibility Measurement and Assessment,"Measuring and assessing software credibility by testing is one of the important approaches in trustworthy software research.
From the perspective of effective management and credibility analysis supporting the test process, this article describes a basic framework for its management and discusses the assessment techniques and methods for a credible test process and software product.",2009,0, 4621,Progress and Quality Modeling of Requirements Analysis Based on Chaos,"It is important and difficult for us to know the progress and quality of requirements analysis. We introduce chaos and software requirements complexity to the description of requirements decomposition, and obtain a method which can help us to evaluate the progress and quality. The model shows that the requirements decomposition procedure has its own regular pattern which we can describe in an equation and track in a trajectory. The requirements analysis process of a software system can be taken as normal if its trajectory coincides with the model. We may be able to predict the time we need to finish all requirements decomposition in advance based on the model. We apply the method in the requirements analysis of a home phone service management system, and the initial results show that the method is useful in the evaluation of requirements decomposition.",2009,0, 4622,Soft Measurement Modeling Based on Improved Simulated Annealing Neural Network for Sewage Treatment,"Considering the issues that the sewage treatment process is a complicated and nonlinear system, and that the key parameters of sewage treatment quality cannot be detected on-line, a soft measurement modeling method based on an improved simulated annealing neural network (ISANN) is presented in this paper. First, the simulated annealing algorithm with the best-reserve mechanism is introduced and organically combined with the Powell algorithm to form an improved simulated annealing mixed optimization algorithm, which replaces the gradient descent algorithm of the BP network for training the network weights. It can achieve higher accuracy and faster convergence speed. We construct the network structure. With the strong self-learning ability and faster convergence of ISANN, the soft measurement modeling method can truly detect and assess the quality of sewage treatment in real time by learning the sewage treatment parameter information acquired from the sensors. The experimental results show that this method is feasible and effective.",2009,0, 4623,Research on Fabric Image Registration Technique Based on Pattern Recognition Using Cross-Power Spectrum,"This paper designs a fabric image registration method after researching the texture of fabric. It can find the commonality area of two images quickly and accurately. Generally speaking, it is a good choice to solve image registration using the correlation of two images in random texture images. However, repeatedly performing the Fourier Transform on high-resolution images will degrade the real-time performance of the detection system. In order to enhance the real-time performance of the fabric inspection system, the paper designs an effective method under the premise of ensuring the matching effect. The method combines the strengths of the frequency domain and the spatial domain using a predictable commonality area in the process of template matching, so an algorithm with a favorable matching effect is put forward.",2009,0, 4624,Measurement of the Complexity of Variation Points in Software Product Lines,"Feature models are used in member product configuration in software product lines. A valid product configuration must satisfy two kinds of constraints: the multiplicity of each variation point and the dependencies among the variants in a product line.
The combined impact of the two kinds of constraints on product configuration should be well understood. In this paper we propose a measurement, called VariationRank, that combines the two kinds of constraints to assess the complexity of variation points in software product lines. Based on the measurement we can identify those variation points with the highest impact on product configurations in a product line. This information could be used as guidance in product configuration as well as for feature model optimization in software product lines. A case study is presented and discussed in this paper as well.",2009,0, 4625,The digital algorithm processors for the ATLAS Level-1 Calorimeter Trigger,"The ATLAS Level-1 Calorimeter Trigger identifies high-ET jets, electrons/photons and hadrons, and measures total and missing transverse energy in proton-proton collisions at the Large Hadron Collider. Two subsystems-the Jet/Energy-sum Processor (JEP) and the Cluster Processor (CP)-process data from every crossing, and report feature multiplicities and energy sums to the ATLAS Central Trigger Processor, which produces a Level-1 Accept decision. Locations and types of identified features are read out to the Level-2 Trigger as regions-of-interest, and quality-monitoring information is read out to the ATLAS data acquisition system. The JEP and CP subsystems share a great deal of common infrastructure, including a custom backplane, several common hardware modules, and readout hardware. Some of the common modules use FPGAs with selectable firmware configurations based on their location in the system. This approach saved substantial development effort and provided a uniform model for firmware and software development. We present an in-depth description of the two subsystems as manufactured and installed. We compare and contrast the JEP and CP systems, and discuss experiences during production, installation and commissioning. We also briefly present results of recent tests that suggest an interesting upgrade path for higher luminosity running at the LHC.",2009,0, 4626,The LHCb readout system and real-time event management,"The LHCb experiment is a hadronic precision experiment at the LHC accelerator aimed mainly at studying b-physics by profiting from the large b-anti-b production at the LHC. The challenge of high trigger efficiency has driven the choice of a readout architecture allowing the main event filtering to be performed by a software trigger with access to all detector information on a processing farm based on commercial multicore PCs. The readout architecture therefore features only a relatively relaxed hardware trigger with a fixed and short latency accepting events at 1 MHz out of a nominal proton collision rate of 30 MHz, and high bandwidth with event fragment assembly over gigabit Ethernet. A fast central system performs the entire synchronization, event labelling and control of the readout, as well as event management including destination control, dynamic load balancing of the readout network and the farm, and handling of special events for calibrations and luminosity measurements. The event filter farm processes the events in parallel and reduces the physics event rate to about 2 kHz; the accepted events are formatted and written to disk before transfer to the offline processing. A spy mechanism allows processing and reconstructing a fraction of the events for online quality checking.
In addition, a 5 Hz subset of the events is sent as an express stream to offline processing for checking calibrations and software before launching the full offline processing on the main event stream. In this paper, we will give an overview of the readout system, and describe the real-time event management and the experience with the system during the commissioning phase with cosmic rays and first LHC beams.",2009,0, 4627,The ALICE Data Quality Monitoring system,"ALICE is one of the four experiments installed at the CERN Large Hadron Collider (LHC), especially designed for the study of heavy-ion collisions. The online Data Quality Monitoring (DQM) is an important part of the data acquisition (DAQ) software. It involves the online gathering, the analysis by user-defined algorithms and the visualization of monitored data. This paper presents the final design, as well as the latest and upcoming features, of ALICE's specific DQM software called AMORE (Automatic Monitoring Environment). It describes the challenges we faced during its implementation, including the performance issues, and how we tested and handled them, in particular by using a scalable and robust publish-subscribe architecture. We also review the on-going and increasing adoption of this tool amongst the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the sub-detectors. The related packaging and release procedure needed by such a distributed framework is also described. We finally overview the wide range of usages people make of this framework, and we review our own experience, before and during the LHC start-up, when monitoring the data quality on both the sub-detector and the DAQ side in a real-world and challenging environment.",2009,0,5392 4628,The ALICE DAQ online databases,"ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The ALICE Data Acquisition system (DAQ) is made of a large number of distributed hardware and software components, which rely on several online databases: the configuration database, describing the counting room machines, some detector-specific electronics settings and the DAQ and Experiment Control System runtime parameters; the log database, centrally collecting reports from running processes; the experiment logbook, tracking the run statistics filled automatically and the operator entries; the online archive of constantly updated data quality monitoring reports; the file indexing services, including the status of transient files for permanent storage and the calibration results for offline export; the user guides (Wiki); test databases to check the interfaces with external components; reference data sets used to restore known configurations. With 35 GB of online data hosted on a MySQL server and organized in more than 500 relational tables for a total of 40 million rows, this information is populated and accessible through various frontends, including a C library for efficient repetitive access, Tcl/TK GUIs for configuration editors and a log browser, HTML/PHP pages for the logbook, and command line tools for scripting and expert debugging. Exhaustive hardware benchmarks have been conducted to select the appropriate database server architecture. Secure access from private and general networks was implemented. Ad-hoc monitoring and backup mechanisms have been designed and deployed.
We discuss the implementation of these complex databases and how the inhomogeneous requirements have been addressed. We also review the performance analysis outcome after more than one year in production and show results of data mining from this central information source.",2009,0, 4629,Quality Assurance Testing for Magnetization Quality Assessment of BLDC Motors used in Compressors,"Quality assurance testing of mass-produced electrical appliances is critical for the reputation of the manufacturer since defective units will have a negative impact on safety, reliability, efficiency, and performance of the end product. It has been observed at a brushless DC compressor motor manufacturing facility that improper magnetization of the rotor permanent magnet is one of the leading causes of motor defects. A new technique for post-manufacturing assessment of the magnetization quality of concentrated-winding brushless DC compressor motors for quality assurance is proposed in this paper. The new method evaluates the data acquired during the test runs performed after motor assembly to observe anomalies in the zero crossing pattern of the back-emf voltages for screening motor units with defective magnetization. An experimental study on healthy and defective 250 W brushless DC compressor motor units shows that the proposed technique provides sensitive detection of magnetization defects that existing tests were not capable of finding. The proposed algorithm does not require additional hardware for implementation since it can be added to the existing test run inverter of the quality assurance system as a software algorithm.",2009,0, 4630,Software Reliability Prediction and Analysis Using Queueing Models with Multiple Change-Points,"Over the past three decades, many software reliability growth models (SRGMs) have been proposed, aimed at predicting and estimating software reliability. One common assumption of these conventional SRGMs is that detected faults will be removed immediately. In reality, this assumption may not be reasonable and may not always hold. Developers need time to identify the root causes of detected faults and then fix them. Besides, during debugging the fault correction rate may not be a constant and could change at certain points as time proceeds. Consequently, in this paper, we explore and study how to apply queueing models to investigate the fault correction process during software development. We propose an extended infinite server queueing model with multiple change-points to predict and assess software reliability. Experimental results based on real failure data show that the proposed model can depict the change of fault correction rates and predict the behavior of software development more accurately than traditional SRGMs.",2009,0, 4631,A Trust-Based Detecting Mechanism against Profile Injection Attacks in Recommender Systems,"Recommender systems could be applied in grid environments to help grid users select more suitable services by making high quality personalized recommendations. Also, recommendation could be employed in the virtual machine management platform to measure the performance and credibility of each virtual machine. However, such systems have been shown to be vulnerable to profile injection attacks (shilling attacks), attacks that involve the insertion of malicious profiles into the ratings database for the purpose of altering the system's recommendation behavior.
In this paper we introduce and evaluate a new trust-based detection algorithm for protecting recommender systems against profile injection attacks. Moreover, we discuss the combination of our trust-based metrics with previous metrics such as RDMA at the profile level and item level, respectively. In the end, we show experimentally that these metrics can lead to improved detection accuracy.",2009,0, 4632,Architectural Availability Analysis of Software Decomposition for Local Recovery,"Non-functional properties, such as timeliness, resource consumption and reliability are of crucial importance for today's software systems. Therefore, it is important to know the non-functional behavior before the system is put into operation. Preferably, such properties should be analyzed at design time, at an architectural level, so that changes can be made early in the system development process. In this paper, we present an efficient and easy-to-use methodology to predict - at design time - the availability of systems that support local recovery. Our analysis techniques work at the architectural level, where the software designer simply inputs the software modules' decomposition annotated with failure and repair rates. From this decomposition we automatically generate an analytical model (i.e. a continuous-time Markov chain), from which various performance and dependability measures are then computed, in a way that is completely transparent to the user. A crucial step is the use of intermediate models in the Input/Output Interactive Markov Chain formalism, which makes our techniques efficient, mathematically rigorous, and easy to adapt. In particular, we use aggressive minimization techniques to keep the size of the generated state spaces small. We have applied our methodology on a realistic case study, namely the MPlayer open source software. We have investigated four different decomposition alternatives and compared our analytical results with the measured availability on a running MPlayer. We found that our predicted results closely match the measured ones.",2009,0, 4633,Looking for Product Line Feature Models Defects: Towards a Systematic Classification of Verification Criteria,"Product line models (PLM) are important artifacts in product line engineering. Due to their size and complexity, it is difficult to detect defects in PLMs. The challenge is however important: any error in a PLM will inevitably impact configuration, generating issues such as incorrect product models, inconsistent architectures, poor reuse, difficulty in customizing products, etc. Surveys on feature-based PLM verification approaches show that there are many verification criteria, that these criteria are defined in different ways, and that different ways of working are proposed to look for defects. The goal of this paper is to systematize PLM verification. Based on our literature review, we propose a list of 23 verification criteria that we think cover those available in the literature.",2009,0, 4634,Evaluating the Completeness and Granularity of Functional Requirements Specifications: A Controlled Experiment,"Requirements engineering (RE) is a relatively young discipline, and still many advances have been achieved during the last decades. In particular, numerous RE methods have been proposed. However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper is related to the evaluation of the quality of functional requirements specifications, focusing on completeness and granularity.
To do this, several concepts related to conceptual model quality are presented; these concepts lead to the definition of metrics that allow measuring certain aspects of requirements model quality (e.g., degree of completeness of functional encapsulations with respect to a reference model, number of functional fragmentation errors). A laboratory experiment with master students has been carried out, in order to compare (using the proposed metrics) two RE approaches; namely, Use Cases and Communication Analysis. Results indicate greater quality (in terms of completeness and granularity) when communication analysis guidelines are followed. Moreover, interesting issues arise from experimental results, which invite further research.",2009,0, 4635,Double Redundant Fault-Tolerance Service Routing Model in ESB,"With the development of the Service Oriented Architecture (SOA), the Enterprise Service Bus (ESB) is becoming more and more important in the management of mass services. Its main function is service routing, which focuses on the delivery of messages among different services. At present, some routing patterns have been implemented to accomplish the messaging, but they are all static configuration service routing. Once a service fails in its operation, the whole service system will not be able to detect such a fault, so the whole business function will eventually fail as well. In order to solve this problem, we present a double redundant fault tolerant service routing model. This model has its own double redundant fault tolerant mechanism and algorithm to guarantee that if the original service fails, another replica service that has the same function will automatically return the response message instead. The service requester will receive the response message transparently without taking care where it comes from. Besides, the state of the failed service will be recorded for service management. At the end of this article, we evaluate the performance of the double redundant fault tolerant service routing model. Our analysis shows that, by introducing double redundant fault tolerance, we can noticeably improve the fault-tolerance capability of the service routing. It solves the limitations of existing static service routing and ensures the reliability of messaging in SOA.",2009,0, 4636,Reasoning on Non-Functional Requirements for Integrated Services,"We focus on non-functional requirements for applications offered by service integrators; i.e., software that delivers service by composing services, independently developed, managed, and evolved by other service providers. In particular, we focus on requirements expressed in a probabilistic manner, such as reliability or performance. We illustrate a unified approach-a method and its support tools-which facilitates reasoning about requirements satisfaction as the system evolves dynamically. The approach relies on run-time monitoring and uses the data collected by the probes to detect if the behavior of the open environment in which the application is situated, such as the usage profile or the external services currently bound to the application, deviates from the initially stated assumptions and whether this can lead to a failure of the application.
This is achieved by keeping a model of the application alive at run time, automatically updating its parameters to reflect changes in the external world, and using the model's predictive capabilities to anticipate future failures, thus enabling suitable recovery plans.",2009,0, 4637,SQUAD: Software Quality Understanding through the Analysis of Design,"Object-oriented software quality models usually use metrics of classes and of relationships among classes to assess the quality of systems. However, software quality does not depend on classes solely: it also depends on the organization of classes, i.e., their design. Our thesis is that it is possible to understand how the design of systems affects their quality and to build quality models that take into account various design styles, in particular design patterns, antipatterns, and code smells. To demonstrate our thesis, we first analyze how playing roles in design patterns, antipatterns, and code smells impacts quality; specifically change-proneness, fault-proneness, and maintenance costs. Second, we build quality models and apply and validate them on open-source and industrial object-oriented systems to show that they allow a more precise evaluation of the quality than traditional models, like Bansiya et al.'s QMOOD.",2009,0, 4638,A New Metric for Automatic Program Partitioning,"Software reverse engineering techniques are most often applied to reconstruct the architecture of a program with respect to quality constraints, or non-functional requirements such as maintainability or reusability. However, there has been no effort to assess the architecture of a program from the performance viewpoint and reconstruct this architecture in order to improve the program performance. In this paper, a novel Actor-Oriented Program reverse engineering approach is proposed to reconstruct an object-oriented program architecture based on a high performance model such as the actor model. Since actors can communicate with each other asynchronously, reconstructing the program architecture based on this model may result in the concurrent execution of the program invocations and consequently increase the overall performance of the program when enough processors are available.",2009,0, 4639,Performance Analysis of Datamining Algorithms for Software Quality Prediction,"Data mining techniques are applied in building software fault prediction models for improving the software quality. Early identification of high-risk modules can assist in directing quality enhancement efforts to modules that are likely to have a high number of faults. Classification tree models are simple and effective as software quality prediction models, and timely predictions of defects from such models can be used to achieve high software reliability. In this paper, the performance of five data mining classifier algorithms named J48, CART, Random Forest, BFTree and Naive Bayesian classifier (NBC) is evaluated based on a 10-fold cross-validation test. Experimental results using the NASA KC2 software metrics dataset demonstrate that decision trees are very useful for fault prediction and that, based on the rules generated, only some measurement attributes in the given set of metrics play an important role in establishing the final rules and improving software quality by giving correct predictions. Thus we can suggest that these attributes are sufficient for the future classification process.
To evaluate the performance of the above algorithms, Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Receiver Operating Characteristic (ROC) and Accuracy measures are applied.",2009,0, 4640,A Novel Fast Multiple Reference Frames Selection Method for H.264/AVC,"Multiple reference frames motion estimation is one of the new features introduced in H.264/AVC. Although this new feature improves video coding performance, it is extremely computationally intensive. A full search scheme is adopted in the reference software JM, and the increased computation load is in proportion to the number of searched reference frames. In this paper, we propose a novel fast multiple reference frames selection method. It can decrease the number of reference frames for motion estimation and reduce the complexity of coding adaptively according to the features of video sequences. The experimental results show that our algorithm achieves a 42% coding time saving on average with negligible quality loss.",2009,0, 4641,A platform for testing and comparing of real-time decision-support algorithms in mobile environments,"The unavailability of a flexible system for real-time testing of decision-support algorithms in a pre-hospital clinical setting has limited their use. In this study, we describe a plug-and-play platform for real-time testing of decision-support algorithms during the transport of trauma casualties en route to a hospital. The platform integrates a standard-of-care vital-signs monitor, which collects numeric and waveform physiologic time-series data, with a rugged ultramobile personal computer. The computer time-stamps and stores data received from the monitor, and performs analysis on the collected data in real-time. Prior to field deployment, we assessed the performance of each component of the platform by using an emulator to simulate a number of possible fault scenarios that could be encountered in the field. Initial testing with the emulator allowed us to identify and fix software inconsistencies and showed that the platform can support a quick development cycle for real-time decision-support algorithms.",2009,0, 4642,A new validity measure for a correlation-based fuzzy c-means clustering algorithm,"One of the major challenges in unsupervised clustering is the lack of consistent means for assessing the quality of clusters. In this paper, we evaluate several validity measures in fuzzy clustering and develop a new measure for a fuzzy c-means algorithm which uses a Pearson correlation in its distance metrics. The measure is designed with the within-cluster sum of squares, and makes use of fuzzy memberships. Compared to the existing fuzzy partition coefficient and a fuzzy validity index, this new measure performs consistently across six microarray datasets. The newly developed measure could be used to assess the validity of fuzzy clusters produced by a correlation-based fuzzy c-means clustering algorithm.",2009,0, 4643,"A wearable, low-power, health-monitoring instrumentation based on a programmable system-on-chip™","Improvement in quality and efficiency of health and medicine, at home and in hospital, has become of paramount importance. The solution of this problem would require the continuous monitoring of several key patient parameters, including the assessment of autonomic nervous system (ANS) activity using non-invasive sensors, providing information for emotional, sensorial, cognitive and physiological analysis of the patient.
Recent advances in embedded systems, microelectronics, sensors and wireless networking enable the design of wearable systems capable of such advanced health monitoring. The subject of this article is an ambulatory system comprising a small wrist device connected to several sensors for the detection of autonomic nervous system activity. It affords monitoring of skin resistance, skin temperature and heart activity. It is also capable of recording the data on removable media or sending it to a computer via wireless communication. The wrist device is based on a programmable system-on-chip (PSoC™) from Cypress: PSoCs are mixed-signal arrays, with dynamic, configurable digital and analog blocks and an 8-bit microcontroller unit (MCU) core on a single chip. In this paper we present first of all the hardware and software architecture of the device, and then results obtained from initial experiments.",2009,0, 4644,SleepMinder: An innovative contact-free device for the estimation of the apnoea-hypopnoea index,"We describe an innovative sensor technology (SleepMinder™) for contact-less and convenient measurement of sleep and breathing in the home. The system is based on a novel non-contact biomotion sensor and proprietary automated analysis software. The biomotion sensor uses an ultra low-power radio-frequency transceiver to sense the movement and respiration of a subject. Proprietary software performs a variety of signal analysis tasks including respiration analysis, sleep quality measurement and sleep apnea assessment. This paper measures the performance of SleepMinder as a device for the monitoring of sleep-disordered breathing (SDB) and the provision of an estimate of the apnoea-hypopnoea index (AHI). The SleepMinder was tested against expert manually scored PSG data of patients gathered in an accredited sleep laboratory. The comparison of SleepMinder to this gold standard was performed across overnight recordings of 129 subjects with suspected SDB. The dataset had a wide demographic profile with the age ranging between 20 and 81 years. Body weight included subjects with normal weight through to the very obese (Body Mass Index: 21-44 kg/m2). SDB severity ranged from subjects free of SDB to those with severe SDB (AHI: 0.8-96 events/hour). SleepMinder's AHI estimation has a correlation of 91% and can detect clinically significant SDB (AHI>15) with a sensitivity of 89% and a specificity of 92%.",2009,0, 4645,A hybrid platform based on EOG and EEG signals to restore communication for patients afflicted with progressive motor neuron diseases,"An efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. Often, such diseases leave the ocular movements preserved for a relatively long time. The aim of this study is to present a new approach for the hybrid system which is based on the recognition of electrooculogram (EOG) and electroencephalogram (EEG) measurements for efficient communication and control. As a first step we show that the EOG-based side of the system for communication and control is useful for patients. The EOG side of the system has been equipped with an interface including a speller to notify of messages. A comparison of the performance of the EOG-based system has been made with a BCI system that uses P300 waveforms.
As a next step, we plan to integrate the EOG and EEG sides. The final goal of the project is to realize a unique noninvasive device able to offer the patient the partial restoration of communication and control abilities with EOG and EEG signals.",2009,0, 4646,An Assessment of GPS-based precise point positioning of the low earth-orbiting satellite CHAMP,"Precise point positioning (PPP) with the international GNSS service (IGS) products, which consist of precise orbits and clock correction information, has been demonstrated by several investigators to achieve a centimeter-decimeter level positioning accuracy in real-time for land and aerial vehicular navigation. The purpose of this paper is to present one phase of a study conducted at the National Defense Academy (NDA) on performance evaluation when such a PPP technique is extended for orbit determination of low Earth orbiters (LEO) in post-flight processing or even in real-time. In the present study the satellite CHAMP was selected as an LEO since a high accuracy orbit is a prerequisite for that mission. A PPP filter with a simple white noise plant model was designed, and a basic PPP software which was originally developed at NDA for land and aerial vehicles was intensively modified to process GPS data taken by the dual-frequency receiver ""BlackJack"" onboard the CHAMP satellite. To generate a reference orbit of CHAMP, we proposed a hybrid orbit determination (OD) method consisting of a classical least-squares and Hill-Clohessy-Wiltshire (HCW) solution, where a least-squares point positioning solution of 3D coordinates from pseudoranges of CHAMP was employed as the pseudo-observations. It was shown that our PPP filter produced a sub-meter overall positioning accuracy for the transverse and normal components and 90 cm for the radial component over a 3-hour data arc, and in particular a few decimeters for each component over the last one-hour segment of good-quality data (no data gap), implying the feasibility of precise real-time on-orbit satellite navigation using only a single, dual-frequency GPS receiver, along with the prediction portion of IGS rapid or ultra-rapid products.",2009,0, 4647,Use of threat image projection (TIP) to enhance security performance,"Threat Image Projection (TIP) is a software system that is used at airports to project images of threat items amongst the passenger baggage being screened by X-ray. The use of TIP is becoming more widespread and is increasingly being included as part of security regulation. This is due to its purported benefits of improved attention and vigilance, and increased exposure to threat items that are linked to improvements in threat detection performance. Further, the data collected by the TIP system can be used to assess individual performance, provide feedback to screeners, and tailor training to specific performance weaknesses, which can generate further performance improvements. However, TIP will only be successful in enhancing security performance if it is used and managed effectively. In this paper the key areas of effective TIP use and management that enable security performance to be enhanced are highlighted. These include the optimisation of TIP settings, such as the TIP to bag ratio, and image library management. Appropriate setting of these components can lead to improved performance as a result of increasing exposure to a suitable range of threat images.
The key elements of TIP training are highlighted, including the importance of communicating TIP-related information and the role of the supervisor in ensuring TIP is used appropriately. Finally, the use of TIP data is examined, including the effective use of TIP for performance assessment and screener feedback and in defining training. The provision of feedback regarding TIP scores has been shown to enhance performance in excess of that achieved by using TIP in isolation. To date, the vast majority of TIP research has been conducted in relation to the screening of carry-on baggage. In the final part of this presentation the use of TIP to enhance performance in other areas such as Hold Baggage Screening (HBS) and Cargo is considered. HBS TIP is associated with different challenges due to its alternative method of operation (to present complete images of a bag and threat item), which imposes demands on the operational set-up and the construction of the image library. The use of TIP in Cargo is associated with a different set of challenges as a result of the diverse nature of items scanned and the screening environment. However, in both these domains, the use of TIP has been associated with the realisation of benefits in line with those achieved for carry-on baggage screening. Through understanding differences in the context in which TIP is used it is possible to understand the differing requirements for its use and management that will enable the benefits of TIP to be realised, enhancing security performance across locations and screening contexts.",2009,0, 4648,Applying Code Coverage Approach to an Infinite Failure Software Reliability Model,"An approach to software reliability modeling based on code coverage is used to derive the Infinite Failure software reliability Model Based on Code Coverage - IFMBC. Our aim was to verify the soundness of the approach under different assumptions. The IFMBC was assessed with test data from a real application, making use of the following structural testing criteria: all-nodes, all-edges, and potential-uses - a data-flow based family of criteria. The IFMBC was shown to be as good as the Geometric Model - GEO, found to be the best traditional time-based model that fits the data. Results from the analysis also show that the IFMBC is as good as the BMBC - Binomial software reliability Model Based on Coverage - a model previously derived using the code coverage approach, indicating it to be effective under different modeling assumptions.",2009,0, 4649,Proactive estimation of the video streaming reception quality in WiFi networks using a cross-layer technique,"The quality of service (QoS) in a Wireless Fidelity (WiFi) network cannot be guaranteed due to intermittent disconnections, interference and so forth, because they considerably reduce the QoS necessary for multimedia applications. For that reason, software such as the one we present in this paper is essential; it evaluates whether the conditions are appropriate to receive the streaming by measuring congestion, radio coverage, throughput, lost and received packet rates, and so on. Our software makes a proactive estimation using a bottom-up cross-layer technique, measuring physical level parameters and Round Trip Time (RTT) and gathering traffic statistics for the overall WiFi network and for each ongoing Real Time Streaming Protocol/Real Time Protocol (RTSP/RTP) streaming session.
Experimental results show a high percentage of correct decisions by the software in detecting wireless channel degradation and its recovery.",2009,0, 4650,TPSS: A two-phase sleep scheduling protocol for object tracking in sensor networks,"Lifetime maximization is an important factor in the design of sensor networks for object tracking applications. Some techniques of node scheduling have been proposed to reduce energy consumption. By exploiting the redundancy of network coverage, they turn off unnecessary nodes, or make nodes work in turn, which requires high node density or sacrifices tracking quality. We present TPSS, a two-phase sleep scheduling protocol, which divides the whole tracking procedure into two phases and assigns different scheduling policies at each phase. To balance energy savings and tracking quality, we further optimize the node scheduling protocol in terms of network coverage and node state prediction. We evaluate our method in an indoor environment with 36 sensor nodes. Compared with the existing methods, all the experimental results show that our method has high performance in terms of energy savings and tracking quality.",2009,0, 4651,Throughput and delay analysis of the IEEE 802.15.3 CSMA/CA mechanism considering the suspending events in unsaturated traffic conditions,"Unlike in IEEE 802.11, the CSMA/CA traffic conditions in IEEE 802.15.3 are typically unsaturated. This paper presents an extended analytical model based on Bianchi's model in IEEE 802.11, by taking into account the device suspending events, unsaturated traffic conditions, as well as the effects of error-prone channels. Based on this model we re-derive a closed form expression of the average service time. The accuracy of the model is validated through extensive simulations. The analysis is also instructive for IEEE 802.11 networks under limited load.",2009,0, 4652,Mutation Sensitivity Testing,"Computational scientists often encounter code-testing challenges not typically faced by software engineers who develop testing techniques. Mutation sensitivity testing addresses these challenges, showing that a few well-designed tests can detect many code faults and that reducing error tolerances is often more effective than running additional tests.",2009,0, 4653,Engineering the Software for Understanding Climate Change,"Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques.",2009,0, 4654,Measuring the quality of interfaces using source code entropy,"Global enterprises face an increasingly high complexity of software systems. Although size and complexity are two different aspects of a software system, traditionally, various size metrics have been established to indicate their complexity. In fact, many developed software metrics correlate with the number of lines of code. Moreover, a combination of multiple metrics collected on bottom layers into one comprehensible and meaningful indicator for an entire system is not a trivial task. This paper proposes a novel interpretation of an entropy-based metric to assess the design of a software system in terms of interface quality and understandability.
The proposed metric is independent of the system size and delivers a single value, eliminating the unnecessary aggregation step. Further, an industrial case study has been conducted to illustrate the usefulness of this metric.",2009,0, 4655,Performance evaluation of service-oriented architecture through stochastic Petri nets,"The service-oriented architecture (SOA) has become a unifying technical architecture that can be embodied with Web service technologies, in which the Web service is thought of as a fundamental building block. This paper proposes a simulation modeling approach based on stochastic Petri nets to estimate the performance of SOA applications. Using the proposed model it is possible to predict resource consumption and service level degradation in scenarios with different compositions and workloads. A case study was conducted to validate the approach and to compare the results against an existing analytical modeling approach.",2009,0, 4656,Using designer's effort for user interface evaluation,"Designing Human Computer Interfaces is one of the more important and difficult design tasks. The tools for verifying the quality of the interface are frequently expensive or provide feedback too long after the design of the interface to be meaningful. To improve interface usability, designers need a verification tool providing immediate feedback at a low cost. Using an effort-based measure of usability, it is possible for a designer to estimate the effort a subject might expend to complete a specific task. In this paper, we develop the notion of designer's effort for evaluating interface usability for new designs and Commercial-Off-The-Shelf software. Designer's effort provides a technique to evaluate a human interface before completing the development of the software and provides feedback from usability tests conducted using the effort-based evaluation technique.",2009,0, 4657,Application of a seeded hybrid genetic algorithm for user interface design,"Studies have established that computer user interface (UI) design is a primary contributor to people's experiences with modern technology; however, current UI development remains more art than quantifiable science. In this paper, we study the use of search algorithms to predict optimal display layouts early in system design. This has the potential to greatly reduce the cost and improve the quality of UI development. Specifically, we demonstrate a hybrid genetic algorithm and pattern search optimization process that makes use of human performance modeling to quantify known design principles. We show how this approach can be tailored by capturing contextual factors in order to properly seed and tune the genetic algorithm. Finally, we demonstrate the ability of this process to discover superior layouts as compared to manual qualitative methods.",2009,0, 4658,Discovering the best web service: A neural network-based solution,"Differentiating between Web services that share similar functionalities is becoming a major challenge in the discovery of Web services. In this paper we propose a framework for enabling the efficient discovery of Web services using artificial neural networks (ANNs), best known for their generalization capabilities. The core of this framework is applying a novel neural network model to Web services to determine suitable Web services based on the notion of the quality of Web service (QWS). The main concept of QWS is to assess a Web service's behaviour and ability to deliver the requested functionality.
Through the aggregation of QWS for Web services, the neural network is capable of identifying those services that belong to a variety of class objects. The overall performance of the proposed method shows a 95% success rate for discovering Web services of interest. To test the robustness and effectiveness of the neural network algorithm, some of the QWS features were excluded from the training set, and the results showed a significant impact on the overall performance of the system. Hence, discovering Web services through a wide selection of quality attributes can be considerably influenced by the selection of QWS features used to provide an overall assessment of Web services.",2009,0, 4659,Automation component aspects for efficient unit testing,"Automation systems software must provide sufficient diagnosis information for testing to enable early defect detection and quality measurement. However, in many automation systems the aspects of automation, testing, and diagnosis are intertwined in the code. This makes the code harder to read, modify, and test. In this paper we introduce the design of a test-driven automation (TDA) component with separate aspects for automation, diagnosis, and testing to improve testability and test efficiency. We illustrate with a prototype how automation component aspects allow flexible configuration of a ""system under test"" for test automation. A major result of the pilot application is that the TDA concept was found usable and useful for improving testing efficiency.",2009,0, 4660,A novel error resilient joint source-channel coding scheme for image transmission over error prone channels,"In this paper, we proposed a double-level error resilient joint source-channel coding scheme for image transmission. Characteristically, we inserted one coordinative component named error resilient entropy coding (EREC) between source coding and channel coding to achieve additional error resilience capability. Based on the novel architecture, we proved that, in terms of computational complexity, coding redundancy and, furthermore, the decoding performance of image transmission, our scheme outperformed both the separate source and channel coding scheme and the joint source-channel coding scheme with synchronization words.",2009,0, 4661,On line sensor planning for tracking in camera networks,"Sensor planning chiefly applies to optimizing surveillance tasks, such as persistent tracking, by designing and utilizing camera placement strategies. Rather than substituting new optimized camera networks for those still in use, online sensor planning involves the design of vision algorithms that not only select cameras which yield the best results, but also improve the quality with which surveillance tasks are performed. In the previous literature on the coverage problem in sensor planning, targets (e.g., persons) are simply modeled as 2-D points. However, in actual application scenes, cameras are often heterogeneous, e.g., mounted at different heights and with different action radii, and the monitored objects have 3-D features (e.g., height). This paper presents a new sensor planning formulation addressing the efficiency enhancement of active visual tracking in camera networks that track and detect people traversing a region.
The numerical results show that this online sensor planning approach can improve the active tracking performance of the system.",2009,0, 4662,Control of dynamic voltage restorer (DVR) based on introducing a new approach for the on-line estimation of symmetrical components,The main duty of the dynamic voltage restorer (DVR) is the protection of sensitive loads against voltage disturbances especially voltage sags and swells. The DVR should possess the necessary response time in restoring the load bus voltage to avoid load interruptions. One of the most important factors that affects the performance of the DVR is the estimation algorithm used in the control unit of this compensator. This paper presents a new fast-response algorithm for the on-line estimation of symmetrical components under harmonic distortion conditions which provides adequate response time and accuracy for the dynamic voltage restorer. Furthermore the performance of this algorithm is compared with some well-known estimation techniques using MATLAB/SIMULINK. Application of the proposed estimation approach is realized by presenting a new control unit for compensation of different voltage sags and swells. Finally the simulation results obtained using PSCAD/EMTDC confirm the efficiency of the suggested control unit.,2009,0, 4663,Service Redundancy Strategies in Service-Oriented Architectures,"Redundancy can improve the availability of components in service-oriented systems. However, predicting and quantifying the effects of different redundancy strategies can be a complex task. In our work, we have taken an architecture-based approach to the modeling, predicting and monitoring of properties in distributed software systems. This paper proposes redundancy strategies for service-oriented systems and models services with their associated protocols. We derive formal models from these high-level descriptions that are embedded in our fault-tolerance testing framework. We describe the general framework of our approach, develop two service redundancy strategies and report on the preliminary evaluation results in measuring the performance and availability of such services. While the assumptions for the chosen case study are limiting, our evaluation is promising and encourages the extension of our testing framework to cater for more complex, hybrid, fault-tolerance strategies and architectural compositions.",2009,0, 4664,What Software Repositories Should Be Mined for Defect Predictors?,"The information about which modules in a software system's future version are potentially defective is a valuable aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. Constructing effective defect prediction models in an industrial setting involves the decision from which data source the defect predictors should be derived. In this paper we compare defect prediction results based on three different data sources of a large industrial software system to answer the question of which repositories to mine. In addition, we investigate whether a combination of different data sources improves the prediction results. The findings indicate that predictors derived from static code and design analysis provide slightly yet still significantly better results than predictors derived from version control, while a combination of all data sources showed no further improvement.",2009,0, 4665,Experiences and Results from Establishing a Software Cockpit at BMD Systemhaus,"What is the degree of completion of the current iteration? Are we on track?
How accurate are estimates compared to actual effort? Software cockpits (software project control centers) provide systematic support for answering such questions. Therefore, like a cockpit in an aircraft, software cockpits integrate and visualize accurate and timely information from various data sources for operative and strategic decision making. Important aspects are to track progress, to visualize team activities, and to provide transparency about the status of a project. In this paper we present our experiences from implementing and introducing a software cockpit in a leading Austrian software development company. The introduction has been supported by a small-scale process improvement and a GQM-based measurement initiative. Furthermore, the paper discusses challenges and potential pitfalls in establishing software cockpits and shares some lessons learned relevant for the practitioner confronted with introducing a software cockpit in an industrial setting.",2009,0, 4666,Synthetic Metrics for Evaluating Runtime Quality of Software Architectures with Complex Tradeoffs,"Runtime quality of software, such as availability and throughput, depends on architectural factors and execution environment characteristics (e.g. CPU speed, network latency). Although the specific properties of the underlying execution environment are unknown at design time, the software architecture can be used to assess the inherent impact of the adopted design decisions on runtime quality. However, the design decisions that arise in complex software architectures exhibit non-trivial interdependencies. This work introduces an approach that discovers the most influential factors by exploiting the correlation structure of the analyzed metrics via factor analysis of simulation data. A synthetic performance metric is constructed for each group of correlated metrics. The variability of these metrics summarizes the combined factor effects; hence, it is easier to assess the impact of the analyzed architecture decisions on the runtime quality. The approach is applied to experimental results obtained with the ACID Sim Tools framework for simulating transaction processing architectures.",2009,0, 4667,CQML Scheme: A Classification Scheme for Comprehensive Quality Model Landscapes,"Managing quality during the development, operation, and maintenance of software(-intensive) systems and services is a challenging task. Although many organizations need to define, control, measure, and improve various quality aspects of their development artifacts and processes, nearly no guidance is available on how to select, adapt, define, combine, use, and evolve quality models. Catalogs of models as well as selection and tailoring processes are widely missing. A first step towards better support for selecting and adapting quality models can be seen in a classification of existing quality models, especially with respect to their suitability for different purposes and contexts. This article presents so-called comprehensive quality model landscapes (CQMLs), which provide such a classification scheme and help to get an overview of existing models and their relationships. The article focuses on the description and justification of essential concepts needed to define quality models and landscapes.
In addition, the article describes typical usage scenarios, illustrates the concept with examples, and sketches open questions and future work.",2009,0, 4668,Fault Analysis in OSS Based on Program Slicing Metrics,"In this paper, we investigate the barcode OSS using two of Weiser's original slice-based metrics (tightness and overlap) as a basis, complemented with fault data extracted from multiple versions of the same system. We compared the values of the metrics in functions with at least one reported fault against those in fault-free functions to determine a) whether significant differences in the two metrics would be observed and b) whether those metrics might allow prediction of faulty functions. Results revealed some interesting traits of the tightness metric and, in particular, how low values of that metric seemed to indicate fault-prone functions. A significant difference was found between the tightness metric values for faulty functions when compared to fault-free functions, suggesting that tightness is the ""better"" of the two metrics in this sense. The overlap metric seemed less sensitive to differences between the two types of function.",2009,0, 4669,A Framework for the Balanced Optimization of Quality Assurance Strategies Focusing on Small and Medium Sized Enterprises,"The Quality Improvement Paradigm (QIP) offers a general framework for systematically improving an organization's development processes in a continuous manner. In the context of the LifeCycleQM project, the general QIP framework was concretized for a specific application area, namely, the balanced improvement of quality assurance (QA) strategies, i.e., a set of systematically applied QA activities. Especially with respect to small and medium-sized enterprises, the encompassing QIP framework presents limited guidance for easy and concrete application. Therefore, individual activities within the QIP framework were refined focusing on QA strategies, and proven measurement procedures such as the defect flow model as well as knowledge from the area of process improvement (e.g., with regard to individual QA procedures) were integrated for reuse in a practice-oriented manner. The feasibility of the developed approach was initially explored by its application at IBS AG, a medium-sized enterprise, where improvement goals were defined, a corresponding measurement program was established, improvement potential was identified, and concrete improvement suggestions for the QA strategy were derived, assessed, and implemented.",2009,0, 4670,A Multi-Tier Provenance Model for Global Climate Research,"Global climate researchers rely upon many forms of sensor data and analytical methods to help profile subtle changes in climate conditions. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program provides researchers with curated Value Added Products (VAPs) resulting from continuous instrumentation streams, data fusion, and analytical profiling. The ARM operational staff and software development teams (data producers) rely upon a number of techniques to ensure strict quality control (QC) and quality assurance (QA) standards are maintained. Climate researchers (data consumers) are highly interested in obtaining as much provenance evidence as possible to establish data trustworthiness. Currently, this evidence is not easily attainable or identifiable without significant effort to extract and piece together information from configuration files, log files, codes, or status information on the ARM website.
Our objective is to identify a provenance model that serves the needs of both the VAP producers and consumers. This paper shares our initial results - a comprehensive multi-tier provenance model. We describe how both ARM operations staff and the climate research community can greatly benefit from this approach to more effectively assess and quantify the historical data record.",2009,0, 4671,"RAMS Analysis of a Bio-inspired Traffic Data Sensor (""Smart Eye"")","The Austrian Research Centers have developed a compact low-power embedded vision system ""Smart Eye TDS"", capable of detecting, counting and measuring the velocity of passing vehicles simultaneously on up to four lanes of a motorway. The system is based on an entirely new bio-inspired wide dynamic ""silicon retina"" optical sensor. Each of the 128 × 128 pixels operates autonomously and delivers asynchronous events representing relative changes in illumination with low latency, high temporal resolution and independence of scene illumination. The resulting data rate is significantly lower and the reaction significantly faster than for conventional vision systems. In ADOSE, an FP7 project started in 2008 (see acknowledgment at the end of the paper), the sensor will be tested on-board for pre-crash warning and pedestrian protection systems. For safety-related control applications, it is evident that dependability issues are important. Therefore a RAMS analysis was performed with the goal of improving the quality of this new traffic data sensor technology, in particular with respect to reliability and availability. This paper describes the methods used and the results found by applying a RAMS analysis to this specific case of a vision system.",2009,0, 4672,Simulation of Target Range Measurement Process by Passive Optoelectronic Rangefinder,"Active rangefinders used for measuring longer distances to objects (targets), e.g. pulsed laser rangefinders, emit radiant energy, which conflicts with hygienic restrictions in various applications. When applied in the security and military area, they have a serious defect: the irradiation can be detected by the target. Passive optoelectronic rangefinders (POERF) can fully eliminate the defects mentioned above. The development started in 2001 at a department of the Military Academy in Brno (since 2004 the University of Defence) in cooperation with the company OPROX, a.s., Brno. The POERF development resulted in a special tool able to test the individual algorithms required for target range measurement: the Test POERF simulation program, which is continuously developed by the authors of this contribution and whose 3rd version has just been finalized. This contribution deals with the simulation software and includes short comments on the results of the simulation experiments.",2009,0, 4673,Instruction Precomputation for Fault Detection,"Fault tolerance (FT) is becoming increasingly important in computing systems. This work proposes and evaluates the instruction precomputation technique to detect hardware faults. Applications are profiled off-line, and the most frequent instruction instances with their operands and results are loaded into the precomputation table when executing. The precomputation-based error detection technique is used in conjunction with another method that duplicates all instructions and compares the results. In the precomputation-enabled version, whenever possible, the instruction compares its result with a precomputed value, rather than executing twice.
Another precomputation-based scheme does not execute the precomputed instructions at all, assuming that precomputation provides sufficient reliability. Precomputation improves the fault coverage (including permanent and some other faults) and performance of the duplication method. The proposed method is compared to an instruction memoization-based technique. The performance improvements of the precomputation- and memoization-based schemes are comparable, while precomputation has a better long-lasting fault coverage and is considerably cheaper.",2009,0, 4674,Safety criteria and development methodology for the safety critical railway software,"The main part of a system is operated by its software. Moreover, software is applied in safety-critical systems such as railways, airplanes, and nuclear power plants. Software can perform more varied and highly complex functions efficiently because it can be flexibly designed and implemented. However, this flexible design makes it difficult to predict software failures. We need to show that safety-critical railway software is developed in a way that ensures safety. This paper suggests safety criteria and a software development methodology to enhance the safety of safety-critical railway systems.",2009,0, 4675,A Multi-Agent Based Service Restoration in Distribution Network with Distributed Generations,"The amount of distributed generation (DG) integrations in the distribution network is increasing. In order to enhance service reliability, intentional islanding operation and coordinated operations of DG with supporting feeders can be considered to assist service restoration during faults. This paper presents a multi-agent based fault restoration system with and without DG assistance and compares its performance with a centralized processing scheme. Network simulation results indicate that for fault detection, isolation and service restoration, a multi-agent system could outperform the centralized processing system. Equipped with adequate synchronizing equipment in the distribution system, coordinated operations of DG and supporting feeders could minimize the un-served load during a fault.",2009,0, 4676,On-line Transformer Condition Monitoring through Diagnostics and Anomaly Detection,"This paper describes the end-to-end components of an on-line system for diagnostics and anomaly detection. The system provides condition monitoring capabilities for two in-service transmission transformers in the UK. These transformers are nearing the end of their design life, and it is hoped that intensive monitoring will enable them to stay in service for longer. The paper discusses the requirements on a system for interpreting data from the sensors installed on site, as well as describing the operation of the specific diagnostic and anomaly detection techniques employed. The system is deployed on a substation computer, collecting and interpreting site data on-line.",2009,0, 4677,An Alternative Pre-Processing Technique Applied to Power Quality Disturbances Identification,"This research presents the development of an alternative signal pre-processing technique based on fractal dimension calculation, entropy and signal energy that will be applied to the classification of disturbances occurring in an electrical power system (EPS). With respect to power quality disturbances, voltage sags, voltage swells, oscillatory transients and interruptions were considered for this application.
In order to test and validate the proposed technique, a representative database has been obtained through computer simulations of a real EPS using the ATP software. Through windowed data and the proposed pre-processing technique, the data were directed to an artificial neural network (ANN) architecture to classify the power quality events. The results show that, by combining the data pre-processing techniques and ANNs, a satisfactory performance of the whole proposed methodology can be obtained.",2009,0, 4678,ADEM: Automating deployment and management of application software on the Open Science Grid,"In grid environments, the deployment and management of application software presents a major practical challenge for end users. Performing these tasks manually is error-prone and not scalable to large grids. In this work, we propose an automation tool, ADEM, for grid application software deployment and management, and demonstrate and evaluate the tool on the Open Science Grid. ADEM uses Globus for basic grid services, and integrates the grid software installer Pacman. It supports both centralized ""prebuild"" and on-site ""dynamic-build"" approaches to software compilation, using the NMI Build and Test system to perform central prebuilds for specific target platforms. ADEM's parallel workflow automatically determines available grid sites and their platform ""signatures"", checks for and integrates dependencies, and performs software build, installation, and testing. ADEM's tracking log of build and installation activities is helpful for troubleshooting potential exceptions. Experimental results on the Open Science Grid show that ADEM is easy to use and more productive for users than manual operation.",2009,0, 4679,A benchmark for fault monitors in distributed systems,"Fault monitoring is one of the main activities of fault tolerant distributed systems. It is required to determine the suspected/crashed component and proactively take the recovery steps to keep the system alive. The main objective of the fault monitoring activity is to quickly and correctly identify faults. There are many techniques for fault monitoring, which have general and specific parameters that influence their performance. In this paper we identify the parameters that can help us classify fault monitoring techniques. We created a benchmark, ACI (adaptation, convergence, intelligence), and applied it to current techniques.",2009,0, 4680,Model based predictive Rate Controller for video streaming,"Due to the heterogeneous structure of networks, video streaming over lossy IP networks is a very important issue. The infrastructure of the Internet exhibits variable bandwidths, delays, congestion and time-varying packet losses. Video streaming applications should not only have good end-to-end transport performance but also a robust rate control, because of these variable attributes of the Internet. Furthermore, video streaming applications should have a multipath rate allocation mechanism. So, to provide video streaming service quality, some other components such as Bandwidth Estimation and Rate Controller should be taken into consideration. This paper gives an overview of the video streaming concept and bandwidth estimation tools and then introduces special architectures for bandwidth adaptive video streaming. A bandwidth estimation algorithm (pathChirp), optimized rate controllers and a multipath rate allocation algorithm are considered as an all-in-one solution for the video streaming problem.
This solution is directed and optimized by a decision center which is designed for obtaining the maximum quality at the receiving side.",2009,0, 4681,Investigation on the effectiveness of classifying the voltage sag using support vector machine,"Support vector machine (SVM), which is based on statistical learning theory, is a universal machine learning method. This paper proposes the application of SVM in classifying the causes of voltage sag in power distribution system. Voltage sag is among the major power quality disturbances that can cause substantial loss of product and also can attribute to malfunctions, instabilities and shorter lifetime of the load. Voltage sag can be caused by fault in power system, starting of induction motor and transformer energizing. An IEEE 30 bus system is modeled using the PSCAD software to generate the data for different type of voltage sag namely, caused by fault and starting of induction motor. Feature extraction using the wavelet transformation for the SVM input has been performed prior to the classification of the voltage sag cause. Two kernels functions are used namely radial basis function (RBF) and polynomial function. The minimum and maximum of the wavelet energy are used as the input to the SVM and analysis on the performance of these two kernels are presented. In this paper, it has been found that the polynomial kernel performed better as compared to the RBF in classifying the cause of voltage sag in power system.",2009,0, 4682,Dynamic Voltage Restorer based on stacked multicell converter,"Voltage sags are major power quality problems in distribution systems. To overcome these problems, different solutions have been proposed to compensate these sags and protect sensitive loads. As a solution, dynamic voltage restorers (DVRs) have been proposed to provide higher power quality. However, the quality of DVR output voltage, such as THD, is important. So, in this paper a configuration of DVR based on stacked multicell (SM) converter is proposed. The main properties of SM converter, which causes increase in the number of output voltage levels, are transformer-less operation and natural self-balancing of flying capacitors voltages. The proposed DVR consists of a set of series converter and shunt rectifier connected back-to-back. To guarantee the proper operation of shunt rectifier and maintaining the dc link voltage at the desired value, which results in suitable performance of DVR, the DVR is characterized by installing the series converter on the source-side and the shunt rectifier on the load-side. Also, the pre-sag compensation strategy and proposed synchronous reference frame (SRF) based voltage sag detection and reference series injected voltage determination methods are adopted as the control system. The proposed DVR is simulated using PSCAD/EMTDC software and simulation results are presented to validate its effectiveness.",2009,0, 4683,Speeding up motion estimation in modern video encoders using approximate metrics and SIMD processors,"In the past, efforts have been devoted to the amelioration of motion estimation algorithms to speed up motion compensated video coding. Now, efforts are increasingly being directed at exploiting the underlying architecture, in particular, single instruction, multiple data (SIMD) instruction sets. The resilience of motion estimation algorithms to various error metrics allows us to propose new high performance approximate metrics based on the sum of absolute differences (SAD). 
These new approximate metrics are amenable to efficient branch-free SIMD implementations which yield impressive speed-ups, up to 11:1 in some cases, while sacrificing image quality for less than 0.1 dB on average.",2009,0, 4684,Modeling of large-scale point model,"This paper proposes efficient data structures for point-based rendering and a real-time and high quality rendering algorithm for large-scale point models. As a preprocessing, large-scale point model is subdivided into multiple blocks and a hierarchical structure with minimal bounding box (MBB) property is built for each block. A 3D R-tree index is constructed by those MBB properties. A linear binary tree is created in every block data. During rendering, the model is deal with block by block. Fast view-frustum detection based on respective MBB and 3D R-tree index are first performed to determine invisible data blocks. For visibility detection, this project proposes three algorithms which are back point visibility detection, view point-dependent visibility detection and depth-dependent visibility detection. Visible blocks are then rendered by choosing appropriate rendering model and view point-dependent level-of-detail. For determined level-of-detail, corresponding point geometry is accessed from the 3D R-tree and the linear binary tree (K-D tree). Adaptive distance-dependent rendering is accomplished to select point geometry, yielding better performance without loss of quality. The experiment system is developed in C# program language and CSOpenGL 3D graphic library. The point-cloud data sampled from several great halls of Forbidden City are used in experiment. Experimental results show that our approach can not only design to allow easy access to point data stored in Oracle databases, but also realize real-time rendering for huge datasets in consumer PCs. Those are the grounds for the modeling and computer simulation with point-cloud data.",2009,0, 4685,Investigating the Effect of Refactoring on Software Testing Effort,"Refactoring, the process of improving the design of existing code by changing its internal structure without affecting its external behavior, tends to improve software quality by improving design, improving readability, and reducing bugs. There are many different refactoring methods, each having a particular purpose and effect. Consequently, the effect of refactorings on software quality attribute may vary. Software testing is an external software quality attributes that takes lots of time and effort to make sure that the software performs as intended. In this paper, we propose a classification of refactoring methods based on their measurable effect on software testing effort. This, in turn, helps the software developers decide which refactoring methods to apply in order to optimize a software system with regard to the testing effort.",2009,0, 4686,An Effective Path Selection Strategy for Mutation Testing,"Mutation testing has been identified as one of the most effective techniques, in detecting faults. However, because of the large number of test elements that it introduces, it is regarded as rather expensive for practical use. Therefore, there is a need for testing strategies that will alleviate this drawback by selecting effective test data that will make the technique more practical. Such a strategy based on path selection is reported in this paper. 
A significant influence on the efficiency associated with path selection strategies is the number of test paths that must be generated in order to achieve a specified level of coverage, and it is determined by the number of paths that are found to be feasible. Specifically, a path selection strategy is proposed that aims at reducing the effects of infeasible paths and conversely developing effective and efficient mutation based tests. The results obtained from applying the method to a set of program units are reported and analysed, presenting the flexibility, feasibility and practicality of the proposed approach.",2009,0, 4687,Prioritizing Use Cases to Aid Ordering of Scenarios,"Models are used as the basis for design and testing of software. The unified modeling language (UML) is used to capture and model the requirements of a software system. One of the major requirements of a development process is to detect defects as early as possible. Effective prioritization of scenarios helps in early detection of defects as well as maximizing the utilization of effort and resources. Use case diagrams are used to represent the requirements of a software system. In this paper, we propose using data captured from the primitives of the use case diagrams to aid in prioritization of scenarios generated from activity diagrams. Interactions among the primitives in the diagrams are used to guide prioritization. Customer prioritization of use cases is taken as one of the factors. Preliminary results on a case study indicate that the technique is effective in prioritization of test scenarios.",2009,0, 4688,An Automatic Compliance Checking Approach for Software Processes,"A lot of knowledge has been accumulated and documented in the form of process models, standards, best practices, etc. The knowledge tells what a high-quality software process should look like, in other words, which constraints should be fulfilled by a software process to assure high-quality software products. Compliance checking of a predefined process against proper constraints is helpful for quality assurance. Checking the compliance of an actually performed process against some constraints is also helpful for process improvement. Manual compliance checking is time-consuming and error-prone, especially for large and complex processes. In this paper, we record the process knowledge by means of process patterns. We provide an automatic compliance checking approach for process models against constraints defined in process patterns. Checking results indicate where and which constraints are violated, and therefore suggest the focus of future process improvement. We have applied this approach in three real projects and the experimental results are also presented.",2009,0, 4689,Hierarchical Understandability Assessment Model for Large-Scale OO System,"Understanding software, especially at large scale, is an important issue for software modification. In large-scale software systems, modularization helps in understanding them. However, even if a system has a well-modularized design, the modular design can deteriorate due to system changes over time. Therefore, it is necessary to assess and manage modularization from the viewpoint of understandability. There are, however, few studies on quality assessment models for understandability at the module level. In this paper, we propose a hierarchical model to assess the understandability of modularization in large-scale object-oriented software.
To assess understandability, we define several design properties, which capture the characteristics influencing on understandability, and design metrics based on the properties, which are used to quantitatively assess understandability. We validate our model and its usefulness by applying the model to an open-source software system.",2009,0, 4690,Improved Fusion Machine Based on T-norm Operators for Robot Perception,"Map reconstruction for autonomous mobile robots navigation needs to deal with uncertainties, imprecisions and even imperfections due to the limited sensors quality and knowledge acquisition. An improved fusion machine is proposed by replacing the classical conjunctive operator with T-norm operator in Dezert-Smarandache Theory (DSmT)framework for building grid map using noisy sonar measurements. An experiment using a Pioneer II mobile robot with 16 sonar detectors on board is done in a small indoor environment, and a 2D Map is built online with our self-developing software platform. A quantitative comparison of the results of our new method for map reconstruction with respect to the classical fusion machine is proposed. We show how the new approach developed in this work outperforms the classical one.",2009,0, 4691,A FPGA-Based Reconfigurable Software Architecture for Highly Dependable Systems,"Nowadays, systems-on-chip are commonly equipped with reconfigurable hardware. The use of hybrid architectures based on a mixture of general purpose processors and reconfigurable components has gained importance across the scientific community allowing a significant improvement of computational performance. Along with the demand for performance, the great sensitivity of reconfigurable hardware devices to physical defects lead to the request of highly dependable and fault tolerant systems. This paper proposes an FPGA-based reconfigurable software architecture able to abstract the underlying hardware platform giving an homogeneous view of it. The abstraction mechanism is used to implement fault tolerance mechanisms with a minimum impact on the system performance.",2009,0, 4692,Automated Model Checking of Stochastic Graph Transformation Systems,"Non-functional requirements like performance and reliability play a prominent role in distributed and dynamic systems. To measure and predict such properties using stochastic formal methods is crucial. At the same time, graph transformation systems are a suitable formalism to formally model distributed and dynamic systems. Already, to address these two issues, Stochastic Graph Transformation Systems (SGTS) have been introduced to model dynamic distributed systems. But most of the researches so far are concentrated on SGTS as a modeling means without considering the need for suitable analysis tools. In this paper, we present an approach to verify this kind of graph transformation systems using PRISM (a stochastic model checker). We translate the SGTS to the input language of PRISM and then PRISM performs the model checking and returns the results back to the designers.",2009,0, 4693,Real Time Optical Network Monitoring and Surveillance System,"Optical diagnosis, performance monitoring, and characterization are essential for achieving high performance and ensuring high quality of services (QoS) of any efficient and reliable optical access network. In this paper, we proposed a practical in-service transmission surveillance scheme for passive optical network (PON). 
The proposed scheme is designed and simulated using the OptiSystem CAD program, with a system sensitivity of -35 dBm. A real-time optical network monitoring software tool named smart access network testing, analyzing and database (SANTAD) is introduced to provide remote control, centralized monitoring, system analysis, and fault detection features for large-scale network infrastructure management and fault diagnosis. A 1625 nm light source is assigned to carry the troubleshooting signal for line status monitoring purposes in the live network system. Three acquisition parameters of the optical pulse (distance range, pulse width, and acquisition time) contribute to obtaining high accuracy in determining the exact failure location. The lab prototype of SANTAD was implemented in the proposed scheme for analyzing the network performance. The experimental results showed that SANTAD is able to detect any occurrence of a fault and locate the failure within 30 seconds. The main advantages of this work are to manage the network efficiently, reduce hands-on workload, minimize network downtime and rapidly restore failed services when problems are detected and diagnosed.",2009,0, 4694,Two Heads Better Than One: Metric+Active Learning and its Applications for IT Service Classification,"Large IT service providers track service requests and their execution through problem/change tickets. It is important to classify the tickets based on the problem/change description in order to understand service quality and to optimize service processes. However, two challenges exist in solving this classification problem: 1) ticket descriptions from different classes are of highly diverse characteristics, which invalidates most standard distance metrics; 2) it is very expensive to obtain high-quality labeled data. To address these challenges, we develop two seemingly independent methods: 1) discriminative neighborhood metric learning (DNML) and 2) active learning with median selection (ALMS), both of which are, however, based on the same core technique: iterated representative selection. A case study on a real IT service classification application is presented to demonstrate the effectiveness and efficiency of our proposed methods.",2009,0, 4695,A Complexity Reliability Model,"A model of software complexity and reliability is developed. It uses an evolutionary process to transition from one software system to the next, while complexity metrics are used to predict the reliability for each system. Our approach is experimental, using data pertinent to the NASA satellite systems application environment. We do not use sophisticated mathematical models that may have little relevance for the application environment. Rather, we tailor our approach to the characteristics of the software to yield important defect-related predictors of quality. Systems are tested until the software passes defect presence criteria and is released. Testing criteria are based on defect count, defect density, and testing efficiency predictions exceeding specified thresholds. In addition, another type of testing efficiency - a directed graph representing the complexity of the software and defects embedded in the code - is used to evaluate the efficiency of defect detection in NASA satellite system software.
Complexity metrics were found to be good predictors of defects and testing efficiency in this evolutionary process.",2009,0, 4696,Wavelet-Based Approach for Estimating Software Reliability,"Recently, wavelet methods have been frequently used for not only multimedia information processing but also time series analysis with high speed and accuracy requirements. In this paper we apply the wavelet-based techniques to estimate software intensity functions in non-homogeneous Poisson process based software reliability models. There are two advantages for use of the wavelet-based estimation; (i) it is a non-parametric estimation without specifying a parametric form of the intensity function under any software debugging scenario, (ii) the computational overhead arising in statistical estimation is rather small. Especially, we apply two kinds of data transforms, called Anscombe transform and Fisz transform, and four kinds of thresholding schemes for empirical wavelet coefficients, to non-parametric estimation of software intensity functions. In numerical validation test with real software fault data, we show that our wavelet-based estimation method can provide higher goodness-of-fit performances than the conventional maximum likelihood estimation and the least squares estimation in some cases, in spite of its non-parametric nature.",2009,0, 4697,Optimal Adaptive System Health Monitoring and Diagnosis for Resource Constrained Cyber-Physical Systems,"Cyber-physical systems (CPS) are complex net-centric hardware/software systems that can be applied to transportation, healthcare, defense, and other real-time applications. To meet the high reliability and safety requirements for these systems, proactive system health monitoring and management (HMM) techniques can be used. However, to be effective, it is necessary to ensure that the operation of the underlying HMM system does not adversely impact the normal operation of the system being monitored. In particular, it must be ensured that the operation of the HMM system will not lead to resource contentions that may prevent the system being monitored from timely completion of critical tasks. This paper presents an adaptive HMM system model that defines the fault diagnosis quality metrics and supports diagnosis requirement specifications. Based on the model, the sensor activation decision problem (SADP) is defined along with a steepest descent based heuristic algorithm to make the HMM configuration decisions that best satisfy the diagnosis quality requirements. Evaluation results show that the technique reduces the overall system resource consumption without adversely impacting the diagnosis capability of the HMM.",2009,0, 4698,Approximating Deployment Metrics to Predict Field Defects and Plan Corrective Maintenance Activities,"Corrective maintenance activities are a common cause of schedule delays in software development projects. Organizations frequently fail to properly plan the effort required to fix field defects. This study aims to provide relevant guidance to software development organizations on planning for these corrective maintenance activities by correlating metrics that are available prior to release with parameters of the selected software reliability model that has historically best fit the product's field defect data. Many organizations do not have adequate historical data, especially historical deployment and field usage information. 
The study identifies a set of metrics calculable from available data to approximate these missing predictor categories. Two key metrics estimable prior to release surfaced with potentially useful correlations: (1) the number of periods until the next release and (2) the peak deployment percentage. Finally, these metrics were used in a case study to plan corrective maintenance efforts on current development releases.",2009,0, 4699,Variance Analysis in Software Fault Prediction Models,"Software fault prediction models play an important role in software quality assurance. They identify software subsystems (modules, components, classes, or files) which are likely to contain faults. These subsystems, in turn, receive additional resources for verification and validation activities. Fault prediction models are binary classifiers typically developed using one of the supervised learning techniques from either a subset of the fault data from the current project or from a similar past project. In practice, it is critical that such models provide a reliable prediction performance on the data not used in training. Variance is an important reliability indicator of software fault prediction models. However, variance is often ignored or barely mentioned in many published studies. In this paper, through the analysis of twelve data sets from a public software engineering repository from the perspective of variance, we explore the following five questions regarding fault prediction models: (1) Do different types of classification performance measures exhibit different variance? (2) Does the size of the data set imply a more (or less) accurate prediction performance? (3) Does the size of the training subset impact the model's stability? (4) Do different classifiers consistently exhibit different performance in terms of the model's variance? (5) Are there differences between variance from 1000 runs and 10 runs of 10-fold cross validation experiments? Our results indicate that variance is a very important factor in understanding fault prediction models and we recommend the best practice for reporting variance in empirical software engineering studies.",2009,0, 4700,Putting It All Together: Using Socio-technical Networks to Predict Failures,"Studies have shown that social factors in development organizations have a dramatic effect on software quality. Separately, program dependency information has also been used successfully to predict which software components are more fault prone. Interestingly, the influence of these two phenomena has only been studied separately. Intuition and practical experience suggest, however, that task assignment (i.e. who worked on which components and how much) and dependency structure (which components have dependencies on others) together interact to influence the quality of the resulting software. We study the influence of combined socio-technical software networks on the fault-proneness of individual software components within a system. The network properties of a software component in this combined network are able to predict if an entity is failure prone with greater accuracy than prior methods which use dependency or contribution information in isolation. We evaluate our approach in different settings by using it on Windows Vista and across six releases of the Eclipse development environment, including using models built from one release to predict failure prone components in the next release. We compare this to previous work.
In every case, our method performs as well or better and is able to more accurately identify those software components that have more post-release failures, with precision and recall rates as high as 85%.",2009,0, 4701,Fault Tree Analysis of Software-Controlled Component Systems Based on Second-Order Probabilities,"Software is still mostly regarded as a black box in the development process, and its safety-related quality is ensured primarily by process measures. For systems whose lion's share of service is delivered by (embedded) software, process-centred methods are seen to be no longer sufficient. Recent safety norms (for example, ISO 26262) thus prescribe the use of safety models for both hardware and software. However, failure rates or probabilities for software are difficult to justify. Only if developers take good design decisions from the outset will they achieve safety goals efficiently. To support safety-oriented navigation of the design space and to bridge the existing gap between qualitative analyses for software and quantitative ones for hardware, we propose a fault-tree-based approach to the safety analysis of software-controlled systems. Assigning intervals instead of fixed values to events and using Monte-Carlo sampling, probability mass functions of failure probabilities are derived. Further analysis of these PMFs leads to estimates of system quality that enable safety managers to make an optimal choice between design alternatives and to target cost-efficient solutions in every phase of the design process.",2009,0, 4702,Metrics for Evaluating Coupling and Service Granularity in Service Oriented Architecture,"Service oriented architecture (SOA) is becoming an increasingly popular architectural style for many organizations due to the promised agility and flexibility benefits. Although the concept of SOA has been described in research and industry literature, there are currently few SOA metrics designed to measure the appropriateness of service granularity and service coupling between services and clients. This paper defines metrics centered around the service design principles of loose coupling and well-chosen granularity. The metrics are based on information-theoretic principles, and are used to predict the quality of the final software product. The usefulness of the metrics is illustrated through a case study.",2009,0, 4703,Wood Nondestructive Test Based on Artificial Neural Network,"It is important to detect defects in wood, since they reduce its performance. Data and signal processing technologies provide researchers with more ideas and methods for solving damage identification problems. This article explores wavelet analysis and artificial neural networks for the non-destructive testing of wood defects, and builds an artificial neural network model for wood non-destructive testing technology. After wavelet packet decomposition, the energy levels of the different frequency bands are extracted as characteristic features of the signal and used as input samples for neural network training and learning. The trained BP network model can automatically recognize defects at different locations, with a recognition rate of more than 90% for defects in the middle and over 80% for defects on the left and right sides.",2009,0, 4704,A Novel Method for the Fingerprint Image Quality Evaluation,"The performance of an automatic fingerprint identification system relies heavily on the quality of the captured fingerprint images.
A novel method for fingerprint image quality analysis is presented, which overcomes the shortcoming of most existing methods, namely treating the correlation of each quality feature as linear and paying no attention to the clarity of the local texture. In this paper, ten features are extracted from the fingerprint image and then a Fuzzy Relation Classifier is trained to classify the fingerprint images; it combines unsupervised clustering and supervised classification and cares more about revealing the data structure than other classifiers. Experimental results show that the proposed method has a good performance in evaluating the quality of the fingerprint images.",2009,0, 4705,Evaluation of Text Clustering Based on Iterative Classification,"Text clustering is a useful and inexpensive way to organize vast text repositories into meaningful topic categories. Although text clustering can be seen as an alternative to supervised text categorization, the question remains of how to determine if the resulting clusters are of sufficient quality in a real-life application. However, it is difficult to evaluate a given clustering of documents. Furthermore, the existing quality measures rely on the labor standard, which is difficult and time-consuming. The need for fair methods that can assess the validity of clustering results is becoming more and more critical. In this paper, we propose and experiment with an innovative evaluation measure that allows one to effectively and correctly assess the clustering results.",2009,0, 4706,Application of Entire Synchronization in Frequency Measurement for SAW Sensors,"High-precision frequency measurement is a key technology for SAW gas sensors, because the output of a SAW CO gas sensor is a frequency. This paper takes the SAW CO gas sensor as an example. Based on an analysis of current frequency measurement methods, the principle of high-quality frequency measurement for SAW CO gas sensors is put forward with the aid of the entire synchronization mechanism. The system structure for frequency measurement and the electronic elements are also presented.",2009,0, 4707,Software Reliability Prediction Based on Discrete Wavelet Transform and Neural Network,"Effective prediction of software reliability is one of the active parts of software engineering. This paper proposes a novel approach based on the wavelet transform and a neural network (NN). Using this approach, the time series of software faults can be decomposed into four components of information, which are then predicted by the NN respectively. The experimental results show that the performance of the novel software reliability prediction approach is satisfactory.",2009,0, 4708,Exploring Software Quality Classification with a Wrapper-Based Feature Ranking Technique,"Feature selection is a process of selecting a subset of relevant features for building learning models. It is an important activity for data preprocessing used in software quality modeling and other data mining problems. Feature selection algorithms can be divided into two categories, feature ranking and feature subset selection. Feature ranking orders the features by a criterion and a user selects some of the features that are appropriate for a given scenario. Feature subset selection techniques search the space of possible feature subsets and evaluate the suitability of each. This paper investigates performance metric based feature ranking techniques by using the multilayer perceptron (MLP) learner with nine different performance metrics.
The nine performance metrics include overall accuracy (OA), default F-measure (DFM), default geometric mean (DGM), default arithmetic mean (DAM), area under ROC (AUC), area under PRC (PRC), best F-measure (BFM), best geometric mean (BGM) and best arithmetic mean (BAM). The goal of the paper is to study the effect of the different performance metrics on the feature ranking results, which in turn influences the classification performance. We assessed the performance of the classification models constructed on those selected feature subsets through an empirical case study that was carried out on six data sets of real-world software systems. The results demonstrate that AUC, PRC, BFM, BGM and BAM as performance metrics for feature ranking outperformed the other performance metrics, OA, DFM, DGM and DAM, unanimously across all the data sets and therefore are recommended based on this study. In addition, the performances of the classification models were maintained or even improved when over 85 percent of the features were eliminated from the original data sets.",2009,0, 4709,Comparison of Two Fitness Functions for GA-Based Path-Oriented Test Data Generation,"Automatic path-oriented test data generation is not only a crucial problem but also a hot issue in the research area of software testing today. As a robust metaheuristic search method in complex spaces, the genetic algorithm (GA) has been applied to path-oriented test data generation since 1992 and outperforms other approaches. A fitness function based on branch distance (BDBFF) and another based on normalized extended Hamming distance (SIMILARITY) are both applied in GA-based path-oriented test data generation. To compare the performance of these two fitness functions, a triangle classification program was chosen as the example. Experimental results show that the BDBFF-based approach can generate path-oriented test data more effectively and efficiently than the SIMILARITY-based approach does.",2009,0, 4710,A Multi-instance Model for Software Quality Estimation in OO Systems,"In this paper, the problem of object-oriented (OO) software quality estimation is investigated from a multi-instance (MI) perspective. In detail, each set of classes that have an inheritance relation, named a `class hierarchy', is regarded as a bag in training, while each class in the bag is regarded as an instance. The task of software quality estimation in this study is to predict the label of unseen bags, i.e. the fault-proneness of untested class hierarchies. It is stipulated that a fault-prone class hierarchy contains at least one fault-prone (negative) class, while a not fault-prone (positive) one has no negative class. Based on the modification records (MR) of previous project releases and OO software metrics, the fault-proneness of an untested class hierarchy can be predicted. An MI kernel specifically designed for MI data was utilized to build the OO software quality prediction model. This model was evaluated on five datasets collected from an industrial optical communication software project. Among the MI learning algorithms applied in our empirical study, the support vector algorithms combined with the dedicated MI kernel led the others in accurately and correctly predicting the fault-proneness of the class hierarchy.",2009,0, 4711,High-Dimensional Software Engineering Data and Feature Selection,"Software metrics collected during project development play a critical role in software quality assurance.
A software practitioner is very keen on learning which software metrics to focus on for software quality prediction. While a concise set of software metrics is often desired, a typical project collects a very large number of metrics. Minimal attention has been devoted to finding the minimum set of software metrics that have the same predictive capability as a larger set of metrics - we strive to answer that question in this paper. We present a comprehensive comparison between seven commonly-used filter-based feature ranking techniques (FRT) and our proposed hybrid feature selection (HFS) technique. Our case study consists of a very high-dimensional (42 software attributes) software measurement data set obtained from a large telecommunications system. The empirical analysis indicates that HFS performs better than FRT; however, the Kolmogorov-Smirnov feature ranking technique demonstrates competitive performance. For the telecommunications system, it is found that only 10% of the software attributes are sufficient for effective software quality prediction.",2009,0, 4712,Performance Appraisal System for Academic Staff in the Context of Digital Campus of Higher Education Institutions: Design and Implementation,"Academic staff at higher education institutions are crucial for quality education and research, the core competitiveness of a top research-oriented university. Efforts have been made at many universities to develop a system to effectively and accurately assess the performance of academic staff in order to promote the overall teaching and research activities. The paper presents an appraisal information system developed for the School of Economics and Management of Beihang University, which takes into consideration the contextual factors of research-oriented universities. The appraisal information system, based on service-oriented architecture, is characterized as a multi-layer system within the digital campus. The implementation of the system has significantly promoted the overall teaching and research quality at the school.",2009,0, 4713,An Improving Approach for Recovering Requirements-to-Design Traceability Links,"Requirement tracing is an important activity because it supports effective system quality assurance, change impact analysis and software maintenance. In this paper, we propose an automatic approach called LGRTL to recover traceability links between high-level requirements and low-level design elements. This approach treats the recovery process as a Bayesian classification process. Meanwhile, we add synonym processing to the preprocessing phase and improve the Bayesian model for better performance. To evaluate the validity of the method, we perform a case study and the experimental results show that our method can enhance the effectiveness to a certain extent.",2009,0, 4714,An Efficient Forward Recovery Checkpointing Scheme in Dissimilar Redundancy Computer System,"Roll-forward checkpointing schemes (RFCS) are developed in order to avoid rollback in the presence of independent faults and increase the possibility that a task completes within a tight deadline. However, the assumption underlying RFCS does not hold in most cases: running the same software on the same hardware may result in correlated faults. Another problem is that these RFCS schemes may lose useful built-in self-detection information, resulting in performance degradation.
In this paper, we propose a twice dissimilar redundancy computer based roll-forward recovery scheme (TDCS) that can avoid correlated faults and realize fault tolerance without extra processing. Finally, we use a novel technique based on a Markov reward model to show that the performance of our TDCS is considerably better than that of RFCS in terms of average completion time when the built-in self-detection coverage is high.",2009,0, 4715,Software Reliability Growth Models Based on Non-Homogeneous Poisson Process,"The non-homogeneous Poisson process (NHPP) model with typical reliability growth patterns is an important technique for evaluating software reliability. Two parameters which affect software reliability are the original number of failures and the failure-detection rate. The paper first defines the software failure distribution and discusses several kinds of software reliability models. The non-homogeneous Poisson process model is presented. The models with different parameters are discussed and analyzed. The model's restriction conditions and some of its parameters, the original number of failures and the failure-detection ratio, are extrapolated. An assessment methodology for the key parameters is given.",2009,0, 4716,Memory Leak Dynamic Monitor Based On HOOK Technique,"Memory leaks can cause performance degradation or even breakdown of a computer system. Given the unavailability of COM component source code, this paper analyses the COM component's memory leak mechanism, proposes a testing architecture and provides a memory leak detection method based on the HOOK technique. This method can locate the functions which cause memory leaks and obtain details of the leaking process. The experiment shows that memory leak faults can be triggered and monitored effectively.",2009,0, 4717,A Kind of Novel Constrained Multipath Routing Algorithm in Mobile Ad Hoc Networks,An ad hoc network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any existing network infrastructure or centralized administration. This paper proposes a novel constrained entropy-based multipath routing algorithm for ad hoc networks (NCEMR). The key idea of the NCEMR algorithm is to construct a new entropy metric and to select stable paths with the help of this metric in order to reduce the number of route reconstructions and provide QoS guarantees in the ad hoc network. It is typically proposed in order to increase the reliability of data transmission or to provide load balancing. The simulation results show that the proposed approach and parameters provide an accurate and efficient method of estimating and evaluating route stability in dynamic ad hoc networks.,2009,0, 4718,Image Denoising Using Information Measure and Support Vector Machines,"Image denoising is one of the important steps in a number of image processing applications. However, available methods mainly apply restoration filtering to the whole observed image, resulting in the loss of much image detail. Thus, how to balance removing noise from smooth regions while preserving more image detail in high-frequency regions is still worth further attention. This paper presents a novel approach that can improve image quality by correcting corrupted pixels while leaving good pixels unchanged. First, an information measure method is introduced to extract noise features from the observed image.
Then, a support vector machine (SVM) based classifier is employed to divide the noise-corrupted image into noise candidate pixels and good pixels, so that a noise map is generated that can be used to guide the mixed mean and median filter (MMMF), which is designed to apply restoration filtering only to the corrupted pixels. Three typical numerical experiments are reported and the results show that the proposed algorithm achieves better performance both in visual effect and in terms of the objective criterion (peak signal-to-noise ratio, PSNR).",2009,0, 4719,Post-Forecast OT: A Novel Method of 3D Model Compression,"In this paper, we present Post-Forecast OT (Octree), a novel 3D model compression codec based on the Octree. Vertices of 3D meshes are re-classified according to the Octree rule. All the nodes of the Octree are statistically analyzed to identify the type of nodes with the largest proportion, and these are encoded with fewer bits. The vertex positions are predicted and recorded, which can effectively reduce the error between the decoded vertices and the corresponding vertices in the original 3D model. Compared with prior progressive 3D model codecs, which suffer severe distortion at low bit rates, Post-Forecast OT has better performance while providing a pleasant visual quality. We also encode the topology and attribute information corresponding to the geometry information, which enables progressive transmission of all encoded 3D data over the Internet.",2009,0, 4720,Fingerprint Chromatogram and Fuzzy Calculation for Quality Control of Shenrong Tonic Wine,"With computer software and fuzzy calculation, we determined several key compounds and established a fingerprint to control the stability of food products. High performance liquid chromatography (HPLC) was used to establish the fingerprint chromatogram of Shenrong tonic wine to control its quality and stability, and to detect possible counterfeits. We used a reverse-phase C-18 column, equivalent elution and a detection wavelength of 259 nm. The chromatographic fingerprint was established by using sample chromatography of 10 different production batches to calculate the relative retention time and the relative area Ar of each peak respectively, and 11 common characteristic peaks were found. The cosine method was used to calculate the similarity, by which the comparative study of the various batches was done. The results showed that the method was convenient and applicable for the quality evaluation of Chinese herb tonic wine.",2009,0, 4721,Parameters Tuning of Fuzzy Controller for Rotated Pendulum Based on Improved Particle Swarm Optimization,"The Particle Swarm Optimization (PSO) technique, which refines its search by attracting the particles to positions with good solutions, has turned out to be a competitor in the field of numerical optimization. PSO can generate high-quality solutions within a shorter computation time and has more stable convergence characteristics than other stochastic methods. In this article, an improved Particle Swarm Optimization is used to tune the parameters of a fuzzy PID controller. A new method to estimate the scaling factors of the fuzzy PID controller for the rotated inverted pendulum is investigated. The performance of the fuzzy PID controller is sensitive to the variation of the scaling factors. The design procedure dwells on the use of particle swarm optimization and an estimation algorithm. The tuning of the scaling factors of the fuzzy PID controller is essential to the entire optimization process.
The scaling factors of the fuzzy PID controller are estimated by means of regression polynomials. Numerical studies verify the effectiveness.",2009,0, 4722,Automatic Test Data Generation Based on Ant Colony Optimization,Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because it performs higher error coverage. This paper presents a model of generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm has better performance than the other two algorithms and improves the efficiency of test data generation notably.,2009,0, 4723,Using Probabilistic Model Checking to Evaluate GUI Testing Techniques,"Different testing techniques are being proposed in software testing to improve system quality and increase development productivity. However, it is difficult to determine which of a given set of testing techniques is the most effective for a certain domain, particularly if they are random-based. We propose a strategy and a framework that can evaluate such testing techniques. Our framework is defined compositionally and parametrically. This allows us to characterize different aspects of systems in an incremental way as well as test specific hypotheses about the system under test. In this paper we focus on GUI-based systems. That is, the specific internal behavior of the system is unknown but it can be approximated by probabilistic behaviors. The empirical evaluation is based on the probabilistic model checker PRISM.",2009,0, 4724,Quantitative RAS Comparison of Sun CMT/Solaris and X86/Linux Servers,"By incorporating the chip multi-threading (CMT) and operating system predictive self-healing technologies, the Sun CMT/Solaris based servers are not only cost/performance effective, but also more robust in reliability, availability, and serviceability (RAS) than the X86/Linux servers with similar performance. The differentiators include higher levels of hardware integration, more fault tolerance provisions in processors, the Solaris memory page retirement (MPR), and the Solaris/SPARC processor offlining (PO) capabilities of the CMT/Solaris server. This study applies analytical models, with parameters calibrated by field experience, to quantitatively compare system RAS, against hardware faults, between the CMT/Solaris and X86/Linux servers. The results show significant RAS benefits of the CMT, MPR, and PO technologies.",2009,0, 4725,Fault Injection Scheme for Embedded Systems at Machine Code Level and Verification,"In order to evaluate third-party software whose source code is not available, and after a careful analysis of the statistical data sorted by orthogonal defect classification and of the corresponding relation between patterns of high-level language programs and machine code, we propose a fault injection scheme at the machine code level suitable for the IA32, ARM and MIPS architectures, which takes advantage of mutating machine code.
To prove the feasibility and validity of this scheme, two sets of programs are chosen as our experimental targets: Set I consists of two different versions of triangle testing algorithms, and Set II is a subset of MiBench, which is a collection of performance benchmark programs designed for embedded systems. We inject both high level faults into the source code written in C and the corresponding machine code level faults directly into the executables, and monitor their execution on Linux. The results from the experiments show that a total similarity degree of at least 96% is obtained. We therefore conclude that the effects of injecting corresponding faults at the source code level and at the machine code level are mostly the same, and that our scheme is useful in analyzing system behavior under faults.",2009,0, 4726,Evaluating the Use of Reference Run Models in Fault Injection Analysis,"Fault injection (FI) has been shown to be an effective approach to assessing the dependability of software systems. To determine the impact of faults injected during FI, a given oracle is needed. Oracles can take a variety of forms, including (i) specifications, (ii) error detection mechanisms and (iii) golden runs. Focusing on golden runs, in this paper we show that there are classes of software for which a golden run based approach cannot be used for analysis. Specifically, we demonstrate that a golden run based approach cannot be used in the analysis of systems which employ a main control loop with an irregular period. Further, we show how a simple model, which has been refined using FI experiments, can be employed as an oracle in the analysis of such a system.",2009,0, 4727,Mining Rank-Correlated Associations for Recommendation Systems,"Recommendation systems, best known for their use in e-commerce or social network applications, predict users' preferences and output item suggestions. Modern recommenders are often faced with many challenges, such as covering a high volume of volatile information, dealing with data sparsity, and producing high-quality results. Therefore, while there are already several strategies in this category, some of them can still be refined. Association rule mining is one of the widely applied techniques for recommender implementation. In this paper, we propose a tuned method that tries to overcome some defects of existing association rule based recommendation systems by exploring rank correlations. It builds a model for preference prediction with the help of rank-correlated associations on numerical values, where traditional algorithms of this kind would choose to apply discretization. An empirical study is then conducted to evaluate the efficiency of our method.",2009,0, 4728,Trustworthy Evaluation of a Safe Driver Machine Interface through Software-Implemented Fault Injection,"Experimental evaluation is aimed at providing useful insights and results that constitute a confident representation of the system under evaluation. Although guidelines and good practices exist and are often applied, the uncertainty of results and the quality of the measuring system are rarely discussed. To complement such guidelines and good practices in experimental evaluation, metrology principles can contribute to improving experimental evaluation activities by assessing the measuring systems and the results achieved.
In this paper we present the experimental evaluation, by software-implemented fault injection, of a safe train-borne driver machine interface (DMI), to evaluate its behavior in the presence of faults. The measuring system built for the purpose and the results obtained in the assessment of the DMI are scrutinized against basic principles of metrology and good practices of fault injection. Trustworthiness of the results has been estimated to be satisfactory, and the experimental campaign has shown that the safety mechanisms of the DMI correctly identify the injected faults and that a proper reaction is executed.",2009,0, 4729,A New Algorithm for Fabric Defect Detection Based on Image Distance Difference,"This paper puts forward a new method for the detection of fabric defects, namely the image distance difference algorithm. The system permits the user to set appropriate control parameters for fabric defect detection based on the type of the fabric. It can detect more than 30 kinds of common defects, with the advantages of high identification accuracy and fast inspection speed. Finally, image processing techniques are used to grade each piece in order to ensure quality and raise the finished product ratio.",2009,0, 4730,Zapmem: A Framework for Testing the Effect of Memory Corruption Errors on Operating System Kernel Reliability,"While monolithic operating system kernels are composed of many subsystems, during runtime they all share a common address space, making fault propagation a serious issue. The code quality of each subsystem is different, as OS development is a complex task commonly divided among different groups with different degrees of expertise. Since the memory space in which this code runs is shared, the occurrence of bugs or errors in one of the subsystems may propagate to others and affect general OS reliability. It is necessary, then, to test how errors propagate between the different kernel subsystems and how they affect reliability. This work presents a simple new technique to inject memory corruption faults and Zapmem, a fault injection tool which uses this technique to test the effect on reliability of memory corruption of statically allocated kernel data. Zapmem associates the runtime memory addresses with the corresponding high level (source code) memory structure definitions, which indicate which kernel subsystem allocated that memory region, and the tool has minimal intrusiveness, as our technique does not require kernel instrumentation. The efficacy of our approach and preliminary results are also presented.",2009,0, 4731,A Performance Monitoring Tool for Predicting Degradation in Distributed Systems,"Continuous performance monitoring is critical for detecting software aging and enabling performance tuning. In this paper we design and develop a performance monitoring system called PerfMon. It makes use of the /proc virtual file system's kernel-level mechanisms and abstractions in Linux-based operating systems, which provide the building blocks for the implementation of efficient, scalable and multi-level performance monitoring. Using PerfMon, we show that (1) monitoring functionality can be customized according to clients' requirements, (2) by filtering the monitored information, trade-offs can be attained between the quality of the information monitored and the associated overheads, and (3) by performing monitoring at the application level, we can predict software aging by taking into account the multiple resources used by applications.
Finally, we evaluate PerfMon through experiments.",2009,0, 4732,A New Strategy for Pairwise Test Case Generation,"Pairwise testing has become an important approach to software testing because it often provides effective error detection at low cost, and a key problem in it is the test case generation method. As part of an effort to develop an optimized strategy for pairwise testing, this paper proposes an efficient pairwise test case generation strategy, called VIPO (Variant of In-Parameter-Order), which is a variant of the IPO strategy. We compare its effectiveness with some existing strategies including IPO, Tconfig, Pict and AllPairs. Experimental results demonstrate that VIPO outperformed them in terms of the number of generated test cases within reasonable execution times, in most cases.",2009,0, 4733,Application of Particle Swarm Optimization and RBF Neural Network in Fault Diagnosis of Analogue Circuits,"The BP neural network has the shortcomings of over-fitting and local optimal solutions, which affect its practicability. The RBF neural network is a feedforward neural network which has the ability to approach the global optimum. However, the parameters of the RBF neural network need to be determined. Particle swarm optimization is presented to choose the parameters of the RBF neural network. The particle swarm optimization-RBF neural network method has high classification performance, and is applied to fault diagnosis of analogue circuits. Finally, the results of fault diagnosis cases show that the particle swarm optimization-RBF neural network method achieves higher classification accuracy than the BP neural network.",2009,0, 4734,Chinese Named Entity Recognition with Inducted Context Patterns,"Since whether or not a word is a name is determined mostly by the context of the word, context pattern induction plays an important role in named entity recognition (NER). We present a NER method based on context pattern induction. It induces high-precision context patterns in an unsupervised way starting with some entity seeds. It then directly uses the matched context patterns, instead of the entities extracted by the induced patterns, as the features of a CRF-based NER model. The experiments show that the proposed method improves the performance of a high quality named entity recognizer, and achieves higher accuracy and recall.",2009,0, 4735,Design and Realization of Feeder Terminal Unit Based on DSP,"Distribution automation is an effective means to improve the reliability and energy quality of power supply. Feeder automation (FA) is one of the important contents of distribution automation, and the feeder terminal unit (FTU) is the key intelligent equipment and basic control unit of feeder automation. In recent years, with the rapid development of detection and embedded technology, higher requirements on the reliability and accuracy of FTUs have been put forward. In this paper, based on the design schemes of many manufacturers, an FTU using a DSP as the hardware core is designed. It uses AC sampling theory to measure the voltage and current of the electrical network, uses the AC sampled values to calculate the effective values of voltage and current, power, power factor and other electrical parameters, and has several waveform recording functions. The software is based on uC/OS-II, which enhances the system's modularization and expansibility.",2009,0, 4736,An Improved Spatial Error Concealment Algorithm Based on H.264,"Packet losses are inevitable when video is transported over error-prone networks.
Error concealment methods can reduce the quality degradation of the received video by masking the effects of such errors. This paper presents a novel spatial error concealment algorithm based on the directional entropy of the available neighboring macroblocks (MBs), which can adaptively switch between the weighted pixel average (WPA) adopted in H.264 and an improved directional interpolation algorithm to recover the lost MBs. In this work, the proposed algorithm was evaluated on the H.264 reference software JM8.6. The illustrative examples demonstrate that the proposed method can achieve better Peak Signal-to-Noise Ratio (PSNR) performance and visual quality, compared with WPA and the conventional directional interpolation algorithm respectively.",2009,0, 4737,Side-Channel Attacks on Cryptographic Software,"When it comes to cryptographic software, side channels are an often-overlooked threat. A side channel is any observable side effect of computation that an attacker could measure and possibly influence. Crypto is especially vulnerable to side-channel attacks because of its strict requirements for absolute secrecy. In the software world, side-channel attacks have sometimes been dismissed as impractical. However, new system architecture features, such as larger cache sizes and multicore processors, have increased the prevalence of side channels and the quality of measurement available to an attacker. Software developers must be aware of the potential for side-channel attacks and plan appropriately.",2009,0, 4738,Monitoring Workflow Applications in Large Scale Distributed Systems,"This paper presents the design, implementation and testing of the monitoring solution created for integration with a workflow execution platform. The monitoring solution constantly checks the system evolution in order to facilitate performance tuning and improvement. Monitoring is accomplished at the application level, by monitoring each job from each workflow, and at the system level, by aggregating state information from each processing node. The solution also computes aggregated statistics that allow an improvement of the scheduling component of the system, with which it will interact. The improvement in the performance of distributed applications is obtained by using real-time information to compute runtime estimates, which are used to improve scheduling. Another contribution is an automated error detection system, which can improve the robustness of the grid by enabling fault recovery mechanisms to be used. These aspects can benefit from the particularization of the monitoring system for a workflow-based application: the scheduling performance can be improved through better runtime estimation, and the error detection can automatically detect several types of errors. The proposed monitoring solution could be used in the SEEGRID project as a part of the satellite image processing engine that is being built.",2009,0, 4739,An empirical analysis for evaluating the link quality of robotic sensor networks,"This paper presents a comprehensive metric to evaluate the link quality for the distributed control of robotic swarms to maintain communication links. The mobile robots dynamically reconfigure themselves to maintain reliable end-to-end communication links. Such applications require online measurements of communication link quality in real time and require a connection between link quality and robot positions.
In this paper, we present the empirical results and analysis of a link variability study for indoor and outdoor environments, including the received signal strength indicator (RSSI), active throughput and packet loss rate (PLR), using off-the-shelf software and hardware. RSSI is adopted to reflect the basic performance of link quality in robotic sensor networks. Throughput, acting as a decaying threshold generator, is used to estimate the critical point for smooth and stable communication. Meanwhile, the PLR metric is adopted to determine the cut-off point in end-to-end communication. The assessment of link quality acts as feedback for the cooperative control of mobile robots. The experimental results have shown the effectiveness of the link quality evaluation for robotic sensor networks.",2009,0, 4740,Software-Based Hardware Fault Tolerance for Many-Core Architectures,"This presentation will point out the new opportunities and challenges for applying software-based hardware fault tolerance to emerging many-core architectures. The paper will discuss the tradeoff between the application of these techniques and classical hardware-based fault tolerance in terms of fault coverage, overhead, and performance.",2009,0, 4741,On the Functional Qualification of a Platform Model,"This work focuses on the use of functional qualification for measuring the quality of co-verification environments for hardware/software (HW/SW) platform models. Modeling and verifying complex embedded platforms requires co-simulating one or more CPUs running embedded applications on top of an operating system, and connected to some hardware devices. The paper first describes a HW/SW co-simulation framework which supports all mechanisms used by software, in particular by device drivers, to access hardware devices, so that the target CPU's machine code can be simulated. In particular, synchronization between hardware and software is performed by the co-simulation framework and, therefore, no adaptation is required in device drivers and hardware models to handle synchronization messages. Then, Certitude, a flexible functional qualification tool, is introduced. Functional qualification is based on the theory of mutation analysis, but it is extended by considering a mutation to be killed only if a testcase fails. Certitude automatically inserts mutants into the HW/SW models and determines if the verification environment can detect these mutations. A known mutant that cannot be detected points to a verification weakness. If a mutant cannot be detected, there is evidence that actual design errors would also not be detected by the co-verification environment. This is an iterative process, and the functional qualification solution provides the verifier with information to improve the co-verification environment quality. The proposed approach has been successfully applied to an industrial platform, as shown in the experimental result section.",2009,0, 4742,Parallelization of render engine for global illumination of graphics scenes,"Photorealistic image synthesis plays a central role in computer graphics since it has a wide range of application areas such as scientific visualization, the film industry, gaming and architectural modeling. Global illumination algorithms used for photorealistic image synthesis such as ray-tracing, radiosity, and photon mapping suffer from speed because of complex computational operations and hardware limitations.
In this study, a parallelization technique for the photon mapping algorithm has been developed to reduce the rendering time of the images by distributing the jobs over computing units. To increase the visual quality of the output images, a multilevel dynamic anti-aliasing algorithm has also been implemented to smooth the jagged edges caused by the bucket renderer. Our render engine supports various geometry and material properties as well as hard-to-simulate visual effects such as caustics. The system is tested on several three-dimensional scene files to measure and analyze its performance.",2009,0, 4743,Traffic Provisioning for HTTP Applications in WiFi Networks,"Access networks based on copper cables are costly to build and maintain. For this reason, last-mile access networks based on wireless technologies are gaining considerable attention. Among the wireless technologies being employed, IEEE 802.11 is commonplace. In such scenarios it is important to have well defined mechanisms to better evaluate traffic characteristics and overall system performance. Such understanding can help network designers to better estimate the resources needed to provide basic services with a reasonable level of quality (QoS). This task, however, has been shown to be non-trivial. The main contribution of this work is to present techniques that can be applied to estimate the throughput and the access pattern for basic services in the context of IEEE 802.11 based networks.",2009,0, 4744,Two-dimensional software reliability measurement technologies,"This paper discusses two-dimensional software reliability measurement technologies, which describe a software reliability growth process depending on two types of software reliability growth factors: testing-time and testing-effort factors. From the viewpoint of the actual software failure-occurrence mechanism, it is natural to consider that a software reliability growth process depends not only on the testing-time factor but also on other software reliability growth factors, i.e., testing-effort factors such as test-execution time, the testing skill of test engineers, and testing coverage. For this reason, the two-dimensional software reliability measurement technologies enable us to conduct more feasible software reliability assessment than the one-dimensional (conventional) software reliability measurement approach, in which it is assumed that the software reliability growth process depends only on testing-time. We discuss two types of two-dimensional software reliability growth modeling approaches for feasible software reliability assessment, and conduct goodness-of-fit comparisons of our models with existing one-dimensional software reliability growth models. Finally, as one of the examples of two-dimensional software reliability analysis, we show examples of the application of our two-dimensional software reliability growth model by using actual data.",2009,0, 4745,The selection of proper discriminative cognitive tasks - A necessary prerequisite in high-quality BCI applications,"While research in the brain computer interface (BCI) field is focused basically on finding improved processing methods leading to both high classification rates and high bit transfer rates, in this paper the same BCI performances are addressed but, this time, with the emphasis set on the subject-specific discriminative cognitive task selection process.
In this respect, a set of twelve electroencephalographic (EEG)-discriminative mental tasks was proposed to be studied in conjunction with four different subjects. For each subject, a particular set of four mental tasks was selected. The classification performances corresponding to these particular sets of tasks were obtained using standard processing methods (i.e., the autoregressive model of the EEG signals and a multilayer perceptron classifier trained with the back-propagation algorithm). The superior classification rates achieved for the selected sets, compared to another set of mental tasks commonly used in 4-class BCI studies (i.e., the set proposed by Keirn and Aunon), promote the idea of a subject-oriented mental task selection process as a necessary preliminary step in any high-quality BCI application.",2009,0, 4746,On the application of predictive control techniques for adaptive performance management of computing systems,"This paper addresses adaptive performance management of real-time computing systems. We consider a generic model-based predictive control approach that can be applied to a variety of computing applications in which the system performance must be tuned using a finite set of control inputs. The paper focuses on several key aspects affecting the application of this control technique to practical systems. In particular, we present techniques to enhance the speed of the control algorithm for real-time systems. Next we study the feasibility of the predictive control policy for a given system model and performance specification under uncertain operating conditions. The paper then introduces several measures to characterize the performance of the controller, and presents a generic tool for system modeling and automatic control synthesis. Finally, we present a case study involving a real-time computing system to demonstrate the applicability of the predictive control framework.",2009,0, 4747,An Adaptive Service Composition Performance Guarantee Mechanism in Peer-to-Peer Network,"Service composition has become a hot research topic in recent years, especially in distributed peer-to-peer (P2P) networks. However, guaranteeing high-quality composite service performance is a key technical difficulty in P2P networks, and it has drawn much attention in the computer research field. According to the characteristics of distributed networks, an adaptive service composition performance guarantee mechanism in P2P networks is presented. Firstly, the composition services are divided into a preferred service and a backup one; secondly, a BP neural network is introduced to detect and evaluate the service performance accurately; finally, the backup service adaptively replaces the preferred one when the performance of the preferred composition service falls below the threshold value. In addition, a half-time window punishment mechanism is introduced to increase the penalty on low-quality services and encourage users to provide more high-quality ones, which further improves the service composition performance in P2P networks. The simulations verify that our method is effective and enrich the proposed theory.",2009,0, 4748,Research and Implement of the Key Technology for IP QoS Based on Network Processor,"With the rapid development of network technology, Internet applications have become more widespread and network services have become more diverse, so traditional services can no longer meet people's needs.
What has emerged is a variety of multimedia services, such as video, audio and video conferencing, with strict real-time requirements. The traditional IP network provides only ""best effort"" service, which cannot satisfy the requirement of low delay. So the trend is for IP networks to provide differentiated services and to guarantee quality of service in network equipment. As networks develop rapidly, the router is the main piece of network equipment and its control unit is the network processor, so the research and implementation of quality of service on it is an important direction. The principal content of this dissertation is the realization of IP QoS based on a network processor. According to the characteristics of the network and the performance of the network processor, an IP QoS system model is designed. The IP QoS in the data transmission layer is mainly researched and realized, and the validity of the QoS system proposed in this dissertation is also proved on the router.",2009,0, 4749,An Investigation of the Effect of Discretization on Defect Prediction Using Static Measures,"Software repositories with defect logs are a main resource for defect prediction. In recent years, researchers have used the vast amount of data contained in software repositories to predict the location of defects in the code that caused problems. In this paper we evaluate the effectiveness of software fault prediction with Naive-Bayes and J48 classifiers by integrating the supervised discretization algorithm developed by Fayyad and Irani. Public datasets from the PROMISE repository have been explored for this purpose. The repository contains software metric data and error data at the function/method level. Our experiment shows that integration of the discretization method improves the software fault prediction accuracy when it is combined with the Naive-Bayes and J48 classifiers.",2009,0, 4750,EVM measurement techniques for MUOS,"Physical layer simulations and analysis techniques were used to develop the error vector magnitude (EVM) metric specifying transmitter signal quality. These tools also proved to be very useful in specifying lower level hardware unit performance and predicting mobile user objective system (MUOS) satellite 8-PSK transmitter performance before the hardware was built. However, the verification of EVM compliance at Ka frequencies posed challenges. Initial measurements showed unacceptably high levels of EVM which exceeded specification. Attempts to remove the contribution of the test equipment distortion and isolate the device-under-test distortion using commercial oscilloscope VSA software were unsuccessful. In this paper, we describe methods used to develop an accurate EVM measurement. The transmitted modulated signal was first recorded using a digitizing scope. In-house system identification, equalization, demodulation and analysis algorithms were then used to remove signal distortion due to the test equipment. Results from EVM measurements on MUOS single-channel hardware are given and performance is shown to be consistent with estimates made three years earlier. The results reduce technical risk and verify transmitter design by demonstrating signal quality.",2009,0, 4751,Characterization of Software Projects by Restructuring Parameters for Usability Evaluation,"Every software project progresses through a development process with the necessary involvement of both developers and users.
For evaluation of the software to confirm its phase-wise goals, objectives and quality, developers and customers must reach a win-win condition. It has been observed that user satisfaction is the most important factor that must be emphasized in software development and thus usability is an indispensable feature of a software project. Inclusion of usability dictates determining the existence of usability at different stages of development for the measurement of user performance and user satisfaction. Various criteria known as software parameters are employed to evaluate the software for realizing a certain level of usability. These criteria may be applied for software estimations and are also involved in several assessments during the software development process. A variety of well defined and well structured parameters such as size, duration, files used, external inputs, external outputs, external queries etc. are available to support the development of desirable software. Further, based upon these parameters other important software estimations may be derived. Though there exist various dimensions of employing the parameters based on the requirements of a software project, these parameters must be redefined and restructured to evaluate the usability of software in order to produce usable software. In this paper, we restructure the software project parameters for the evaluation of software usability. This will lead to an overall characterization of the projects from a usability perspective. It will also be useful for ranking usability attributes and subsequently measuring usability.",2009,0, 4752,A New Fault Tolerance Heuristic for Scientific Workflows in Highly Distributed Environments Based on Resubmission Impact,"Even though highly distributed environments such as Clouds and Grids are increasingly used for e-science high performance applications, they still cannot deliver the robustness and reliability needed for widespread acceptance as ubiquitous scientific tools. To overcome this problem, existing systems resort to fault tolerance mechanisms such as task replication and task resubmission. In this paper we propose a new heuristic called resubmission impact to enhance the fault tolerance support for scientific workflows in highly distributed systems. In contrast to related approaches, our method can be used effectively on systems even in the absence of historic failure trace data. Simulated experiments of three real scientific workflows in the Austrian Grid environment show that our algorithm drastically reduces the resource waste compared to conservative task replication and resubmission techniques, while having a comparable execution performance and only a slight decrease in the success probability.",2009,0, 4753,Early Software Fault Prediction Using Real Time Defect Data,"The quality of a software component can be measured in terms of fault proneness of data. Quality estimations are made using fault proneness data available from previously developed similar projects and the training data consisting of software measurements. To predict faulty modules in software data, different techniques have been proposed, which include statistical methods, machine learning methods, neural network techniques and clustering techniques. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e.
requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules by using clustering techniques. This approach has been tested with three real time defect datasets of NASA software projects, JM1, PC1 and CM1. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The results show that when all the prediction techniques are evaluated, the best prediction model is found to be the fusion of the requirement and code metrics model.",2009,0, 4754,Keynote 2,"Summary form only given. In the 2006 report of the SEI International Process Research Consortium we identified nine driving forces which are shaping the need for research into software process and product. These were the increased push for value-add, business diversification, technology change, systems complexity, product quality, regulation, security and safety, and globalization. Four research themes were developed, of which only one will be explored in this presentation. In the theme entitled ""The relationships between processes and product qualities"" the authors document some twenty-six broad research questions which remain as challenges for the software engineering research community. This presentation outlines research carried out since 2006 which addresses some of these questions. In particular the presentation explores research into the meaning of software product quality, the performance quality prediction of heterogeneous systems of systems, and the relationships between defined software process and enacted process. Examples of conclusions which can be drawn are that our research should no longer benchmark development process productivity without consideration of product quality. In another example it is shown that we will need to evaluate heterogeneous SOA architectures in the context of processes and workflows. In order to achieve significant progress it is argued that we need to pursue our research at a much deeper and more inclusive level of understanding than often seen in the past in terms of process and process/product relationships.",2009,0, 4755,Non-homogeneous Inverse Gaussian Software Reliability Models,"In this paper we consider a novel software reliability modeling framework based on non-homogeneous inverse Gaussian processes (NHIGPs). Although these models are derived in a different way from the well-known non-homogeneous Poisson processes (NHPPs), they can be regarded as interesting stochastic point processes with both arbitrary time non-stationary properties and the inverse Gaussian probability law. In numerical examples with two real software fault data sets, it is shown that the NHIGP-based software reliability models could outperform the existing NHPP-based models in both goodness-of-fit and predictive performance.",2009,0, 4756,RiTMO: A Method for Runtime Testability Measurement and Optimisation,"Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high availability software systems where traditional development-time integration testing is too costly, or cannot be performed.
This paper introduces a method for the improvement of the runtime testability of a system, which provides an optimal implementation plan for the application of measures to avoid the interferences of runtime tests. This plan is calculated considering the trade-off between testability and implementation cost. The computation of the implementation plan is driven by an estimation of runtime testability, and based on a model of the system. Runtime testability is estimated independently of the test cases and focuses exclusively on the architecture of the system at runtime.",2009,0, 4757,An Approach to Measure Value-Based Productivity in Software Projects,"Nowadays, after much evolution in the software engineering area, there is still no simple and direct answer to the question: what is the best software productivity metric? The simplest and most commonly used metric is SLOC and its derivations, but these are admittedly problematic for this purpose. On the other hand, there are some indications of maturation in this topic: new studies point to the use of more elaborate models, which are based on multiple dimensions and on the idea of produced value. Based on this tendency and on the evidence that each organization must define its own way to assess its productivity, we defined a process to support the definition of productivity measurement models in software organizations. We also discuss, in this paper, some issues about the difficulties related to the adoption of the process in real software organizations.",2009,0, 4758,A Neural Network Approach to Forecasting Computing-Resource Exhaustion with Workload,"Software aging refers to the phenomenon that applications will show a growing failure rate or performance degradation after long execution times. It is reported that this phenomenon usually has a close relationship with computing-resource exhaustion. This paper analyzes computing-resource usage data collected on a LAN, and quantitatively investigates the relationship between the computing-resource exhaustion trend and workload. First, we discuss the definition of workload, and then a multi-layer back propagation neural network is trained to construct the nonlinear relationship between input (workload) and output (computing-resource usage). Then we use the trained neural network to forecast the computing-resource usage, i.e., free memory and used swap, with workload as its input. Finally, the results were benchmarked against those obtained without regard to the influence of workload reported in the literature, such as non-parametric statistical techniques or parametric time series models.",2009,0, 4759,Reuse Strategies in Distributed Complex Event Detection,"In a Pub/Sub system, the procedure of distributed event detection can be divided into two interacting phases: subscription matching and subscription/event routing. The performance of the system is greatly influenced by these two parts. Adopting a reuse strategy in matching and routing can reduce the matching workload and network traffic. However, few existing studies provide a comprehensive reuse solution in distributed environments. This paper focuses on exploiting reuse to improve distributed complex event detection.
The Pub/Sub system OGENS is introduced, in which reuse is exploited in the channel-based subscription/event routing and the complex event matching.",2009,0, 4760,Quality Assessment of Mission Critical Middleware System Using MEMS,"Architecture evaluation methods provide general guidelines to assess quality attributes of systems, but they are not necessarily straightforward to practice with. With COTS middleware based systems, this assessment process is further complicated by the complexity of middleware technology and the number of design and deployment options. Efficient assessment is key to producing accurate evaluation results for stakeholders to ensure good decisions are made on system acquisition. In this paper, a systematic evaluation method called MEMS is developed to provide some structure to this assessment process. MEMS produces the evaluation plan with a thorough design of experiments, definition of metrics and development of techniques for measurement. This paper presents MEMS and its application to a mission critical middleware system.",2009,0, 4761,Increasing Diversity in Coverage Test Suites Using Model Checking,"Automated test case generation often results in test suites containing significant redundancy, such as test cases that are duplicates, prefixes of other test cases, or cover the same test requirements. In this paper we consider the fact that items described by a coverage criterion can be covered in different ways. We introduce a technique where each created test case is guaranteed to cover a test requirement in a new way, even if it has previously been covered. This increases the diversity of how test objectives are satisfied, thus reducing the redundancy in test suites, improving their fault detection ability, and usually also decreasing the number of test cases generated. This approach is based on a scenario of specification based testing using model checkers as test case generation tools, and evaluation is performed on three different case study specifications.",2009,0, 4762,ADAM: Web Anomaly Detection Assistant Based on Feature Matrix,"The importance of web security cannot be overemphasized in the era of the web-based economy. Although anomaly detection has long been considered a promising alternative to signature-based misuse detection techniques, most studies to date used either small scale or artificially generated attack data. In this paper, based on security analysis applied to an anonymous www.microsoft.com log of about 250 GB, we propose the Anomaly Feature Matrix (AFM) as an effective framework to characterize anomalies. Feature selection of AFM is based on the characteristics of well-known (e.g., DDoS) attacks as well as patterns of anomalous logs found in the Microsoft data. Independent security analysis performed on the same data by Microsoft security engineers concluded that 1) we did not miss any major attacks; and 2) AFM is a general enough framework to characterize likely web attacks. In order to assist AFM-based anomaly analysis in large organizations, we implemented an interactive and visual analysis tool named ADAM (Anomaly Detection Assistant based on feature Matrix). Integrated with mapping software such as Virtual Earth, ADAM enables efficient and focused security analysis of web logs.",2009,0, 4763,A Hybrid Approach to Detecting Security Defects in Programs,"Static analysis works well at checking defects that clearly map to source code constructs.
Model checking can find defects such as deadlocks and routing loops that are not easily detected by static analysis, but it faces the problem of state explosion. This paper proposes a hybrid approach to detecting security defects in programs. A fuzzy inference system is used to select between the two detection approaches. A clustering algorithm is developed to divide a large system into several clusters in order to apply model checking. Ontology based static analysis employs logic reasoning to intelligently detect the defects. We also put forward strategies to improve the performance of the static analysis. Finally, we perform experiments to evaluate the accuracy and performance of the hybrid approach.",2009,0, 4764,SmartClean: An Incremental Data Cleaning Tool,"This paper presents the SmartClean tool. The purpose of this tool is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. For that, an execution sequence was developed. The problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool and, consequently, of the architecture is demonstrated through the presentation of a case study. Although SmartClean has cleaning capabilities at all other levels, this paper describes only those related to the attribute value level.",2009,0, 4765,What Makes Testing Work: Nine Case Studies of Software Development Teams,Recently there has been a focus on test first and test driven development; several empirical studies have tried to assess the advantage that these methods give over testing after development. The results have been mixed. In this paper we investigate nine teams who tested during coding to examine the effect it had on the external quality of their code. Of the top three performing teams two used a documented testing strategy and the other an ad-hoc approach to testing. We conclude that their success appears to be related to a testing culture where the teams proactively test rather than carry out only what is required in a mechanical fashion.,2009,0, 4766,Boundary Value Testing Using Integrated Circuit Fault Detection Rule,"Boundary value testing is a widely used functional testing approach. This paper presents a new boundary value selection approach by applying fault detection rules for integrated circuits. Empirical studies based on the Redundant Strapped-Down Inertial Measurement Unit, with 34 program versions and 426 mutants, compare the new approach to current boundary value testing methods. The results show that the approach proposed in this paper is remarkably effective in overcoming test blindness, reducing test cost and improving fault coverage.",2009,0, 4767,Using TTCN-3 in Performance Test for Service Application,"Service applications provide services for user requests from the network. Because they have to endure a large number of concurrent requests, the performance of service applications running under a specific arrival rate of requests should be assessed. To measure the performance of a service application, a multi-party testing context is needed to simulate a number of concurrent requests and collect the responses.
TTCN-3 is a test description language; it provides basic language elements for multi-party testing contexts that can be used in performance tests. This paper proposes a general approach to using TTCN-3 in multi-party performance testing of service applications. To this aim, a model of service applications is presented, and a performance testing framework for service applications is discussed. This testing framework is realized for a typical application by developing a reusable TTCN-3 abstract test suite.",2009,0, 4768,Hierarchical Stability-Based Model Selection for Clustering Algorithms,"We present an algorithm called HS-means which is able to learn the number of clusters in a mixture model. Our method extends the concept of clustering stability to a concept of hierarchical stability. The method chooses a model for the data based on analysis of clustering stability; it then analyzes the stability of each component in the estimated model and chooses a stable model for this component. It continues this recursive stability analysis until all the estimated components are unimodal. In so doing, the method is able to handle hierarchical and symmetric data that existing stability-based algorithms have difficulty with. We test our algorithm on both synthetic datasets and real world datasets. The results show that HS-means outperforms a popular stability-based model selection algorithm, both in terms of handling symmetric data and finding high-quality clusterings in the task of predicting CPU performance.",2009,0, 4769,Feature Selection with Imbalanced Data for Software Defect Prediction,"In this paper, we study the learning impact of data sampling followed by attribute selection on the classification models built with binary class imbalanced data within the scenario of software quality engineering. We use a wrapper-based attribute ranking technique to select a subset of attributes, and the random undersampling technique (RUS) on the majority class to alleviate the negative effects of imbalanced data on the prediction models. The datasets used in the empirical study were collected from numerous software projects. Five data preprocessing scenarios were explored in these experiments, including: (1) training on the original, unaltered fit dataset, (2) training on a sampled version of the fit dataset, (3) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on the unsampled fit dataset, (4) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on a sampled version of the fit dataset, and (5) training on a sampled version of the fit dataset using only the attributes chosen by feature selection based on the sampled version of the fit dataset. We compared the performances of the classification models constructed over these five different scenarios. The results demonstrate that the classification models constructed on the sampled fit data with or without feature selection (case 2 and case 5) significantly outperformed the classification models built with the other cases (unsampled fit data). Moreover, the two scenarios using sampled data (case 2 and case 5) showed very similar performances, but the subset of attributes (case 5) is only around 15% or 30% of the complete set of attributes (case 2).",2009,0, 4770,Wrapper-Based Feature Ranking for Software Engineering Metrics,"The application of feature ranking to software engineering datasets is rare at best.
In this study, we consider wrapper-based feature ranking where nine performance metrics aided by a particular learner are evaluated. We consider five learners and take two different approaches, each in conjunction with one of two different methodologies: 3-fold Cross-Validation (CV) and 3-fold Cross-Validation Risk Impact (CV-R). The classifiers are Naive Bayes (NB), Multi Layer Perceptron (MLP), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic Regression (LR). The performance metrics used as ranking techniques are Overall Accuracy (OA), F-Measure (FM), Geometric Mean (GM), Arithmetic Mean (AM), Area under ROC (AUC), Area under PRC (PRC), Best F-Measure (BFM), Best Geometric Mean (BGM), and Best Arithmetic Mean (BAM). To evaluate the classifier performance after feature selection has been applied, we use AUC as the performance evaluator. This paper represents a preliminary report on our proposed wrapper-based feature ranking approach to software defect prediction problems.",2009,0, 4771,Fault Diagnosis for Analogy Circuits Based on Support Vector Machines,"When it is hard to obtain training samples, a fault classifier based on the support vector machine (SVM) can still diagnose faults with high accuracy. It can easily be generalized and put to practical use. In this paper, a fault classifier based on the support vector machine (SVM) is proposed for analog circuits. It can classify the faults in the target circuit effectively and accurately. In order to test the algorithm, an analog circuit fault diagnosis system based on SVM is designed for the measurement circuit that approximates the square curve with a broken line. After being trained with practical measurement data, the system is shown to be capable of accurately diagnosing faults hidden in real measurement data. Therefore, the effectiveness of the algorithm is verified.",2009,0, 4772,On the Reliability of Wireless Sensors with Software-Based Attestation for Intrusion Detection,"Wireless sensor nodes are widely used in many areas, including military operation surveillance, natural phenomenon monitoring, and medical diagnosis data collection. These applications need to store and transmit sensitive or secret data, which requires intrusion detection mechanisms be deployed to ensure sensor node health, as well as to maintain quality of service and survivability. Because wireless sensors have inherent resource constraints, it is crucial to reduce the energy consumption due to intrusion detection activities. In this paper, by means of a probability model, we analyze the best frequency at which intrusion detection based on probabilistic code attestation on the sensor node should be performed so that the sensor reliability is maximized by exploiting the trade-off between energy consumption and intrusion detection effectiveness. When given a set of parameter values characterizing the operational and networking conditions, a sensor can dynamically set its intrusion detection rate identified by the mathematical model to maximize its reliability and the expected sensor lifetime.",2009,0, 4773,High-Performance Cloud Computing: A View of Scientific Applications,"Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate.
Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a service level agreement (SLA), which ensures the desired quality of service (QoS). Aneka, an enterprise cloud computing solution, harnesses the power of compute resources by relying on private and public clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that allow Aneka to address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.",2009,0, 4774,An efficient error concealment method for mobile TV broadcasting,"Nowadays, TV broadcasting has found its application in mobile terminals; however, due to the prediction structure of video coding standards, compressed video bitstreams are vulnerable to wireless channel disturbances during real-time transmission. In this paper, we propose a novel temporal error concealment method for mobile TV sequences. The proposed ordering method utilizes the continuity feature among adjacent frames, so that both inter and intra error propagation are alleviated. Combined with our proposed fuzzy metric based boundary matching algorithm (FBMA), which provides a more accurate distortion function, experimental results show that our proposal achieves better performance over error-prone channels compared with existing error concealment algorithms.",2009,0, 4775,Ricean Factor Estimation and Performance Analysis,"In wireless communication, the Ricean K factor, which is the relative strength of the direct and scattered components of the received signal, is an important reflection of channel quality. Therefore, the estimation of the Ricean factor is of great significance. After introducing the methods for estimating K from the envelope and from the I/Q components, the paper analyzes the precision of these methods by simulation, as well as the engineering complexity of the first- and second-order moments method and the second- and fourth-order moments method. Furthermore, we come to the conclusion that, while high estimation accuracy can be obtained, the second- and fourth-order moments method consumes much less time and is more widely used in engineering applications.",2009,0, 4776,Application of Fuzzy Data Mining Algorithm in Performance Evaluation of Human Resource,"The assessment of human resource performance objectively, thoroughly, and reasonably is critical to choosing managerial personnel suited for organizational development. Therefore, an efficient tool should be able to deal with various employees' data and assist managers in making decisions and strategic plans. As an effective mathematical tool to deal with vagueness and uncertainty, fuzzy data mining is considered a highly desirable tool applied in many application areas. In this paper we apply the fuzzy data mining technique to the assessment and selection of human resources in an enterprise.
We present and justify the capabilities of fuzzy data mining technology in the evaluation of human resources in an enterprise by proposing a practical model for improving the efficiency and effectiveness of human resource management. Firstly, the paper briefly explains the basic fuzzy data mining theory and presents the fuzzy data mining algorithm in detail; the process steps and the flow chart of the algorithm are given in this part. Secondly, we use human resource management data as an illustration to implement the algorithm. We use the maximal tree method to cluster the human resources. Then the raw human resource management data are compared with each cluster and the proximity values are calculated based on the equation in the fuzzy data mining algorithm. At last we determine the evaluation of the human resources. The whole process is easy to complete. The results of this study indicate that the methodology is practical and feasible. It can help managers in an enterprise assess the performance of human resources swiftly and effectively.",2009,0, 4777,Spectral diagnosis of plasma jet at atmospheric pressure,"A new approach to the surface modification of materials using a dielectric barrier discharge (DBD) plasma jet at atmospheric pressure is presented in this paper. The emission spectral lines of an argon plasma jet at atmospheric pressure were recorded by the grating spectrograph HR2000 and computer software. The argon plasma emission spectra, whose range is from 300nm to 1000nm, were measured at different applied voltages. Compared to the air plasma emission spectra under the same circumstances, it is shown that all of the spectral lines are attributed to neutral argon atoms. The spectral lines at 763.51nm and 772.42nm are chosen to estimate the electron excitation temperature. The purpose of the study is to research the relationship between the applied voltage and the temperature in order to control the process of materials' surface modification promptly. The results show that the electron excitation temperature is in the range of 0.1eV-0.5eV and it increases with increasing applied voltage. In the process of surface modification under the plasma jet, an infrared radiation thermometer was used to measure the material surface temperature under the plasma jet. The results show that the material surface temperature is in the range of 50~100 degrees centigrade and it also increases with increasing applied voltage. Because the material surface was under the plasma jet and its temperature was determined by the plasma, the material surface temperature increases with the increasing macro-temperature of the plasma jet, and the relationship between the surface temperature and the applied voltage approximately indicates the relationship between the macro-temperature of the plasma jet and the applied voltage. The experimental results indicate that the DBD plasma jet at atmospheric pressure is a new approach to improve the quality of materials' surface modification, and spectral diagnosis is proved to be a workable method by choosing a suitable applied voltage.",2009,0, 4778,Reputation-Aware Scheduling for Storage Systems in Data Grids,"Data grids provide data-intensive applications with a large virtual storage framework with unlimited power. However, conventional scheduling algorithms for data grids are unable to meet the reputation service requirements of data-intensive applications. In this paper we address the problem of scheduling data-intensive jobs on data grids subject to reputation service constraints.
Using the reputation-aware technique, a dynamic scheduling strategy is proposed to improve the capability of predicting reliability and credibility for data-intensive applications. To incorporate reputation service into job scheduling, we introduce a new performance metric, degree of reputation sufficiency, to quantitatively measure the quality of reputation service provided by data grids. Experimental results based on a simulated grid show that the proposed scheduling strategy is capable of significantly satisfying the reputation service requirements and guaranteeing the desired response times.",2009,0, 4779,A Heuristic Approach with Branch Cut to Service Substitution in Service Orchestration,"With the rapidly growing number of Web services throughout the Internet, Service Oriented Architecture (SOA) enables a multitude of service providers (SP) to provide loosely coupled and inter-operable services at different Quality of Service (QoS) levels. This paper considers services that are published to a QoS-aware registry. The structure of a composite service is described as a Service Orchestration that allows atomic services to be brought together into one business process. This paper considers the problem of finding a set of substitute atomic services so that the Service Orchestration re-satisfies the given multi-QoS constraints when one QoS metric becomes unsatisfied at runtime. This paper leverages hypothesis testing to detect possibly faulty atomic services, and proposes heuristic algorithms with different levels of branch cut to determine the Service Orchestration substitutions. Experiments show that the algorithms are effective and efficient, and that the probability cut algorithm reaches a cut/search ratio of 137.04% without losing solutions.",2009,0, 4780,Using Hessian Locally Linear Embedding for autonomic failure prediction,"The increasing complexity of modern distributed systems makes conventional fault tolerance and recovery prohibitively expensive. One of the promising approaches is online failure prediction. However, the process of feature extraction depends on experienced administrators and their domain knowledge to filter and compress error events into a form that is suitable for failure prediction. In this paper, we present a novel performance-centric approach to automate failure prediction with Manifold Learning techniques. More specifically, we focus on methods that use the Supervised Hessian Locally Linear Embedding algorithm to achieve autonomic failure prediction. In our experimental work we found that our method can automatically predict more than 60% of the CPU and memory failures, and around 70% of the network failures, based on runtime monitoring of the performance metrics.",2009,0, 4781,How much input vectors affect nano-circuit's reliability estimates,"As the sizes of (nano-)devices are aggressively scaled deep towards the nanometer range, the design and manufacturing of future (nano-)circuits will become extremely complex and inevitably introduce more defects, while their functioning will be adversely affected by (transient) faults. Therefore, accurately calculating the reliability of future designs will become a very important factor for (nano-)circuit designers as they investigate several alternatives for optimizing the tradeoffs between the conflicting metrics of area-power-energy-delay versus reliability. 
This paper studies the effect of the input vectors on the (nano-)circuit's reliability, and introduces a time-efficient method for quickly and accurately identifying the lower/upper reliability bounds. Simulations results support the claim that the absolute difference between the lowest and the highest achievable reliability is of one-to-two orders of magnitude. Therefore, future designs should consider the worst case input vector(s) in order to guarantee the required reliability margins.",2009,0, 4782,Creating On-Demand Service-Oriented Applications from Intentions Model,"The recent years have seen a flurry of research inspired by social and biological models to achieve the software autonomy. This has been prompted by the need to automate laborious administration tasks, recovery from unanticipated systems failure, and provide self-protection from security vulnerabilities, whilst guaranteeing predictable autonomic software behavior. However, runtime assured adaptation of software to new requirement is still an outstanding issue for research. This paper presents a supporting language for process-oriented programming of autonomic software. The paper starts by a review of the state-of-the-art into runtime software adaptation. This is followed by an outline of a developed Neptune framework and language support, which is here described via an illustrative example pet shop benchmark. The paper ends with a discussion and some concluding remarks leading to suggested further works.",2009,0, 4783,Field-Based Branch Prediction for Packet Processing Engines,"Network processors have exploited many aspects of architecture design, such as employing multi-core, multi-threading and hardware accelerator, to support both the ever-increasing line rates and the higher complexity of network applications. Micro-architectural techniques like superscalar, deep pipeline and speculative execution provide an excellent method of improving performance without limiting either the scalability or flexibility, provided that the branch penalty is well controlled. However, it is difficult for traditional branch predictor to keep increasing the accuracy by using larger tables, due to the fewer variations in branch patterns of packet processing. To improve the prediction efficiency, we propose a flow-based prediction mechanism which caches the branch histories of packets with similar header fields, since they normally undergo the same execution path. For packets that cannot find a matching entry in the history table, a fallback gshare predictor is used to provide branch direction. Simulation results show that the our scheme achieves an average hit rate in excess of 97.5% on a selected set of network applications and real-life packet traces, with a similar chip area to the existing branch prediction architectures used in modern microprocessors.",2009,0, 4784,Reliable Software Distributed Shared Memory Using Page Migration,"Reliability has recently become an important issue in PC cluster technology. This research proposes a software distributed shared memory system, named SCASH-FT, as an execution platform for high performance and highly reliable parallel system for commodity PC clusters. To achieve fault tolerance, each node has redundant page data that allows recovery from node failure using SCASH-FT. All page data is checkpointed and duplicated to another node when a user explicitly calls the checkpoint function. 
When failure occurs, SCASH-FT invokes the rollback function by restarting an execution from the last checkpoint data. SCASH-FT takes charge of processes such as detecting failure and restarting execution. So, all you have to do is just adding checkpoint function calls in the source code to determine the timing of each checkpoint. Evaluation results show that the checkpoint cost and the rollback penalty depend on the data access pattern and the checkpoint frequency. Thus, users can control their application performance by adjusting checkpoint frequency.",2009,0, 4785,A Quantitative Evaluation of Software Quality Enhancement by Refactoring Using Dependency Oriented Complexity Metrics,"The maintainability of software depends on the quality of software. Software under evolution is modified and enhanced to cater the new requirements. Due to this the software becomes more complex and deviates from its original design, in turn lowering the quality. Refactoring makes object oriented software systems maintainable. Effective refactoring requires proper metrics to quantitatively ascertain the improvement in the quality after refactoring. In this direction, we have made an effort to quantitatively evaluate the quality enhancement by refactoring using dependency oriented complexity metrics. In this paper three experimental cases are given. The metrics have successfully indicated quantitatively the presence of defects and the improvement in the quality of designs after refactoring. These metrics have acted as quality indicators with respect to designs considered under ripple effects before and after refactoring. Metrics as quality indicators help to estimate the required maintenance efforts.",2009,0, 4786,PSO approach to preview tracking control systems,"Preview Control is a field well suited for application to systems that have reference signals known a priori. The use of advance knowledge of reference signal can improve the tracking quality of the concerned control system. The classical solution to the Preview Control problem is obtained using the Algebraic Riccati Equation. The solution obtained is good but it is not optimal and has a scope of improvement. This paper presents a novel method of design of H¿ Preview Controller using Particle Swarm Optimization (PSO) technique. The procedure is based on improving the performance by minimizing the objective function i.e. IAE (Integral of Absolute Error). The procedure is tested for two systems - Altitude Control System of second order and an industrial system of fifth order using MATLAB software environment. The results show that the solutions of PSO based technique are better in terms of the control characteristics of transient response of the system for various preview lengths, stability and also in terms of the objective function Integral of Absolute Error (IAE).",2009,0, 4787,A novel approach to minimizing the risks of soft errors in mobile and ubiquitous systems,"A novel approach to minimizing the risks of soft errors at modeling level of mobile and ubiquitous systems is outlined. From a pure dependability viewpoint, critical components, whose failure is likely to impact on system functionality, attract more attention of protection/prevention mechanisms (against soft errors) than others do. Tolerating soft errors can be much improved if critical components can be identified at an early design phase and measures are taken to lower their criticalities at that stage. 
This improvement is achieved by presenting a criticality ranking (among the components), formed by combining a prediction of soft errors, their consequences, and a propagation of failures at the system modeling phase, and by pointing out ways to apply changes in the model to minimize the risks of degradation of desired functionalities. Case study results are given to illustrate and validate the approach.",2009,0, 4788,An approach to automatic verification of stochastic graph transformations,"Non-functional requirements like performance and reliability play a prominent role in distributed and dynamic systems. Measuring and predicting such properties using stochastic formal methods is crucial. At the same time, graph transformation systems are a suitable formalism to formally model distributed and dynamic systems. Already, to address these two issues, stochastic graph transformation systems (SGTS) have been introduced to model dynamic distributed systems. But most of the research so far has concentrated on SGTS as a modelling means without considering the need for suitable analysis tools. In this paper, we present an approach to verifying this kind of graph transformation system using PRISM (a stochastic model checker). We translate the SGTS to the input language of PRISM; PRISM then performs the model checking and returns the results to the designers.",2009,0, 4789,A multi-scaling based admission control approach for network traffic flows,"In this paper, we propose an analytical expression for estimating the byte loss probability at a single-server queue with multi-scale traffic arrivals. We extend our investigation to the application potential of the estimation method and its possible quality in connection admission control mechanisms. Extensive experimental tests validate the efficiency and accuracy of the proposed loss probability estimation approach and its superior performance for admission control applications in network connections with respect to some well-known approaches suggested in the literature.",2009,0, 4790,Diagnostic models for sensor measurements in rocket engine tests,"This paper presents our ongoing work in the area of using virtual reality (VR) environments for the Integrated Systems Health Management (ISHM) of rocket engine test stands. Specifically, this paper focuses on the development of an intelligent valve model that integrates into the control center at NASA Stennis Space Center. The intelligent valve model integrates diagnostic algorithms and 3D visualizations in order to diagnose and predict failures of a large linear actuator valve (LLAV). The diagnostic algorithm uses auto-associative neural networks to predict expected values of sensor data based on the current readings. The predicted values are compared with the actual values and drift is detected in order to predict failures before they occur. The data is then visualized in a VR environment using proven methods of graphical, measurement, and health visualization. The data is also integrated into the control software using an ActiveX plug-in.",2009,0, 4791,Nonlinear adaptive flight control law design and handling qualities evaluation,"This paper considers the design of a stability and control augmentation system for a modern fighter aircraft. The aim of the flight control system is to offer the pilot consistently good flying and handling qualities over a specified flight envelope and to provide robustness to model uncertainties. 
A nonlinear adaptive backstepping method is proposed to directly deal with the nonlinearities and the uncertainties of the system. B-spline neural networks are used to partition the flight envelope into multiple connecting regions. In each partition a locally valid linear-in-the-parameters nonlinear aircraft model is defined, of which the unknown parameters are approximated online by Lyapunov based update laws. These update laws take aircraft state and input constraints into account so that they do not corrupt the parameter estimation process. The desired aircraft response characteristics are enforced with command filters and verified by applying conventional handling qualities analysis techniques to numerical simulation data. Simulation results show that the controller is capable of giving desired closed-loop nominal and robust performance in the presence of aerodynamic uncertainties.",2009,0, 4792,Predicting Defect-Prone Software Modules at Different Logical Levels,"Effective software defect estimation can bring cost reduction and efficient resources allocation in software development and testing. Usually, estimation of defect-prone modules is based on the supervised learning of the modules at the same logical level. Various practical issues may limit the availability or quality of the attribute-value vectors extracting from the high-level modules by software metrics. In this paper, the problem of estimating the defect in high-level software modules is investigated with a multi-instance learning (MIL) perspective. In detail, each high-level module is regarded as a bag of its low-level components, and the learning task is to estimate the defect-proneness of the bags. Several typical supervised learning and MIL algorithms are evaluated on a mission critical project from NASA. Compared to the selected supervised schemas, the MIL methods improve the performance of the software defect estimation models.",2009,0, 4793,CBCT-subsystem performance of the multi-modality Brightview XCT system (M09-26),"The new Brightview XCT system uses a flat-panel detector to perform CBCT imaging for attenuation correction and localization. Features include a small footprint due to an offset-detector geometry, advanced scatter correction - both software and hardware - isotropic voxels, and GPU-accelerated reconstruction. System performance characteristics of the CBCT system such as spatial resolution, HU linearity, uniformity, noise, low-contrast detectability, and dose measurements are discussed in this paper.",2009,0, 4794,Improving lesion detectability of a PEM system with post-reconstruction filtering,"We present a method to quantify the image quality of a positron emission mammography (PEM) imaging system through the metric of lesion detectability. For a customized image quality phantom, we assess the impact of different post-reconstruction filters on the acquired PEM image. We acquired six image quality phantom images on a Naviscan PEM scanner using different scan durations which gave differing amounts of background noise. The image quality phantom has dimensions of 130 mm × 130 mm × 66 mm and consists of 15 hot rod inserts with diameters of approximately 10 mm, 5 mm, 4 mm, 3 mm, and 2 mm filled with activity ratios of 3.5, 6.8 and 12.7 times the background activity. One region of the phantom had no inserts so as to measure the uniformity of the background noise. 
Lesion detectability was determined for each background uniformity and each activity ratio by extrapolating a fit of the recovery coefficients to the point where the lesion would be lost in the noise of the background (defined as 3 times the background's standard deviation). The data were reconstructed by the system's standard clinical software using an MLEM algorithm with 5 iterations. We compare the lesion detectability of an unfiltered image to the image after applying one of five common post-reconstruction filters. Two of the filters were found to improve lesion detectability: a bilateral filter (9% improvement) and a Perona-Malik filter (8% improvement). One filter was found to have negligible effect: a Gaussian filter showed a 1% decrease in lesion detectability. The other two filters tested were found to worsen lesion detectability: a median filter (8% decrease) and a Stick filter (7% decrease).",2009,0, 4795,The Utah PET lesion detection database,"Task-based assessment of image quality is a challenging but necessary step in evaluating advancements in PET instrumentation, algorithms, and processing. We have been developing methods of evaluating observer performance for detecting and localizing focal warm lesions using experimentally-acquired whole-body phantom data designed to mimic oncologic FDG PET imaging. This work describes a new resource of experimental phantom data that is being developed to facilitate lesion detection studies for the evaluation of PET reconstruction algorithms and related developments. A new large custom-designed thorax phantom has been constructed to complement our existing medium thorax phantom, providing two whole-body setups for lesion detection experiments. The new phantom is ~50% larger and has a removable spine/rib-cage attenuating structure that is held in place with low water resistance open cell foam. Several series of experiments have been acquired, with more ongoing, including both 2D and fully-3D acquisitions on tomographs from multiple vendors, various phantom configurations and lesion distributions. All raw data, normalizations, and calibrations are collected and offloaded to the database, enabling subsequent retrospective offline reconstruction with research software for various applications. The offloaded data are further processed to identify the true lesion locations in preparation for use with both human observers and numerical studies using the channelized non-prewhitened observer. These data have been used to study the impact of improved statistical algorithms, point spread function modeling, and time-of-flight measurements upon focal lesion detection performance, and studies on the effects of accelerated block-iterative algorithms and advanced regularization techniques are currently ongoing. Interested researchers are encouraged to contact the author regarding potential collaboration and application of database experiments to their projects.",2009,0, 4796,Data Quality Monitoring of the CMS Tracker,"The data quality monitoring (DQM) of the Compact Muon Solenoid (CMS) silicon tracking detectors (tracker) at the Large Hadron Collider (LHC) at CERN is a software based system designed to monitor the detector and reconstruction performance, to identify problems and to certify the collected data for physics analysis. It uses the framework provided by the central CMS DQM as well as tools developed especially for the CMS tracker DQM. 
This paper describes the aim, framework conditions, tools and workflows of the CMS tracker DQM and shows examples of its successful use during the recent commissioning phase of the CMS experiment.",2009,0, 4797,Online monitoring system for Double Chooz experiment,"Double Chooz is a reactor-neutrino experiment at the Chooz nuclear power plant in France to measure the unknown neutrino mixing angle θ13 with a better sensitivity than the current best limit. A remote-monitoring framework has been developed to check the status of the DAQ systems, which enables off-site collaborators to access online monitoring data through the internet. Its graphical user interfaces (GUI) are developed with platform-independent technologies so as to be available through the internet. As fault-tolerant monitoring systems in Double Chooz, the Gaibu ('external' in Japanese) system and the Log-Message system have been developed. The Gaibu system works as an alarm message provider operating outside the DAQs, and the Log-Message system is a log file manager. Development of these systems is almost finished, and we are now preparing software commissioning at the far detector site.",2009,0, 4798,CZT quasi-hemispherical detectors with improved spectrometric characteristics,"At present, various CZT detectors of different designs and sizes are widely and successfully used for different applications due to their favorable detection properties. Among them are hemispherical and quasi-hemispherical detectors. These detectors have a rather simple design and do not require special electronics for application. Development of quasi-hemispherical detector fabrication methods allows a noticeable improvement of the detectors' spectrometric performance due to charge collection optimization. This will allow increasing the yield of high quality detectors. For example, the energy resolution at the 662 keV line and the peak-to-Compton ratio of a quasi-hemispherical detector with a volume of 150 mm3 were improved from 27 keV and 2.5 to 13 keV and 4.9, respectively. Total absorption peak efficiencies over the whole energy range remained unchanged. Moreover, the improvements were obtained without increasing the detectors' operating voltages and without applying any pulse shape correction/selection schemes.",2009,0, 4799,Iterative layer-based raytracing on CUDA,"A raytracer is an application capable of tracing rays from a point into a scene in order to determine the closest sphere intersection along the ray's direction. Because of their recursive nature, raytracing algorithms are hard to implement on architectures which do not support recursion, such as the NVIDIA CUDA architecture. Even if the recursive portion of a typical raytracing algorithm can be rewritten iteratively, naively doing so could tamper with the image generation process, preventing the parallel algorithm's results from maintaining high fidelity to the sequential algorithm's and resulting, in many cases, in lower quality images. In this paper we address both issues by presenting a novel approach for expressing the recursive structure of raytracer algorithms iteratively, while still maintaining high fidelity to the images generated by the sequential algorithm, and leveraging the processing power of the GPU for parallelizing the image generation process. Our work focuses on designing and implementing a raytracer that renders arbitrary scenes and the reflections among the objects contained in them. 
However, it can be easily extended to implement other natural phenomena, such as light refraction, and to aid the iterative implementation of recursive algorithms in architectures like CUDA, which do not support recursive function calls.",2009,0, 4800,Automatic Testing of Financial Charting Software Component: A Case Study on Point and Figure Chart,"
First Page of the Article
",2009,0, 4801,A new approach for DCT coefficients estimation and super resolution of compressed video,"Reconstructing high-resolution images from compressed low-resolution video is currently a hot issue. In many real applications, such as surveillance, users often only get low-quality videos in which it is hard to identify objects of interest, so super-resolution restoration is an effective tool to enhance the quality of images. Quantization loss is the major cause of lost image detail. This paper proposes a new method to estimate the quantization noise in the frequency domain using a Laplace model for the DCT coefficients; the Laplace distribution parameter is treated as a variable and iterated simultaneously with the target high-resolution images. Experiments indicate that this algorithm performs better than others at restoring details lost to quantization noise in compressed video.",2009,0, 4802,Quality prediction model of object-oriented software system using computational intelligence,"Effective prediction of fault-proneness plays a very important role in the analysis of software quality and the balancing of software cost, and it is also an important problem in software engineering. The increasing importance of software quality is leading to the development of new, sophisticated techniques that can be used to construct models for predicting quality attributes. In this paper, we use fuzzy c-means clustering (FCM) and a radial basis function neural network (RBFNN) to construct a prediction model of fault-proneness, where the RBFNN is used as a classifier and FCM as a clusterer. Object-oriented software metrics serve as the input variables of the fault prediction model. Experimental results confirm that the designed model is very effective for predicting a class's fault-proneness: it has high accuracy, its implementation requires neither extra cost nor expert knowledge, and it is automated. Therefore, the proposed model is very useful for predicting software quality and classifying fault-proneness.",2009,0, 4803,Adaptation of ATAMSM to software architectural design practices for organically growing small software companies,"The architecture of a software application determines the degree of success of both the operation and the development of the software. Adopted architectural options not only affect the functionality and performance of the software, but they also affect delivery-related factors such as cost, time, changeability, scalability, and maintainability. It is thus very important to find appropriate means of assessing benefits as well as liabilities of different architectural options to maximize the life-time benefit and reduce the overall cost of ownership of a software application. The Architecture Tradeoff Analysis Method (ATAMSM) developed by the Software Engineering Institute (SEI) is such a tool. However, it is a considerably large framework for dealing with architectural tradeoff issues faced by large companies developing large and complex software applications. Practicing full-blown ATAM without taking into consideration the diverse forces affecting the value added by its practice does not maximize the benefits of its adoption. The forces faced by small software companies are significantly different from those faced by large software companies. Therefore, ATAM should be adapted to make it suitable for practice by small software companies. 
This paper presents information about the architectural practice level of organically grown small software companies within the context of ATAM, followed by a gap analysis between the industry practices and ATAM, and adaptation recommendations. Both a literature review and a field investigation based on key informant interviews have been performed for this purpose. Based on the findings of this study, an adaptation process of ATAM for small companies has been proposed.",2009,0, 4804,Nonparametric multivariate anomaly analysis in support of HPC resilience,"Large-scale computing systems provide great potential for scientific exploration. However, the complexity that accompanies these enormous machines raises challenges for both users and operators. The effective use of such systems is often hampered by failures encountered when running applications on systems containing tens-of-thousands of nodes and hundreds-of-thousands of compute cores capable of yielding petaflops of performance. In systems of this size, failure detection is complicated and root-cause diagnosis is difficult. This paper describes our recent work in the identification of anomalies in monitoring data and system logs to provide further insights into machine status, runtime behavior, failure modes and failure root causes. It discusses the details of an initial prototype that gathers the data and uses statistical techniques for analysis.",2009,0, 4805,Study on performance testing of index server developed as ISAPI Extension,"A major concern of most businesses is their ability to meet customers' performance requirements. Correspondingly, in order to ensure that an information system provides high-quality service, it is necessary to test its performance before the system is released. This paper proposes an approach to performance testing of an index server developed as an ISAPI Extension. The paper mainly focuses on a case study that demonstrates the approach on a security updating index server. With avalanche and testing in the test lab, we assessed the performance of the system under both current workloads and those likely to be encountered in the future. In addition, this led us to find the bottleneck in its performance.",2009,0, 4806,Electrical equipment fault diagnosis system based on the decomposition products of SF6,"This paper presents an electrical equipment fault diagnosis system based on the decomposition products of SF6, and introduces the hardware and software implementation of the system. The hardware uses an ATmega128-series single-chip platform, and the software uses an advanced wavelet neural network fault diagnosis method. To prove the superiority of this algorithm, we perform simulations and compare it with other methods. A good synergy of hardware and software is used in SF6 electrical equipment fault diagnosis, analyzing the content of SF6 decomposition products to judge whether an electrical equipment fault has occurred and to make a fault prediction. In this paper we introduce the hardware design method and the detailed design of the software, since the strong electromagnetic interference environment is very important to the system design, and finally give experimental data to prove the system's reliability and practicality.",2009,0, 4807,A HW/SW mixed mechanism to improve the dependability of a stack processor,"In this paper we present a journaling mechanism to improve the dependability of a stack processor. 
This approach is based on a HW/SW mixed mechanism, using hardware error detection and software error correction. The SW correction is based on a rollback mechanism and relies on a journal. The journal is located between the processor and the main memory in such a way that all data written into the main memory must pass through it. When an error is detected, the rollback mechanism is executed in the journal, and the processor re-executes from the last safe state. In this way, only validated data is written into the main memory. In order to evaluate the performance of our proposed architecture, the clocks per instruction are measured for different benchmarks in the presence of a high error rate.",2009,0, 4808,Multilayer Architecture Based on HMM and SVM for Fault Classification,"In order to solve the problems of current machine learning in fault diagnosis systems for chemical plants, a more effective multilayer architecture model is used in this paper. The hidden Markov model (HMM) is good at dealing with dynamic continuous data, and the support vector machine (SVM) shows superior performance for classification, especially for limited samples. Combining their respective virtues, we propose a new multilayer architecture model to improve classification accuracy for a fault diagnosis example. The simulation results show that this two-level architecture framework combining HMM and SVM achieves higher classification accuracy than the single HMM method with small training samples.",2009,0, 4809,Application of pre-function information in software testing based on defect patterns,"In order to improve the precision of software static testing based on defect patterns, inter-function information was extended and applied in software static testing. Pre-function information includes two parts, the effect of the context on the invoked function and the constraint of the invoked function on the context, which can be used to detect common defects such as null pointer defects, uninitialized defects, dangling pointer defects, illegal operation defects, out-of-bounds defects and so on. Experiments show that pre-function information can effectively reduce false negatives in software static testing.",2009,0, 4810,Computer simulation of the grey system prediction method and its application in water quality prediction,"Water quality prediction is an important basis for implementing water pollution control programs. In this paper, the grey system method is used to build a mathematical model of water quality prediction for the Yangtze River, and a computer simulation is used to complete the implementation of the method for solving the model. Finally, we find a method to solve the problem of predicting the water quality of the Yangtze River. The key problems are focused on the following two issues: (1) assuming that no more effective control measures are taken, we make a prediction analysis of the future development trend based on past data; (2) according to the prediction analysis, we use the computer to simulate how much sewage we need to address each year.",2009,0, 4811,Analog circuit fault diagnosis based on artificial neural network and embedded system,"An analog circuit fault diagnosis system based on an S3C2410 embedded board is implemented in this paper. The hardware and software designs are presented. A momentum-addition BP neural network algorithm is applied to the system. Real-time data collection and on-line detection of analog circuit fault conditions are designed as the basic functions of the embedded system. 
An intelligent and miniaturized diagnosis system is thus realized.",2009,0, 4812,Hierarchical parametric test metrics estimation: A ΣΔ converter BIST case study,"In this paper we propose a method for evaluating test measurements for complex circuits that are difficult to simulate. The evaluation aims at estimating test metrics, such as parametric test escape and yield loss, with parts per million (ppm) accuracy. To achieve this, the method combines behavioral modeling, density estimation, and regression. The method is demonstrated for a previously proposed Built-In Self-Test (BIST) technique for ΣΔ Analog-to-Digital Converters (ADCs), explaining in detail the derivation of a behavioral model that captures the main nonidealities in the circuit. The estimated test metrics are further analyzed in order to uncover trends in a large device sample that explain the source of erroneous test decisions.",2009,0, 4813,A no-reference perceptual blur metric using histogram of gradient profile sharpness,"No-reference measurement of blurring artifacts in images is a challenging problem in the image quality assessment field. One of the difficulties is that the inherently blurry regions in some natural images may disturb the evaluation of blurring artifacts. In this paper, we study the image gradients along local image structures and propose a new perceptual blur metric to deal with the above problem. The gradient profile sharpness of image edges is efficiently calculated along the horizontal or vertical direction. Then the sharpness distribution histogram, rectified by a just noticeable distortion (JND) threshold, is used to evaluate the blurring artifacts and assess the image quality. Experimental results show that the proposed method can achieve good image quality prediction performance.",2009,0, 4814,Resource prediction and quality control for parallel execution of heterogeneous medical imaging tasks,"We have established a novel control system for combining the parallel execution of deterministic and non-deterministic medical imaging applications on a single platform, sharing the same constrained resources. The control system aims at avoiding resource overload and ensuring throughput and latency of critical applications, by means of accurate resource-usage prediction. Our approach is based on modeling the required computation tasks, by employing a combination of weighted moving-average filtering and scenario-based Markov chains to predict the execution. Experimental validation on medical image processing shows an accuracy of 97%. As a result, the latency variation within non-deterministic analysis applications is reduced by 70% by adaptive splitting/merging of tasks. Furthermore, the parallel execution of a deterministic live-viewing application features constant throughput and latency by dynamically switching between quality modes. Interestingly, our solution can successfully be reused for alternative applications with several parallel streams, like in surveillance.",2009,0, 4815,Logo insertion transcoding for H.264/AVC compressed video,"H.264/AVC is quickly gaining ground in many aspects of video applications, due to its superior coding performance. Inserting a company logo into H.264/AVC compressed video streams has been a highly desirable application in the TV telecasting industry. In this paper, we propose a novel and efficient logo-insertion scheme for H.264/AVC compressed videos. Our proposed scheme overcomes the numerous coding dependencies, and minimizes the changes to the original compressed videos. 
Experimental results show that our proposed transcoding scheme achieves extraordinary video quality and significantly reduces the bit rate and computational cost. Compared with the cascaded transcoding scheme, our proposed logo-insertion method achieves an average of 1.16 dB PSNR increase, or a 68.6% bit-rate reduction. Our scheme also dramatically reduces the total transcoding time and the motion-estimation time by at least 67.2% and 97.4%, respectively.",2009,0, 4816,Development of an Electrical Power Quality Monitor based on a PC,"This paper describes an electric power quality monitor developed at the University of Minho. The hardware of the monitor is constituted by four current sensors based on Rogowski effect, four voltage sensors based on Hall effect, a signal conditioning board and a computer. The software of the monitor consists of several applications, and it is based on LabVIEW. The developed applications allow the equipment to function as a digital scope, analyze harmonic content, detect and record disturbances in the voltage (wave shapes, sags, swells, and interruptions), measure energy, power, unbalances, power factor, register and visualize strip charts, register a great number of data in the hard drive and generate reports. This article also depicts an electrical power quality monitor integrated into active power filters developed at the University of Minho.",2009,0, 4817,Tutorial proposal efficient solving of optimization problems using advanced boolean satisfiability and Integer Linear Programming techniques,"Recent years have seen a tremendous growth in the number of research and development groups at universities, research labs, and companies that have started using Boolean Satisfiability (SAT) algorithms for solving different decision and optimization problems in Computer Science and Engineering. This has lead to the development of highly-efficient SAT solvers that have been successfully applied to solve a wide-range of problems in Electronic Design Automation (EDA), Artificial Intelligence (AI), Networking, Fault Tolerance, Security, and Scheduling. Examples of such problems include automatic test pattern generation for stuck-at faults (ATPG), formal verification of hardware and software, circuit delay computation, FPGA routing, power leakage minimization, power estimation, circuit placement, graph coloring, wireless communications, wavelength assignment, university classroom scheduling, and failure diagnosis in wireless sensor networks. SAT solvers have recently been extended to handle Pseudo-Boolean (PB) constraints which are linear inequalities with integer coefficients. This feature allowed SAT solvers to handle optimization problems, as opposed to only decision problems, and to be applied to a variety of new applications. Recent work has also showed that free open source SAT-based PB solvers can compete with the best generic Integer Linear Programming (ILP) commercial solvers such as CPLEX. This tutorial is aimed at introducing the latest advances in S AT technology. Specifically, we describe the simple new input format of SAT solvers and the common SAT algorithms used to solve decision/optimization problems. In addition, we highlight the use of SAT algorithms in solving a variety of EDA decision and optimization problems and compare its performance to generic ILP solvers. This should guide researchers in solving their existing optimization problems using the new SAT technology. 
Finally, we provide a perspective on future work on SAT.",2009,0, 4818,Chaos Immune Particle Swarm Optimization Algorithm with Hybrid Discrete Variables and its Application to Mechanical Optimization,"During the iterative process of standard particle swarm optimization (PSO), the premature convergence of particles decreases the algorithm's searching ability. By analyzing the reason for premature particle convergence during the update process, and by introducing a selection strategy based on antibody density and initialization based on equal-probability chaos, a chaos immune particle swarm optimization (CIPSO) algorithm with a hybrid discrete variables model was proposed, and its program CIPSO1.0 was developed with Matlab software. Chaos-based initialization gives the initial particles good performance, and the antibody-density-based selection strategy makes the CIPSO particles maintain diversity during the iterative process, thus overcoming the defect of premature convergence. An example of mechanical optimization indicates that, compared with existing algorithms, CIPSO obtains better results, thus certifying the improvement of the algorithm's searching ability by the immunity mechanism and chaotic initialization of the particle swarm.",2009,0, 4819,"A multi-tenant oriented performance monitoring, detecting and scheduling architecture based on SLA","Software as a Service (SaaS) is thriving as a new mode of service delivery and operation with the development of network technology and the maturity of application software. SaaS application providers offer services for multiple tenants through the 'single-instance multi-tenancy' model, which can effectively reduce service costs due to the scale effect. Meanwhile, the providers allocate resources according to the SLA signed with tenants to meet the different needs of service quality. However, the service quality of some tenants will be affected by abnormal consumption of system resources, since both hardware and software resources are shared by tenants. In order to deal with this issue, we propose a multi-tenant oriented monitoring, detecting and scheduling architecture based on SLA for performance isolation. It monitors the service quality of each tenant, discovers abnormal status and dynamically adjusts the use of resources based on quantization of SLA parameters to ensure the full realization of SLA tasks.",2009,0, 4820,An Airborne imaging Multispectral Polarimeter (AROSS-MSP),"Transport of sediment and organisms in rivers, estuaries and the near-shore ocean is dependent on the dynamics of waves, tides, turbulence, and the currents associated with these interacting bodies of water. We present measurements of waves, currents and turbulence from color and polarization remote sensing in these regions using our Airborne Remote Optical Spotlight System-Multispectral Polarimeter (AROSS-MSP). AROSS-MSP is a 12-channel sensor system that measures 4 color bands (RGB-NTR) and 3 polarization states for the full linear polarization response of the imaged scene. Color and polarimetry, from airborne remotely-sensed time-series imagery, provide unique information for retrieving dynamic environmental parameters relating to sediment transport processes over a larger area than is possible with typical in situ measurements. Typical image footprints provide area coverage on the water surface on the order of 2 square kilometers with 2 m ground sample distance. 
A significant first step, in advanced sensing systems supporting a wide range of missions for organic UAVs, has been made by the successful development of the Airborne Remote Optical Spotlight System (AROSS) family of sensors. These sensors, in combination with advanced algorithms developed in the Littoral Remote Sensing (LRS) and Tactical Littoral Sensing (TLS) Programs, have exhibited a wide range of important environmental assessment products. An important and unique aspect of this combination of hardware and software has been the collection and processing of time-series imaging data from militarily-relevant standoff ranges that enable characterization of riverine, estuarine and nearshore ocean areas. However, an optimal EO sensor would further split the visible and near-infrared light into its polarimetric components, while simultaneously retaining the spectral components. AROSS-MSP represents the third generation of sophistication in the AROSS series, after AROSS-MultiChannel (AROSS-MC) which was d- veloped to collect and combine time-series image data from a 4-camera sensor package. AROSS-MSP extends the use of color or polarization filters on four panchromatic cameras that was provided by AROSS-MC to 12 simultaneous color and polarization data channels. This particular field of optical remote sensing is developing rapidly, and data of this much more general form is expected to enable the development of a number of additional important environmental data products. Important examples that are presently being researched are: minimizing surface reflections to image the sub-surface water column at greater depth, detecting objects in higher environmental clutter, improving ability to image through marine haze and maximizing wave contrast to improve oce?anographie parameter retrievals such as wave spectra and water depth and currents. These important capabilities can be supported using AROSS-MSP. The AROSS-MSP design approach utilizes a yoke-style positioner, digital framing cameras, and integrated Global Positioning System/Inertial Measurement Unit (GPS/IMU), with a computer-based data acquisition and control system. Attitude and position information are provided by the GPS/IMU, which is mounted on the sensor payload rather than on the airframe. The control system uses this information to calculate the camera pointing direction and maintain the intended geodetic location of the aim point in close proximity to the center of the image while maintaining a standoff range suitable for military applications. To produce high quality images for use in quantitative analysis, robust individual camera and inter-camera calibrations are necessary. AROSS-MSP is optimally focused and imagery is corrected for lens vignetting, non-uniform pixel response, relative radiometry and geometric distortion. The cameras are aligned with each other to sub-pixel accuracy for production of multichannel imagery products and with the IMU for mapping to a geodetic surface. The mapped, corrected",2009,0, 4821,Real-time and long-term monitoring of phosphate using the in-situ CYCLE sensor,"Dissolved nutrient dynamics broadly affect issues related to public health, ecosystem status and resource sustainability. Modeling ecosystem dynamics and predicting changes in normal variability due to potentially adverse impacts requires sustained and accurate information on nutrient availability. On site sampling is often resource limited which results in sparse data sets with low temporal and spatial density. 
For nutrient dynamics, sparse data sets will bias analyses because critical time scales for the relevant biogeochemical processes are often far shorter and spatially limited than sampling regimes. While data on an areal basis will always be constrained economically, an in-situ instrument that provides coherent data at a sub-tidal temporal scale can provide a significant improvement in the understanding of nutrient dynamics and biogeochemical cycles. WET Labs has developed an autonomous in-situ phosphate analyzer which is able to monitor variability in the dissolved reactive phosphate concentration (orthophosphate) for months with a sub-tidal sampling regime. The CYCLE phosphate sensor is designed to meet the nutrient monitoring needs of the community using a standard wet chemical method (heteropoly blue) and minimal user expertise. The heteropoly blue method for the determination of soluble reactive phosphate in natural waters is based on the reaction of phosphate ions with an acidified molybdate reagent to yield molybdophosphoric acid, which is then reduced with ascorbic acid to a highly colored blue phosphomolybdate complex. This method is selective, insensitive to most environmental changes (e.g., pH, salinity, temperature), and can provide detection limits in the nM range. The CYCLE sensor uses four micropumps that deliver the two reagents (ascorbic acid and acidified molybdate), ambient water, and a phosphate standard. The flow system incorporates an integrated pump manifold and fluidics housing that includes controller and mixing assemblies virtually i- nsensitive to bubble interference. A 5-cm pathlength reflective tube absorption meter measures the absorption at 880 nm associated with reactive phosphate concentration. Reagents and an on-board phosphate standard for quality assurance are delivered using a novel and simple-to-use cartridge system that eliminates the user's interaction with the reagents. The reagent cartridges are sufficient for more than 1000 samples. The precision of the CYCLE sensor is ~50 nM phosphate, with a dynamic range from ~0 to 10 ?M. The CYCLE sensor operates using 12 VDC input, and has a low current draw (milliamps). CYCLE also has 1 GB on-board data storage capacity, and communicates using a serial interface. The host software for the CYCLE sensor includes a variety of features, including deployment planning and sensor configuration, data processing, plotting of raw and processed data, tracking of reagent usage and a pre and post deployment calibration utility. The instrument has been deployed in a variety of sampling situations: freshwater, estuarine, and ocean. Deployments are typically for over 1000 samples worth of continuous run time without maintenance (4-12 wks). Using the CYCLE phosphate sensor, a sufficient sampling rate (~20-30 minutes per sample) is realized to monitor in-situ nutrient variability over a broad range of time scales including tidal cycles, runoff events, and phytoplankton bloom dynamics. We present a time series of phosphate data collected in Yaquina Bay, Oregon. Combining this data with complimentary measurements, the CYCLE phosphate provides a missing link in understanding nutrient dynamics in Yaquina Bay. We demonstrate that by correlating phosphate variability with nitrate, chlorophyll, dissolved oxygen, turbidity, CDOM, conductivity, and temperature, a greater understanding of the factors influencing nutrient flux in the bay is possible. 
What nutrients limit production and whether anthropogenic or oceanic sources of nutrients dominate bloom dynamics can",2009,0, 4822,Implementations of the Navy Coupled Ocean Data Assimilation system at the Naval Oceanographic Office,"The Naval Oceanographic Office uses the Navy Coupled Ocean Data Assimilation (NCODA) system to perform data assimilation for ocean modeling. Currently the system uses a 3D multivariate optimum interpolation (3D MVOI) algorithm to produce outputs of temperature, salinity, geopotential, and u/v velocity. NCODA is run in a standalone mode to support automated ocean data quality control (NCODA OcnQC) and to test software updates. NCODA is also coupled with the Regional/Global Navy Coastal Ocean Model (RNCOM/GNCOM). The RNCOM/NCODA system is being used as part of an Adaptive Sampling and Prediction (ASAP) pre-operational project, that makes use of the Ensemble Transform (ET) and Ensemble Transform Kalman Filter (ET KF) applied to ensemble runs of the RNCOM. The ET KF is used to predict the posterior error covariances resulting from possible profile measurements. These results aid in predicting the impact of ocean observations on the future analysis, and thus allow the direction of limited assets to areas that will have the maximum gain (for applications such as ocean acoustics). A review of these systems will be given as well as examples of the metrics used for the RNCOM/NCODA system, ensemble modeling, and ASAP.",2009,0, 4823,Instrumentation for continuous monitoring in marine environments,"Continuous monitoring data are a useful source of information for the understanding of seasonal chemical and biological changes in marine environments. They are useful to estimate nutrient dynamics, primary and secondary production as well as to assess C, N, P fluxes associated with biogeochemical cycling. More and better water quality data is needed to calculate Maximum Permissible Loading of coastal waters and we need better data to assess trends, to determine current status and impairments, and to test water quality models. For a long time these requirements were not met satisfactorily due to the absence of suitable instrumentation in the market. SYSTEA has tried to bridge this gap since ten years with the development of several field analyzers (NPA, NPA Plus and Pro, DPA series and latest is the WIZ-probe) and by participating in several R&D European projects (EXOCET/D, WARMER) we have proven our ability to build reliable and efficient in-situ probes which are now commercially available. The choice to work in collaboration with scientific institutions specialized in marine ecosystem study has been made very early by SYSTEA and is actually the only company able to offer a complete range of in-situ probes for continuous nutrient analysis, using its exclusive ?-LFA technology fully developed by SYSTEA, in collaboration with Sysmedia S.r.l with remote management capabilities. These innovative technical solutions allow deploying their DPA probe down to - 1500 m depth, maintaining a high level of accuracy and robustness as proved during the European project EXOCET/D in 2006. The WIZ probe is the latest development of SYSTEA, the state of the art portable ""in-situ"" probe, to measure up to four chemical parameters continuously in surface waters or marine environments. The innovative design allows an easy handling and field deployment by the user. 
WIZ probe allows, in the standard configuration, the detection of four nutrient parameters (orthophosphate, ammonia, nit- ite and nitrate) in low concentrations while autonomously managing the well tested spectrophotometric wet chemistries, and an advanced fluorimetric method for ammonia measurement. Analytical methods have been developed for several other parameters including silicates, iron and trace metals. Results are directly recorded in concentration units; all measured values are stored with date, time and sample optical density (O.D.). The same data are remotely available through a serial communication port, which allows the complete probe configuration and remote control using the external Windows? based Wiz Control Panel software.",2009,0, 4824,Modeling and analysis of two-component cluster system,"Businesses today are becoming increasingly dependent on information technology (IT) to meet business-critical demands. The more available a computer system is, the more value it can provide to its users. In many cases high availability (HA) requirement becomes as critical as high performance. Cluster computing has been attracting more and more attention from both the industrial and the academic world for its enormous computing power and scalability. High availability features need to be included to ensure that cluster computing environments can provide continuous services. In order to validate and compare the different techniques, it is often necessary to build models of the system. A typical availability modeling method is based on analytical formalisms such as fault tree, Markov chains, Stochastic Petri Net, etc. In this paper we describe (1) a two-component cluster configuration, and (2) an availability model of that system using Markov modeling techniques.",2009,0, 4825,On the Objective Evaluation of Real-Time Networked Games,"With the recent evolution of network-based multiplayer games and the increasing popularity of online games demanding strict real-time interaction among players - like First Person Shooter (FPS) -, game providers face the problem to correlate network conditions with quality of gaming experience. This paper addresses the problem of the estimation gameplay quality during real-time games; in particular, we focus on FPS ones. Current literature usually considers end-to-end delay as the only important parameter and deducts system performance indexes from graphical ones. Player satisfaction, on the other hand, is usually evaluated in a subjective way: asking the player, or measuring how long he/she stays connected. In this paper we use a testbed with synthetic players (bots) to directly correlate network end-to-end delay and jitter with expected players' satisfaction. Running extensive experiments we argue about effective in-game performances degradation of penalized players. Performances are measured in terms of score and number of actions kills, actually - performed per minute.",2009,0, 4826,Application of single pole auto reclosing in distributaion networks with high pentration of DGS,"Due to continued penetration of DG into existing distribution network, the tripping of DG during network fault condition may affect the network stability and reliability. Operation of DG should be remain during network temporary fault as far as possible in order to maximise the benefits of interconnection of DG and increase reliability of power supply service to consumers. 
Conventional three pole auto recloser in distribution network become incompatible with the presence of DG because it interrupts the operation of DG during network temporary single phase to earth fault event and cause unnecessary disconnection of DG. Therefore the application of single pole auto reclosing scheme (SPAR) which is widely used in transmission network should be considered in distribution network with DG. Literature survey shows that no investigation has been carried out for the application of fault identification and phase selection techniques to single pole auto reclosing scheme in distribution networks with high penetration of DG. This paper presents the development of an adaptive fault identification and phase selection scheme to be used in the implementation of SPAR in power distribution network with DG. The proposed method uses only three line current measurements at the relay point. The value of line current during prefault condition and transient period of fault condition is processed using condition rules with IF-THEN in order to determine the faulty phase and to initiate single pole auto reclosure. The analysis of the proposed method is performed using PSCAD/EMTDC power system software. Test results show that the proposed method can correctly detect the faulty phase within one cycle. The validity of the proposed method has been tested for different fault locations and network operating modes.",2009,0, 4827,How simulation languages should report results: A modest proposal,"The focus of simulation software environments is on developing simulation models; much less consideration is placed on reporting results. However, the quality of the simulation model is irrelevant if the results are not interpreted correctly. The manner in which results are reported, along with a lack of standardized guidelines for reports, could contribute to the misinterpretation of results. We propose a hierarchical report structure where each reporting level provides additional detail about the simulated performance. Our approach utilizes two recent developments in output analysis: a procedure for omitting statistically meaningless digits in point estimates, and a graphical display called a MORE Plot, which conveys operational risk and statistical error in an intuitive manner on a single graph. Our motivation for developing this approach is to prevent or reduce misinterpretation of simulation results and to provide a foundation for standardized guidelines for reporting.",2009,0, 4828,Using similarity measures for test scenario selection,"Specification based testing involves using specification as the basis for generating test cases. The Unified Modeling Language (UML) consists of diagrams to capture the static and dynamic behaviour of a system. Generating test scenarios using UML activity diagrams produces all possible scenarios which is impossible to test exhaustively. Test scenario selection involves selecting an effective subset of scenarios for testing. This paper presents a new metric based on common subscenario between scenarios, their lengths, and weights based on the position of the common subscenario in the scenario. The aim of this strategy is to select scenarios that are least similar and at the same time provide high coverage. 
The method is compared with results of random selection to study effectiveness of our technique.",2009,0, 4829,Optical measurement and management system,"This paper proposes an appreciate approach to alleviate the critical issues of using optical time domain reflectometer (OTDR) for monitoring and troubleshooting a conventional passive optical network (PON) equipped with passive branching device. In this enhancement, OTDR is located at central office (CO) and a tapper circuit is designed to allow the OTDR pulse bypassing the passive branching device in a conventional PON when emitted from CO towards customer sites in downstream direction. This provides a faster, simpler, and less complex way for surveillance applications. A simulation software named Smart Access Network _ Testing, Analyzing and Database (SANTAD) is developed for providing remote controlling, optical monitoring, fault detection, and centralized troubleshooting features to assist networks operators to manage PON more efficiency and deliver high quality of services (QoS) for end-users. SANTAD increases the workforce productivity and facilitates the network management of network through centralized monitoring and troubleshooting from CO.",2009,0, 4830,Automatically Recommending Triage Decisions for Pragmatic Reuse Tasks,"Planning a complex software modification task imposes a high cognitive burden on developers, who must juggle navigating the software, understanding what they see with respect to their task, and deciding how their task should be performed given what they have discovered. Pragmatic reuse tasks, where source code is reused in a white-box fashion, is an example of a complex and error-prone modification task: the developer must plan out which portions of a system to reuse, extract the code, and integrate it into their own system. In this paper we present a recommendation system that automates some aspects of the planning process undertaken by developers during pragmatic reuse tasks. In a retroactive evaluation, we demonstrate that our technique was able to provide the correct recommendation 64% of the time and was incorrect 25% of the time. Our case study suggests that developer investigative behaviour is positively influenced by the use of the recommendation system.",2009,0, 4831,Instant-X: Towards a generic API for multimedia middleware,"The globalisation of our society leads to an increasing need for spontaneous communication. However, the development of such applications is a tedious and error-prone process. This results from the fact that in general only basic functionality is available in terms of protocol implementations and encoders/decoders. This leads to inflexible proprietary software systems implementing unavailable functionality on their own. In this work we introduce Instant-X, a novel component-based middleware platform for multimedia applications. Unlike related work, Instant-X provides a generic programming model with an API for essential tasks of multimedia applications with respect to signalling and data transmission. This API abstracts from concrete component implementations and thus allows replacing specific protocol implementations without changing the application code. Furthermore, Instant-X supports dynamic deployment, i.e., unavailable components can be automatically loaded at runtime. 
To show the feasibility of our approach we evaluated our Instant-X prototype regarding code complexity and performance.",2009,0, 4832,Comparison of different ANN techniques for automatic defect detection in X-Ray images,"X-ray imaging is extensively used in the NDT. In the conventional method, interpretation of the large number of radiographs for defect detection and evaluation is carried out manually by operator or expert, which makes the system subjective. Also interpretation of large number of images is tedious and may lead to misinterpretation. Automation of Non-Destructive evaluation techniques is gaining greater relevance but automatic analysis of X-Ray images is still a complex problem, as the images are noisy, low contrast with a number of artifacts. ANN's are systems which can be trained to analyze input data based on conditions provided to derive required output. This makes the system automatic reducing the subjective interference in analysis of data. Artificial neural network based systems are thus a feasible solution to this problem of X-Ray NDT. Due to complex nature of input images and noise present, Noise removal becomes a problem in X-Ray images. Preprocessing techniques based on statistical analysis have shown improvement in image noise reduction. Pixels/group of pixels, which deviate from the general structural pattern and grey scale distribution are located. The statistically processed pixel values are used to obtain the features vector from defective as well as from non-defective areas. Software for pre-processing and analyzing NDT images has been developed. Software allows user to train neural networks for defect detection. Once trained satisfactorily, the software scans the new input image and uses the trained ANN for defect detection. The final image with defect regions marked will be displayed. This system can be used to obtain the probable defective areas in a given input image. This paper presents performance of MLP and RBF for detection of defect. The effect of different types of input viz. template and moments on performance of ANN is discussed.",2009,0, 4833,Directional relaying during power swing and single-pole tripping,"This paper presents a directional relaying scheme during power swing and single-pole tripping condition using fuzzy logic approach. If any balanced or unbalanced fault occurs during power swing condition, conventional directional relaying algorithm finds limitation due to complex nature of signal. Once a fault is identified as being only on a phase, then the single-pole tripping takes place. During this situation the sequence component based directional relay will have a tendency to trip incorrectly due to the unbalanced loading condition. Again if a single-pole tripping situation happens during power swing, the situation becomes more complex. This paper highlights these issues and proposes a fault direction estimation technique using fuzzy logic approach where feature selection using mutual information technique is emphasized. Performance of the technique is evaluated using the test system simulated through PSCAD/EMTP software.",2009,0, 4834,A biosignal analysis system applied for developing an algorithm predicting critical situations of high risk cardiac patients by hemodynamic monitoring,"A software system for efficient development of high quality biosignal processing algorithms and its application for the Computers in Cardiology Challenge 2009 Predicting Acute Hypotensive Episodes is described. 
The system is part of a medical research network, and supports import of several standard and non-standard data formats, a modular and therefore extremely flexible signal processing architecture, remote and parallel processing, different data viewers and a framework for annotating signals and for validating and optimizing algorithms. Already in 2001, 2004 and 2006 the system was used for implementing algorithms that successfully took part in the Challenges, respectively. In 2009 we received a perfect score of 10 out of 10 in event 1 and a score of 33 out of 40 in event 2 of the Challenge.",2009,0, 4835,The impact of virtualization on the performance of Massively Multiplayer Online Games,"Today's highly successful Massively Multiplayer Online Games (MMOGs) have millions of registered users and hundreds of thousands of active concurrent users. As a result of the highly dynamic MMOG usage patterns, the MMOG operators pre-provision and then maintain throughout the lifetime of the game tens of thousands of compute resources in data centers located across the world. Until recently, the difficulty of porting the MMOG software services to different platforms made it impractical to dynamically provision resources external to the MMOG operators' data centers. However, virtualization is a new technology that promises to alleviate this problem by providing a uniform computing platform with minimal overhead. To investigate the potential of this new technology, in this paper we propose a new hybrid resource provisioning model that uses a smaller and less expensive set of self-owned data centers, complemented by virtualized cloud computing resources during peak hours. Using real traces from RuneScape, one of the most successful contemporary MMOGs, we evaluate with simulations the effectiveness of the on-demand cloud resource provisioning strategy for MMOGs. We assess the impact of provisioning of virtualized cloud resources, analyze the components of virtualization overhead, and compare provisioning of virtualized resources with direct provisioning of data center resources.",2009,0, 4836,Effectiveness and Cost of Verification Techniques: Preliminary Conclusions on Five Techniques,"A group of 17 students applied 5 unit verification techniques in a simple Java program as training for a formal experiment. The verification techniques applied are desktop inspection, equivalence partitioning and boundary-value analysis, decision table, linearly independent path, and multiple condition coverage. The first one is a static technique, while the others are dynamic. JUnit test cases are generated when dynamic techniques are applied. Both the defects and the execution time are registered. Execution time is considered as a cost measure for the techniques. Preliminary results yield three relevant conclusions. As a first conclusion, performance defects are not easily found. Secondly, unit verification is rather costly and the percentage of defects it detects is low. Finally desktop inspection detects a greater variety of defects than the other techniques.",2009,0, 4837,Adaptive Cross-Diamond Search Algorithm for Fast Block Motion Estimation in H.264,"Motion estimation plays a key role in the H.264. It is the most computationally intensive module. In this paper, we proposed a novel adaptive cross-diamond search (ACDS) algorithm for fast block motion estimation in H.264. 
The proposed algorithm adaptively chooses different search pattern according to the seven types of macro block division in H.264 and the motion vector distribution characteristics of video sequence. Experimental results show that the proposed ACDS algorithm is much faster than the UMHexagonS algorithm, whereas similar prediction quality is still maintained.",2009,0, 4838,Research of E-Commerce Software Quality Evaluation Using Rough Set Theory,"In order to overcome the subjectivity on weight allocation of the traditional evaluation methods in e-commerce software quality evaluation, based on the object classification capability of rough set theory, the paper presents a completely data-driven e-commerce software quality evaluation method using measurement methods of knowledge dependence and attribute importance in rough set theory. It overcomes the subjectivity and ambiguity of traditional evaluation methods. An example is presented to explain the evaluation process.",2009,0, 4839,Predicting Object-Oriented Software Maintainability Using Projection Pursuit Regression,"This paper presents ongoing work on using projection pursuit regression model to predict object-oriented software maintainability. The maintainability is measured as the number of changes made to code during a maintenance period by means of object-oriented software metrics. To evaluate the benefits of using PPR over nonlinear modeling techniques, we also build artificial neural network model, and multivariate adaptive regression splines model. The models performance is evaluated and compared using leave-one-out cross-validation with RMSE. The results suggest that PPR can predict more accurately than the other two modeling techniques. The study also provided the useful information on how to constructing software quality model.",2009,0, 4840,Automatic configuration of spectral dimensionality reduction methods for 3D human pose estimation,"In this paper, our main contribution is a framework for the automatic configuration of any spectral dimensionality reduction methods. This is achieved, first, by introducing the mutual information measure to assess the quality of discovered embedded spaces. Secondly, we overcome the deficiency of mapping function in spectral dimensionality reduction approaches by proposing data projection between spaces based on fully automatic and dynamically adjustable Radial Basis Function network. Finally, this automatic framework is evaluated in the context of 3D human pose estimation. We demonstrate mutual information measure outperforms all current space assessment metrics. Moreover, experiments show the mapping associated to the induced embedded space displays good generalization properties. In particular, it allows improvement of accuracy by around 30% when refining 3D pose estimates of a walking sequence produced by an activity independent method.",2009,0, 4841,"Efficient, high-quality image contour detection","Image contour detection is fundamental to many image analysis applications, including image segmentation, object recognition and classification. However, highly accurate image contour detection algorithms are also very computationally intensive, which limits their applicability, even for offline batch processing. In this work, we examine efficient parallel algorithms for performing image contour detection, with particular attention paid to local image analysis as well as the generalized eigensolver used in Normalized Cuts. 
Combining these algorithms into a contour detector, along with careful implementation on highly parallel, commodity processors from Nvidia, our contour detector provides uncompromised contour accuracy, with an F-metric of 0.70 on the Berkeley Segmentation Dataset. Runtime is reduced from 4 minutes to 1.8 seconds. The efficiency gains we realize enable high-quality image contour detection on much larger images than previously practical, and the algorithms we propose are applicable to several image segmentation approaches. Efficient, scalable, yet highly accurate image contour detection will facilitate increased performance in many computer vision applications.",2009,0, 4842,Mixing Simulated and Actual Hardware Devices to Validate Device Drivers in a Complex Embedded Platform,"The structure and the functionalities of a device driver are strongly influenced by the target platform architecture, as well as by the device communication protocol. This makes the generation of device drivers designed for complex embedded platforms a very time consuming and error prone activity. Validation becomes then a nodal point in the design flow. The aim of this paper is to present a co-simulation framework that allows validation of device drivers. The proposed framework supports all mechanisms used by device drivers to communicate with HW devices so that both modeled and actual components can be included in the simulated embedded platform. In this way, the generated code can be tested and validated even if the final platform is not ready yet. The framework has been applied to some examples to highlight the performance and effectiveness of this approach.",2009,0, 4843,Performance and scalability of M/M/c based queuing model of the SIP Proxy Server - a practical approach,"In recent years, Session Initiation Protocol (SIP) based Voice over IP (VoIP) based applications are alternative to the traditional Public Switched Telephone Networks (PSTN) because of its flexibility in the implementation of new features and services. The Session Initiation Protocol (SIP) is becoming a popular signaling protocol for Voice over IP (VoIP) based applications. The SIP Proxy server is a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network. The efficiency of this process can create large scale, highly reliable packet voice networks for service providers and enterprises. Since, SIP Proxy server performance can be characterized by its transaction states of each SIP session, we proposed the M/M/c performance model of the SIP Proxy Server and studied some of the key performance benchmarks such as server utilization, queue size and memory utilization. Provided the comparative results between the predicted results with the experimental results conducted in a lab environment.",2009,0, 4844,Vehicle speed detection system,"This research intends to develop the vehicle speed detection system using image processing technique. Overall works are the software development of a system that requires a video scene, which consists of the following components: moving vehicle, starting reference point and ending reference point. The system is designed to detect the position of the moving vehicle in the scene and the position of the reference points and calculate the speed of each static image frame from the detected positions. 
The vehicle speed detection from a video frame system consists of six major components: 1) Image Acquisition, for collecting a series of single images from the video scene and storing them in the temporary storage. 2) Image Enhancement, to improve some characteristics of the single image in order to provide more accuracy and better future performance. 3) Image Segmentation, to perform the vehicle position detection using image differentiation. 4) Image Analysis, to analyze the position of the reference starting point and the reference ending point, using a threshold technique. 5) Speed Detection, to calculate the speed of each vehicle in the single image frame using the detection vehicle position and the reference point positions, and 6) Report, to convey the information to the end user as readable information. The experimentation has been made in order to assess three qualities: 1) Usability, to prove that the system can determine vehicle speed under the specific conditions laid out. 2) Performance, and 3) Effectiveness. The results show that the system works with highest performance at resolution 320×240. It takes around 70 seconds to detect a moving vehicle in a video scene.",2009,0, 4845,A New approach of Rate-Quantization modeling for Intra and Inter frames in H.264 rate control,"Video encoding rate control has been the research focus in the recent years. The existing rate control algorithms use Rate-Distortion (R-D) or Rate-Quantization (R-Q) models. These latter assume that the enhancement of the bit allocation process, the quantization parameter determination and the buffer management are essentially based on the improvement of complexity measures estimation. Inaccurate estimation leads to wrong quantization parameters and affects significantly the global performance. Therefore, several improved frame complexity measures are proposed in literature. The efficiency of such measures is however limited by the linear prediction model which remains still inaccurate to encode complexity between two neighbour frames. In this paper, we propose a new approach of Rate-Quantization modeling for both Intra and Inter frame without any complexity measure estimation. This approach results from extensive experiments and proposes two Rate-Quantization models. The first one (M1) aims at determining an optimal initial quantization parameter for Intra frames based on sequence target bit-rate and frame rate. The second model (M2) determines the quantization parameter of Inter coding unit (Frame or Macroblock) according to the statistics of the previous coded ones. This model substitutes both linear and quadratic models used in H.264 rate controller. The simulations have been carried out using both JM10.2 and JM15.0 reference softwares. Compared to JM10.2, M1 alone, improves the PSNR up to 1.93dB, M2 achieves a closer output bit-rate and similar quality while the combined model (M1+M2) minimizes the computational complexity. (M1+M2) outperforms both JM10.2 and JM15.0 in terms of PSNR.",2009,0, 4846,Design and implementation of adaptive scalable streaming system over heterogeneous network,"Distribution of multimedia content through the internet has gain popularity with the advancement of multimedia technology through software application with different transmission capabilities. Current conventional system implementation works perfectly for a guaranteed network bandwidth. 
However, when the packet experience losses due to drop in bandwidth, it could not preserve an acceptable video quality over the heterogeneous network. Thus, if the bandwidth could be determined in real-time, an application that controls the scalability of video stream sent over the channel can be used to control the video quality. This is done by reducing the bit-rate of the video to be sent according to current available bandwidth. In this paper, a scalable video streaming system with automate bandwidth control over heterogeneous network, using spatial scalability approach and end-to-end available bandwidth estimation are designed and implemented. Distribution of multimedia data streaming would be improved without any jitter and could fit to any devices with various performance capabilities.",2009,0, 4847,Flying capacitor multicell converter based dynamic voltage restorer,"One of the major power quality problems in distribution systems are voltage sags. This paper deals with a dynamic voltage restorer (DVR) as one of the different solutions to compensate these sags and protect sensitive loads. However, the quality of DVR output voltage itself, such as THD, is important. So, in this paper a configuration of DVR based on flying capacitor multicell (FCM) converter is proposed. The main properties of FCM converter, which causes increase in the number of output voltage levels, are transformer-less operation and natural self-balancing of flying capacitors voltages. The proposed DVR consists of a set of series converter and shunt rectifier connected back-to-back. To guarantee the proper operation of shunt rectifier and maintaining the dc link voltage at the desired value, which results in suitable performance of DVR, the DVR is characterized by installing the series converter on the source-side and the shunt rectifier on the load-side. Also, the pre-sag compensation strategy and the proposed voltage sag detection and DVR reference voltages determination methods based on synchronous reference frame (SRF) are adopted as the control system. The proposed DVR is simulated using PSCAD/EMTDC software and simulation results are presented to validate its effectiveness.",2009,0, 4848,"Integrated Detection of Attacks Against Browsers, Web Applications and Databases","Anomaly-based techniques were exploited successfully to implement protection mechanisms for various systems. Recently, these approaches have been ported to the web domain under the name of ""web application anomaly detectors"" (or firewalls) with promising results. In particular, those capable of automatically building specifications, or models, of the protected application by observing its traffic (e.g., network packets, system calls, or HTTP requests and responses) are particularly interesting, since they can be deployed with little effort. Typically, the detection accuracy of these systems is significantly influenced by the model building phase (often called training), which clearly depends upon the quality of the observed traffic, which should resemble the normal activity of the protected application and must be also free from attacks. Otherwise, detection may result in significant amounts of false positives (i.e., benign events flagged as anomalous) and negatives (i.e., undetected threats). In this work we describe Masibty, a web application anomaly detector that have some interesting properties. First, it requires the training data not to be attack-free. 
Secondly, not only it protects the monitored application, it also detects and blocks malicious client-side threats before they are sent to the browser. Third, Masibty intercepts the queries before they are sent to the database, correlates them with the corresponding HTTP requests and blocks those deemed anomalous. Both the accuracy and the performance have been evaluated on real-world web applications with interesting results. The system is almost not influenced by the presence of attacks in the training data and shows only a negligible amount of false positives, although this is paid in terms of a slight performance overhead.",2009,0, 4849,The criteria of system development methodology choice for the minimization of the time and resources,"Due to decreasing of demand on the corporate informational systems becomes an important question the improvement of price-quality relation of the software product. It requires increasing of the quality level in the same time with prime cost decreasing. The aim of the given work is to describe the possibility of resource usage optimization applying different system development methodologies. In the analysis were used waterfall, iteration, spiral, and agile methodologies. The criteria of choice from them are described taking into account necessity of project's time and cost spending minimization. In the work were analyses risks that can appear at application of different methodologies, and ways of their minimization. As a result of the performed analysis in the article is proposed to choose methodology not for the company as a whole but for each separate project. In the article are given practical recommendations that can be used in the real projects.",2009,0, 4850,Estimation of program reverse semantic traceability influence at program reliability with assistance of object-oriented metrics,"In article the approach to the estimation of program reverse semantic traceability (RST) influence on program reliability with assistance of object-oriented metrics is proposed. In order to estimate reasonability of RST usage it is naturally to define how it influences on major adjectives of software development project: project cost, quality of the developed application, etc. At present object-oriented metrics of Chidamber and Kemerer are widely used for predictive estimation of software reliability at early stage of life cycle. In number of works, for example, it is proposed to use logistic regression for estimation of probability π (that a module will have a fault). The parameters of this model are found by maximal likelihood method with calculation of object-oriented metrics. The paper shows how to change the software reliability model parameters, that was received using logistic regression, in order to estimate influence of program RST on program reliability.",2009,0, 4851,Automatic defects detection in industrial C/C++ software,"The solution to the problem of automatic defects detection in industrial software is covered in this paper. The results of the experiments with the existing tools are presented. These results stand for inadequate efficiency of the implemented analysis. Existing source code static analysis methods and defects detection algorithms are covered. The program model and the analysis algorithms based on existing approaches are proposed. The problems of co-execution of different analysis algorithms are explored. The ways for improvement of analysis precision and algorithms performance are proposed. 
Advantages of the approaches developed are: soundness of a solution, full support of the features of target programming languages and analysis of the programs lacking full source code using annotations mechanism. The algorithms proposed in the paper are implemented in the automatic defects detection tool.",2009,0, 4852,The ATLAS calorimeter trigger commissioning using cosmic ray data,"While waiting for the first LHC collisions, the ATLAS detector is undergoing integration exercises performing cosmic ray data taking runs. The ATLAS calorimeter trigger software uses a common framework capable of retrieving, decoding and providing data to different particle identification algorithms. This work reports on the results obtained during the first data-taking period of 2009 with the calorimeter trigger. Experts from detector areas such as calorimetry, data flow management, data quality monitoring and the trigger worked together to reach stable data taking conditions. Issues like noise readout channels were addressed and the global trigger robustness was evaluated.",2009,0, 4853,Simulation Model of Team Software Process Using Temporal Parallel Automata,"Team Software Process (TSP) is brought forward to help improve the efficiency of software development process. Further, establishing simulation model for TSP can facilitate developers' understanding, forecast possible shortcomings and bottlenecks in process and assist developers' decision-making, supervise and control project development process. In this paper, firstly Temporal Parallel Automata (TPA) theory expanded from the finite state automata theory is applied into software process modeling. Secondly the concepts and the process control structures of TSP which are divided into sequence, AND-join, AND-split, OR-join, OR-split and Loop structure are mapped to TPA, thereby the TSP simulation model based on TPA is built. Thirdly the definition of soundness is given for soundness verification of process model. Finally, an instance is used to verify the validity of process model. It is proved that activity planning, resource allocation and schedule control of software process can be implemented effectively by the model.",2009,0, 4854,A new critical variable analysis in processor-based systems,"Determining the dependability of integrated systems with respect to soft errors is necessary for a growing number of applications. The most critical information must be identified when selective hardening is necessary to achieve good efficiency/cost trade-offs. In processor-based systems, the most critical variables must thus be identified in the application program. An improved algorithm for critical variable identification is described and validated with respect to fault injection results.",2009,0, 4855,VGrADS: enabling e-Science workflows on grids and clouds with fault tolerance,"Today's scientific workflows use distributed heterogeneous resources through diverse grid and cloud interfaces that are often hard to program. In addition, especially for time-sensitive critical applications, predictable quality of service is necessary across these distributed resources. VGrADS' virtual grid execution system (vgES) provides an uniform qualitative resource abstraction over grid and cloud systems. We apply vgES for scheduling a set of deadline sensitive weather forecasting workflows. 
Specifically, this paper reports on our experiences with (1) virtualized reservations for batchqueue systems, (2) coordinated usage of TeraGrid (batch queue), Amazon EC2 (cloud), our own clusters (batch queue) and Eucalyptus (cloud) resources, and (3) fault tolerance through automated task replication. The combined effect of these techniques was to enable a new workflow planning method to balance performance, reliability and cost considerations. The results point toward improved resource selection and execution management support for a variety of e-Science applications over grids and cloud systems.",2009,0, 4856,On improving dependability of the numerical GPC algorithm,The paper studies the dependability of software implementation of the numerical Generalized Predictive Control (GPC) Model Predictive Control (MPC) algorithm. The algorithm is implemented for a control system of a multivariable chemical reactor - a process with strong cross-couplings. Fault sensitivity of the proposed implementations is verified in experiments with a software implemented fault injector. Experimental methodology of disturbing the software implementation of the control algorithm is presented. The impact of faults on the quality of the controller performance is analysed and techniques improving the dependability of the algorithm implementation are proposed. The experimental results prove the efficiency of the software improvements.,2009,0, 4857,Trade-off between safety and normal-case control performance based on probabilistic safety management of control laws,"This paper presents a probabilistic safety management framework for control laws to provide a balance between normal-case performance, safety and fault-case performance according to the international standard on safety, IEC 61508. It is based on multiobjective design for simultaneous problems for each context to optimize only normal-case performance out of the whole including fault-case performance. Also the framework establishes the existence of trade-off between them quantitatively for the first time ever.",2009,0, 4858,Integrated process and control system model for product quality control - A soft-sensor based application,"In the near future of chemical industry, communication between design, manufacturing, marketing and management should be centered on modeling and simulation, which could integrate the whole product and process development chains, process units and subdivisions of the company. Solutions to this topic often set aside one or more component from product, process and control models, hence as a novel know-how, an information system methodology was developed. Its structure integrates models of these components with process data warehouse where integration means information, location, application and time integrity. It supports complex engineering tasks related to analysis of system performance, process optimization, operator training systems (OTS), decision support systems (DSS), reverse engineering or software sensors (soft-sensors). The case study in this article presents the application of the proposed methodology for product quality soft-sensor application by on-line melt index prediction of an operating polymerization technology.",2009,0, 4859,A fast method for scaling color images,"Image scaling is an important processing step in any digital imaging chain containing a camera sensor and a display. 
Although resolutions of the mobile displays have increased to the level that is usable for imaging, images often have larger resolution than the mobile displays. In this paper we propose a software based fast and good quality image scaling procedure, which is suitable for mobile implementations. We describe our method in detail and we present experiments showing the performances of our approach on real images.",2009,0, 4860,An extensive comparison of bug prediction approaches,"Reliably predicting software defects is one of software engineering's holy grails. Researchers have devised and implemented a plethora of bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available data set consisting of several software systems, and provide an extensive comparison of the explanative and predictive power of well-known bug prediction approaches, together with novel approaches we devised. Based on the results, we discuss the performance and stability of the approaches with respect to our benchmark and deduce a number of insights on bug prediction models.",2010,1, 4861,Assessing UML design metrics for predicting fault-prone classes in a Java system,"Identifying and fixing software problems before implementation are believed to be much cheaper than after implementation. Hence, it follows that predicting fault-proneness of software modules based on early software artifacts like software design is beneficial as it allows software engineers to perform early predictions to anticipate and avoid faults early enough. Taking this motivation into consideration, in this paper we evaluate the usefulness of UML design metrics to predict fault-proneness of Java classes. We use historical data of a significant industrial Java system to build and validate a UML-based prediction model. Based on the case study we have found that level of detail of messages and import coupling-both measured from sequence diagrams, are significant predictors of class fault-proneness. We also learn that the prediction model built exclusively using the UML design metrics demonstrates a better accuracy than the one built exclusively using code metrics.",2010,1, 4862,An empirical approach for software fault prediction,"Measuring software quality in terms of fault proneness of data can help the tomorrow's programmers to predict the fault prone areas in the projects before development. Knowing the faulty areas early from previous developed projects can be used to allocate experienced professionals for development of fault prone modules. Experienced persons can emphasize the faulty areas and can get the solutions in minimum time and budget that in turn increases software quality and customer satisfaction. We have used Fuzzy C Means clustering technique for the prediction of faulty/ non-faulty modules in the project. The datasets used for training and testing modules available from NASA projects namely CM1, PC1 and JM1 include requirement and code metrics which are then combined to get a combination metric model. These three models are then compared with each other and the results show that combination metric model is found to be the best prediction model among three. Also, this approach is compared with others in the literature and is proved to be more accurate. 
This approach has been implemented in MATLAB 7.9.",2010,1, 4863,New Conceptual Coupling and Cohesion Metrics for Object-Oriented Systems,"The paper presents two novel conceptual metrics for measuring coupling and cohesion in software systems. Our first metric, Conceptual Coupling between Object classes (CCBO), is based on the well-known CBO coupling metric, while the other metric, Conceptual Lack of Cohesion on Methods (CLCOM5), is based on the LCOM5 cohesion metric. One advantage of the proposed conceptual metrics is that they can be computed in a simpler (and in many cases, programming language independent) way as compared to some of the structural metrics. We empirically studied CCBO and CLCOM5 for predicting fault-proneness of classes in a large open source system and compared these metrics with a host of existing structural and conceptual metrics for the same task. As the result, we found that the proposed conceptual metrics, when used in conjunction, can predict bugs nearly as precisely as the 58 structural metrics available in the Columbus source code quality framework and can be effectively combined with these metrics to improve bug prediction.",2010,1, 4864,Change Bursts as Defect Predictors,"In software development, every change induces a risk. What happens if code changes again and again in some period of time? In an empirical study on Windows Vista, we found that the features of such change bursts have the highest predictive power for defect-prone components. With precision and recall values well above 90%, change bursts significantly improve upon earlier predictors such as complexity metrics, code churn, or organizational structure. As they only rely on version history and a controlled change process, change bursts are straight-forward to detect and deploy.",2010,1, 4865,Self-Consistent MPI Performance Guidelines,"Message passing using the Message-Passing Interface (MPI) is at present the most widely adopted framework for programming parallel applications for distributed memory and clustered parallel systems. For reasons of (universal) implementability, the MPI standard does not state any specific performance guarantees, but users expect MPI implementations to deliver good and consistent performance in the sense of efficient utilization of the underlying parallel (communication) system. For performance portability reasons, users also naturally desire communication optimizations performed on one parallel platform with one MPI implementation to be preserved when switching to another MPI implementation on another platform. We address the problem of ensuring performance consistency and portability by formulating performance guidelines and conditions that are desirable for good MPI implementations to fulfill. Instead of prescribing a specific performance model (which may be realistic on some systems, under some MPI protocol and algorithm assumptions, etc.), we formulate these guidelines by relating the performance of various aspects of the semantically strongly interrelated MPI standard to each other. Common-sense expectations, for instance, suggest that no MPI function should perform worse than a combination of other MPI functions that implement the same functionality, no specialized function should perform worse than a more general function that can implement the same functionality, no function with weak semantic guarantees should perform worse than a similar function with stronger semantics, and so on. 
Such guidelines may enable implementers to provide higher quality MPI implementations, minimize performance surprises, and eliminate the need for users to make special, nonportable optimizations by hand. We introduce and semiformalize the concept of self-consistent performance guidelines for MPI, and provide a (nonexhaustive) set of such guidelines in a form that could be automatically verified by benchmarks and experiment management tools. We present experimental results that show cases where guidelines are not satisfied in common MPI implementations, thereby indicating room for improvement in today's MPI implementations.",2010,0, 4866,Recent Developments in Fault Detection and Power Loss Estimation of Electrolytic Capacitors,"This paper proposes a comparative study of current-controlled hysteresis and pulsewidth modulation (PWM) techniques, and their influence upon power loss dissipation in a power-factor controller (PFC) output filtering capacitors. First, theoretical calculation of low-frequency and high-frequency components of the capacitor current is presented in the two cases, as well as the total harmonic distortion of the source current. Second, we prove that the methods already used to determine the capacitor power losses are not accurate because of the capacitor model chosen. In fact, a new electric equivalent scheme of electrolytic capacitors is determined using genetic algorithms. This model, characterized by frequency-independent parameters, redraws with accuracy the capacitor behavior for large frequency and temperature ranges. Thereby, the new capacitor model is integrated into the converter, and then, software simulation is carried out to determine the power losses for both control techniques. Due to this model, the equivalent series resistance (ESR) increase at high frequencies due to the skin effect is taken into account. Finally, for hysteresis and PWM controls, we suggest a method to determine the value of the series resistance and the remaining time to failure, based on the measurement of the output ripple voltage at steady-state and transient-state converter working.",2010,0, 4867,On the Quality of Service of Crash-Recovery Failure Detectors,We model the probabilistic behavior of a system comprising a failure detector and a monitored crash-recovery target. We extend failure detectors to take account of failure recovery in the target system. This involves extending QoS measures to include the recovery detection speed and proportion of failures detected. We also extend estimating the parameters of the failure detector to achieve a required QoS to configuring the crash-recovery failure detector. We investigate the impact of the dependability of the monitored process on the QoS of our failure detector. Our analysis indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of our failure detector. Our analysis is supported by simulations that validate our theoretical results.,2010,0,3429 4868,Software Industry Performance: What You Measure Is What You Get,"The software industry's overall performance is uneven and, at first sight, puzzling. Delivery to time and budget is notoriously poor, and productivity shows limited improvement over time, yet quality can be amazingly good. Customers largely bear the costs of the poor aspects of performance. Many factors drive this performance. 
This article explores whether causal links exist between the overall observed performance and the commonly used performance metrics, estimating methods and processes, and the way these incentivize suppliers. The author proposes a set of possible improvements to current metrics and estimating methods and processes, and concludes that software professionals must educate their customers on the levers that are available to obtain a better all-round performance from their suppliers.",2010,0, 4869,An Online and Noninvasive Technique for the Condition Monitoring of Capacitors in Boost Converters,"Capacitors usually determine the overall lifetime of power converters since they are mainly responsible for breakdowns. Their failure results from the deterioration of their dielectric, the production of gases, and, eventually, their explosion. This process leads to an increase in the capacitor equivalent series resistance (ESR) and a decrease in its capacitance value for both electrolytic and metalized polypropylene film (MPPF) capacitors. In this paper, a novel noninvasive technique for capacitor diagnostic in Boost converters is presented. It can easily be applied online and even in real time, and it is suitable for the operation in both continuous current mode (CCM) and discontinuous current mode (DCM). The technique is based on the double estimations of the ESR and the capacitance, improving the diagnostic reliability. This way, predictive maintenance is provided, and it is possible to alarm for capacitor replacement, avoiding downtime. As the method is intended for railway high-power applications, it has been conceived neither to add any additional hardware in the power stage nor to even slightly modify it. To demonstrate and validate the effectiveness and accuracy of the proposed technique, several simulations and experimental results are discussed in a small prototype. In this prototype, the software for real-time estimation is programmed in a low-cost digital signal processor (DSP).",2010,0, 4870,An Integrated Data-Driven Framework for Computing System Management,"With advancement in science and technology, computing systems are becoming increasingly more complex with a growing number of heterogeneous software and hardware components. They are thus becoming more difficult to monitor, manage, and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition solution that translates domain knowledge into operating rules and policies. This process has been well known as cumbersome, labor intensive, and error prone. In addition, traditional approaches for system management are difficult to keep up with the rapidly changing environments. There is a pressing need for automatic and efficient approaches to monitor and manage complex computing systems. In this paper, we propose an integrated data-driven framework for computing system management by acquiring the needed knowledge automatically from a large amount of historical log data. 
Specifically, we apply text mining techniques to automatically categorize the log messages into a set of canonical categories, incorporate temporal information to improve categorization performance, develop temporal mining techniques to discover the relationships between different events, and take a novel approach called event summarization to provide a concise interpretation of the temporal patterns.",2010,0, 4871,Learning a Metric for Code Readability,"In this paper, we explore the concept of code readability and investigate its relation to software quality. With data collected from 120 human annotators, we derive associations between a simple set of local code features and human notions of readability. Using those features, we construct an automated readability measure and show that it can be 80 percent effective and better than a human, on average, at predicting readability judgments. Furthermore, we show that this metric correlates strongly with three measures of software quality: code changes, automated defect reports, and defect log messages. We measure these correlations on over 2.2 million lines of code, as well as longitudinally, over many releases of selected projects. Finally, we discuss the implications of this study on programming language design and engineering practice. For example, our data suggest that comments, in and of themselves, are less important than simple blank lines to local judgments of readability.",2010,0, 4872,Architectural Enhancement and System Software Support for Program Code Integrity Monitoring in Application-Specific Instruction-Set Processors,"Program code in a computer system can be altered either by malicious security attacks or by various faults in microprocessors. At the instruction level, all code modifications are manifested as bit flips. In this paper, we present a generalized methodology for monitoring code integrity at run-time in application-specific instruction-set processors. We embed monitoring microoperations in machine instructions, so the processor is augmented with a hardware monitor automatically. The monitor observes the processor's execution trace at run-time, checks whether it aligns with the expected program behavior, and signals any mismatches. Since the monitor works at a level below the instructions, the monitoring mechanism cannot be bypassed by software or compromised by malicious users. We discuss the ability and limitation of such monitoring mechanism for detecting both soft errors and code injection attacks. We propose two different schemes for managing the monitor, the operating system (OS) managed and application controlled, and design the constituent components within the monitoring architecture. Experimental results show that with an effective hash function implementation, our microarchitectural support can detect program code integrity compromises at a high probability with small area overhead and little performance degradation.",2010,0, 4873,Low-Complexity Transcoding of JPEG Images With Near-Optimal Quality Using a Predictive Quality Factor and Scaling Parameters,"A common transcoding operation consists of reducing the file size of a JPEG image to meet bandwidth or device constraints. This can be achieved by reducing its quality factor (QF) or reducing its resolution, or both. In this paper, using the structural similarity (SSIM) index as the quality metric, we present a system capable of estimating the QF and scaling parameters to achieve optimal quality while meeting a device's constraints. 
We then propose a novel low-complexity JPEG transcoding system which delivers near-optimal quality. The system is capable of predicting the best combination of QF and scaling parameters for a wide range of device constraints and viewing conditions. Although its computational complexity is an order of magnitude smaller than the system providing optimal quality, the proposed system yields quality results very similar to those of the optimal system.",2010,0, 4874,Modulation Quality Measurement in WiMAX Systems Through a Fully Digital Signal Processing Approach,"The performance assessment of worldwide interoperability for microwave access (WiMAX) systems is dealt with. A fully digital signal processing approach for modulation quality measurement is proposed, which is particularly addressed to transmitters based on orthogonal frequency-division multiplexing (OFDM) modulation. WiMAX technology deployment is rapidly increasing. To aid researchers, manufactures, and technicians in designing, realizing, and installing devices and apparatuses, some measurement solutions are already available, and new ones are being released on the market. All of them are arranged to complement an ad hoc digital signal processing software with an existing specialized measurement instrument such as a real-time spectrum analyzer or a vector signal analyzer. Furthermore, they strictly rely on a preliminary analog downconversion of the radio-frequency input signal, which is a basic front-end function provided by the cited instruments, to suitably digitize and digitally process the acquired samples. In the same way as the aforementioned solutions, the proposed approach takes advantage of existing instruments, but different from them, it provides for a direct digitization of the radio-frequency input signal. No downconversion is needed, and the use of general-purpose measurement hardware such as digital scopes or data acquisition systems is thus possible. A proper digital signal processing algorithm, which was designed and implemented by the authors, then demodulates the digitized signal, extracts the desired measurement information from its baseband components, and assesses its modulation quality. The results of several experiments conducted on laboratory WiMAX signals show the effectiveness and reliability of the approach with respect to the major competitive solutions; its superior performance in special physical-layer conditions is also highlighted.",2010,0, 4875,Hybrid Simulated Annealing and Its Application to Optimization of Hidden Markov Models for Visual Speech Recognition,"We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. 
The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.",2010,0, 4876,Performability Analysis of Multistate Computing Systems Using Multivalued Decision Diagrams,"A distinct characteristic of multistate systems (MSS) is that the systems and/or their components may exhibit multiple performance levels (or states) varying from perfect operation to complete failure. MSS can model behaviors such as shared loads, performance degradation, imperfect fault coverage, standby redundancy, limited repair resources, and limited link capacities. The nonbinary state property of MSS and their components as well as dependencies existing among different states of the same component make the analysis of MSS difficult. This paper proposes efficient algorithms for analyzing MSS using multivalued decision diagrams (MDD). Various reliability, availability, and performability measures based on state probabilities or failure frequencies are considered. The application and advantages of the proposed algorithms are demonstrated through two examples. Furthermore, experimental results on a set of benchmark examples are presented to illustrate the advantages of the proposed MDD-based method for the performability analysis of MSS, as compared to the existing methods.",2010,0, 4877,Evaluation of Accuracy in Design Pattern Occurrence Detection,"Detection of design pattern occurrences is part of several solutions to software engineering problems, and high accuracy of detection is important to help solve the actual problems. The improvement in accuracy of design pattern occurrence detection requires some way of evaluating various approaches. Currently, there are several different methods used in the community to evaluate accuracy. We show that these differences may greatly influence the accuracy results, which makes it nearly impossible to compare the quality of different techniques. We propose a benchmark suite to improve the situation and a community effort to contribute to, and evolve, the benchmark suite. Also, we propose fine-grained metrics assessing the accuracy of various approaches in the benchmark suite. This allows comparing the detection techniques and helps improve the accuracy of detecting design pattern occurrences.",2010,0, 4878,Assessing Software Service Quality and Trustworthiness at Selection Time,"The integration of external software in project development is challenging and risky, notably because the execution quality of the software and the trustworthiness of the software provider may be unknown at integration time. This is a timely problem and of increasing importance with the advent of the SaaS model of service delivery. Therefore, in choosing the SaaS service to utilize, project managers must identify and evaluate the level of risk associated with each candidate. Trust is commonly assessed through reputation systems; however, existing systems rely on ratings provided by consumers. This raises numerous issues involving the subjectivity and unfairness of the service ratings. This paper describes a framework for reputation-aware software service selection and rating. A selection algorithm is devised for service recommendation, providing SaaS consumers with the best possible choices based on quality, cost, and trust. An automated rating model, based on the expectancy-disconfirmation theory from market science, is also defined to overcome feedback subjectivity issues. 
The proposed rating and selection models are validated through simulations, demonstrating that the system can effectively capture service behavior and recommend the best possible choices.",2010,0, 4879,A New Approach and a Related Tool for Dependability Measurements on Distributed Systems,"In recent years, experts in the field of dependability are recognizing experimental measurements as an attractive option for assessing distributed systems; contrary to simulation, measurement allows monitoring the real execution of a system in its real usage environment. However, the results of a recent survey have highlighted that the way measurements are carried out and measurement results are expressed is far from being in line with the approach commonly adopted by metrology. The scope of this paper is twofold. The first goal is to extend the discussion on the increasing role that measurements play in dependability and on the importance of cross-fertilization between the dependability and the instrumentation and measurement communities. The second objective is to present a different approach to dependability measurements, in line with the common practices in metrology. With regard to this, the paper presents a tool for dependability measurements in distributed systems that allows evaluating the uncertainty of measurement results. The tool is an enhancement of NekoStat, which is a powerful highly portable Java framework that allows analyzing distributed systems and algorithms. Together with the description of the tool and its innovative features, two experimental case studies are presented.",2010,0, 4880,A Genetic Programming Approach for Software Reliability Modeling,"Genetic programming (GP) models adapt better to the reliability curve when compared with other traditional, and non-parametric models. In a previous work, we conducted experiments with models based on time, and on coverage. We introduced an approach, named genetic programming and Boosting (GPB), that uses boosting techniques to improve the performance of GP. This approach presented better results than classical GP, but required ten times the number of executions. Therefore, we introduce in this paper a new GP based approach, named (μ + λ) GP. To evaluate this new approach, we repeated the same experiments conducted before. The results obtained show that the (μ + λ) GP approach presents the same cost of classical GP, and that there is no significant difference in the performance when compared with the GPB approach. Hence, it is an excellent, less expensive technique to model software reliability.",2010,0, 4881,Judy - a mutation testing tool for java,"Popular code coverage measures, such as branch coverage, are indicators of the thoroughness rather than the fault detection capability of test suites. Mutation testing is a fault-based technique that measures the effectiveness of test suites for fault localisation. Unfortunately, use of mutation testing in the software industry is rare because generating and running vast numbers of mutants against the test cases is time-consuming and difficult to do without an automated, fast and reliable tool. Our objective is to present an innovative approach to mutation testing that takes advantage of a novel aspect-oriented programming mechanism, called `pointcut and advice`, to avoid multiple compilation of mutants and, therefore, to speed up mutation testing.
An empirical comparison of the performance of the developed tool, called Judy, with the MuJava mutation testing tool on 24 open-source projects demonstrates the value of the presented approach. The results show that there is a statistically significant (t(23) = -12.28, p < 0.0005, effect size d = 3.43) difference in the number of mutants generated per second between MuJava (M = 4.15, SD = 1.42) and Judy (M = 52.05, SD = 19.69). Apart from being statistically significant, this effect is considered very large and, therefore, represents a substantive finding. This therefore allows us to estimate the fault detection effectiveness of test suites of much larger systems.",2010,0, 4882,The Theory of Relative Dependency: Higher Coupling Concentration in Smaller Modules,"Our observations on several large-scale software products have consistently shown that smaller modules are proportionally more defect prone. These findings challenge the common recommendations from the literature suggesting that quality assurance (QA) and quality control (QC) resources should focus on larger modules. Those recommendations are based on the unfounded assumption that a monotonically increasing linear relationship exists between module size and defects. Given that complexity is correlated with the size.",2010,0, 4883,Automatic Facial Expression Recognition Using Gabor Filter and Expression Analysis,"Facial expression extraction is the essential step of facial expression recognition. The paper presents a system that uses 28 facial feature key-points in image detection and a Gabor wavelet filter with 5 frequencies and 8 orientations. According to actual demand, it can extract features from low-quality facial expression images and is robust for automatic facial expression recognition. Experimental results show that the proposed method achieved excellent average recognition rates when it is applied to a facial expression recognition system.",2010,0, 4884,A Novel Method of Avionics Circuit Simulation,"With the drastic advancement of computer and software technologies, the simulation of avionics systems is gradually moving towards the use of modular avionics as an important component of the aeroplane simulator. Traditionally, avionics simulators use unrelated system identifiers to identify different fault information and establish various operational functions without providing the circuit structure, potentially leading to simulation modules being implemented in isolation, which is not conducive to realizing the system on a large scale. Based on the characteristics of the avionics system, an avionics circuit model is designed; it models the avionics circuit structure and its connections according to the circuit schematic, and applies the A* algorithm and a recursive iterative algorithm to achieve circuit depth traversal. It supports using a multimeter to measure circuit voltage, resistance and current, and judging whether the measured value is correct in order to find the fault point and replace the components. The simulation of the avionics system both simulates the operating state of the aircraft's various subsystems and realizes the fault diagnosis function.",2010,0, 4885,Assess Content Comprehensiveness of Ontologies,"This paper proposes a novel method to assess and evaluate the content comprehensiveness of ontologies. Compared to other researchers' methods, which just count the number of classes and properties, this method concerns the actual content coverage of ontologies.
By applying statistical analysis to a corpus, we assign different weights to different terms chosen from the corpus. These terms are then used for evaluating ontologies. Afterwards, a score is generated for each ontology to mark its content comprehensiveness. Experiments are then appropriately designed to evaluate the qualities of typical ontologies to show the effectiveness of the proposed evaluation method.",2010,0, 4886,Simulation Environment for IT Service Support Processes: Supporting Service Providers in Estimating Service Levels for Incident Management,"Because of a steadily increasing market for IT services, providers need to set apart from their competitors in order to successfully assert with their service offerings in the market. As compliance with negotiated service levels is becoming more and more important for service providers, the improvement of the quality of operational IT service support processes is coming to the fore. In this paper we present an approach based on discrete-event simulation in order to do a priori estimations on ticket workloads within a skills-based IT service desk. Service providers can perform tool-supported capability planning (analysis & optimization) during design time based on the potential impact of ticket workloads to agreed service levels. We additionally show how obtained results can be taken into account for determining service levels in Service Level Agreements (SLA).",2010,0, 4887,Business Process Automation Based on Dependencies,"Given a business language, this study presents a dependency-based mapping approach to automate the process of generating a workflow. First, the classification of various operators is carried out in terms of their usage in generating different flows. Second, an internally developed backward flow generating algorithm is used. In order to measure the effectiveness and accuracy of the approach, a forward flow generating algorithm is also developed to compare the results using an example on both approaches.",2010,0, 4888,Collaborative Educational System Analysis and Assessment,"The paper presents definitions of collaborative systems, their classification and the study of collaborative systems in education. It describes the key concepts of collaborative educational systems. There are listed main properties and quality characteristics of collaborative educational systems. It analyzes the application for the assessment of text entities orthogonality within a collaborative system in education, represented by a virtual campus. It implements a metric for evaluating the orthogonality.",2010,0, 4889,Per-Thread Cycle Accounting,"Resource sharing unpredictably affects per-thread performance in multithreaded architectures, but system software assumes all coexecuting threads make equal progress. Per-thread cycle accounting addresses this problem by tracking per-thread progress rates for each coexecuting thread. This approach has the potential to improve Quality Of Service (QoS), Service-Level Agreements (SLA), performance predictability, service differentiation, and proportional-share performance on multithreaded architectures.",2010,0, 4890,Research on Reliability Model of Practical Software,"The modeling technology of software reliability is analyzed in detail in the paper. According to practical software development, the evaluation standard of reliability model of practical software will be explored. 
The methods of constructing reliability model of practical software is discussed from the point of view of practical application.",2010,0, 4891,Context-Aware Adaptive Applications: Fault Patterns and Their Automated Identification,"Applications running on mobile devices are intensely context-aware and adaptive. Streams of context values continuously drive these applications, making them very powerful but, at the same time, susceptible to undesired configurations. Such configurations are not easily exposed by existing validation techniques, thereby leading to new analysis and testing challenges. In this paper, we address some of these challenges by defining and applying a new model of adaptive behavior called an Adaptation Finite-State Machine (A-FSM) to enable the detection of faults caused by both erroneous adaptation logic and asynchronous updating of context information, with the latter leading to inconsistencies between the external physical context and its internal representation within an application. We identify a number of adaptation fault patterns, each describing a class of faulty behaviors. Finally, we describe three classes of algorithms to detect such faults automatically via analysis of the A-FSM. We evaluate our approach and the trade-offs between the classes of algorithms on a set of synthetically generated Context-Aware Adaptive Applications (CAAAs) and on a simple but realistic application in which a cell phone's configuration profile changes automatically as a result of changes to the user's location, speed, and surrounding environment. Our evaluation describes the faults our algorithms are able to detect and compares the algorithms in terms of their performance and storage requirements.",2010,0, 4892,Low Overhead Incremental Checkpointing and Rollback Recovery Scheme on Windows Operating System,"Implementation of a low overhead incremental checkpointing and rollback recovery scheme that consists of incremental checkpointing combines copy-on-write technique and optimal checkpointing interval is addressed in this article. The checkpointing permits to save process state periodically during failure-free execution, and the recovery scheme maintains to normally execute the task when failure occurs in a PC-based computer-controlled system employed with Windows Operating System. Excess size of capturing state and arbitrary checkpointing results in either performance degradation or expensive recovery cost. For the objective of minimizing overhead, the checkpointing and recovery scheme is designed of Win32 API interception associated with incremental checkpointing and copy-on-write technique. Instead of saving entire process space, it only needs to save the modified pages and uses buffer to save state temporarily in the process of checkpointing so that the checkpointing overhead is reduced. While system is encountered with failure, the minimum expected time of the total overhead to complete a task is calculated by using probability to find the optimal checkpointing interval. From simulation results, the proposed checkpointing and rollback recovery scheme not only enhances the capability of the normal task executing but also reduces the overhead of checkpointing and recovery.",2010,0, 4893,Performance Evaluation of Handoff Queuing Schemes,"One of the main advantages of new wireless systems is the freedom to make and receive calls anywhere and at any time; handovers are considered a key element for providing this mobility. 
This paper presents the handoff queuing problem in cellular networks. We propose a model to study three handoff queuing schemes, and provide a performance evaluation of these schemes. The paper begins with a presentation of the different handoff queuing schemes to evaluate. Then, it gives an evaluation model with the different assumptions considered in the simulations. The evaluation concerns the blocking probability for handoff and original calls. These simulations are conducted for each scheme, according to different offered loads, size of the call (original and handoff) queue, and number of voice channels. A model is proposed and introduced in this paper for the study of three channel assignment schemes; namely, the non-prioritized schemes (NPS), the original call queuing schemes, and the handoff call queuing schemes.",2010,0, 4894,Failure Detection and Localization in OTN Based on Optical Power Analysis,"In consideration of the new features of Optical Transport Networks (OTN), failure detection and localization has become a new and challenging issue in the OTN management research area. This paper proposes a scheme to detect and locate failures based on optical power analysis. In the failure detection section of the scheme, this paper proposes a method to detect the performance degradation caused by possible failures based on real optical power analysis and builds a status matrix which demonstrates the current optical power deviation of the fiber port of each node in the OTN. In the failure localization section of the scheme, this paper proposes the multiple failures location algorithm (MFLA), which deals with both single-point and multi-point failures, to locate the multiple failures based on analyzing the status matrix and the switching relationship matrix. Then, an exemplary scenario is given to present the result of detecting and locating a fiber link failure and an OXC device failure with the proposed scheme.",2010,0, 4895,A Comparative Study of Six Software Packages for Complex Network Research,"UCINET, Pajek, Networkx, iGraph, JUNG and statnet are commonly used to perform analysis with complex network models. The scalability and function coverage of these six software packages are assessed and compared. Some randomly generated datasets are used to evaluate the performance of these software packages with regard to input/output (I/O), basic graph algorithms, statistical metrics computation, graph generation, community detection, and visualization. A metric regarding both the number of nodes and the number of edges of complex networks, which is called Maximum Expected Network Processing Ability (MENPA), is proposed to measure the scalability of software packages. Empirical results show that these six software packages are complementary rather than competitive and that the difference in scalability among these six software packages may be attributed to varieties in both the programming languages and the network representations.",2010,0, 4896,Security and Performance Aspects of an Agent-Based Link-Layer Vulnerability Discovery Mechanism,"The identification of vulnerable hosts and subsequent deployment of mitigation mechanisms such as service disabling or installation of patches is both time-critical and error-prone.
This is in part owing to the fact that malicious worms can rapidly scan networks for vulnerable hosts, but is further exacerbated by the fact that network topologies are becoming more fluid and vulnerable hosts may only be visible intermittently for environments such as virtual machines or wireless edge networks. In this paper we therefore describe and evaluate an agent-based mechanism which uses the spanning tree protocol (STP) to gain knowledge of the underlying network topology to allow both rapid and resource-efficient traversal of the network by agents as well as residual scanning and mitigation techniques on edge nodes. We report performance results, comparing the mechanism against a random scanning worm and demonstrating that network immunity can be largely achieved despite a very limited warning interval. We also discuss mechanisms to protect the agent mechanism against subversion, noting that similar approaches are also increasingly deployed in case of malicious code.",2010,0, 4897,Estimating Error-probability and its Application for Optimizing Roll-back Recovery with Checkpointing,"The probability for errors to occur in electronic systems is not known in advance, but depends on many factors including influence from the environment where the system operates. In this paper, it is demonstrated that inaccurate estimates of the error probability lead to loss of performance in a well known fault tolerance technique, Roll-back Recovery with checkpointing (RRC). To regain the lost performance, a method for estimating the error probability along with an adjustment technique are proposed. Using a simulator tool that has been developed to enable experimentation, the proposed method is evaluated and the results show that the proposed method provides useful estimates of the error probability leading to near-optimal performance of the RRC fault-tolerant technique.",2010,0, 4898,A Smart CMOS Image Sensor with On-chip Hot Pixel Correcting Readout Circuit for Biomedical Applications,"One of the most recent and exciting applications for CMOS image sensors is in the biomedical field. In such applications, these sensors often operate in harsh environments (high intensity, high pressure, long time exposure), which increase the probability for the occurrence of hot pixel defects over their lifetime. This paper presents a novel smart CMOS image sensor integrating hot pixel correcting readout circuit to preserve the quality of the captured images. With this approach, no extra non-volatile memory is required in the sensor device to store the locations of the hot pixels. In addition, the reliability of the sensor is ensured by maintaining a real-time detection of hot pixels during image capture.",2010,0, 4899,Population-Based Algorithm Portfolios for Numerical Optimization,"In this paper, we consider the scenario that a population-based algorithm is applied to a numerical optimization problem and a solution needs to be presented within a given time budget. Although a wide range of population-based algorithms, such as evolutionary algorithms, particle swarm optimizers, and differential evolution, have been developed and studied under this scenario, the performance of an algorithm may vary significantly from problem to problem. This implies that there is an inherent risk associated with the selection of algorithms. We propose that, instead of choosing an existing algorithm and investing the entire time budget in it, it would be less risky to distribute the time among multiple different algorithms. 
A new approach named population-based algorithm portfolio (PAP), which takes multiple algorithms as its constituent algorithms, is proposed based upon this idea. PAP runs each constituent algorithm with a part of the given time budget and encourages interaction among the constituent algorithms with a migration scheme. As a general framework rather than a specific algorithm, PAP is easy to implement and can accommodate any existing population-based search algorithms. In addition, a metric is also proposed to compare the risks of any two algorithms on a problem set. We have comprehensively evaluated PAP via investigating 11 instantiations of it on 27 benchmark functions. Empirical results have shown that PAP outperforms its constituent algorithms in terms of solution quality, risk, and probability of finding the global optimum. Further analyses have revealed that the advantages of PAP are mostly credited to the synergy between constituent algorithms, which should complement each other either over a set of problems, or during different stages of an optimization process.",2010,0, 4900,Applying an effective model for VNPT CDN,"Most operations of Content Distribution Network (CDN) have measured to evaluate the ability to serve users with content or services they want. Activity measurement process provides the ability to predict, monitor and ensure activities throughout the CDN. Five parameters or regular measurement units are often used by content providers to evaluate the operation of CDN including Cache hit ratio, reserved bandwidth, latency, surrogate server utilization and reliability. There are many ways to measure CDN activities, one which use simulation tools. The simulation CDN is implemented using software tools that have value for research and development, internal testing and diagnostic CDN performance, because of accessing real CDN traces and logs is not easy due to the proprietary nature of commercial CDN. In this article, we will apply a CDN simulation model (based on [CDNSim; 2007]) for design a CDN based on the network infrastructure of Vietnam Posts and Telecommunications Group (VNPT Network).",2010,0, 4901,A Lightweight Sanity Check for Implemented Architectures,"Software architecture has been loosely defined as the organizational structure of a software system, including the components, connectors, constraints, and rationale.1 Evaluating a system's software architecture helps stakeholders to check whether the architecture complies with their interests. Additionally, the evaluation can result in a common understanding of the architecture's strengths and weaknesses. All of this helps to determine which quality criteria the system meets because ""architectures allow or preclude nearly all of the system's quality attributes.""2",2010,0, 4902,Assessing communication media richness in requirements negotiation,"A critical claim in software requirements negotiation regards the assertion that group performances improve when a medium with different richness level is used. Accordingly, the authors have conducted a study to compare traditional face-to-face communication, the richest medium and two less rich communication media, namely a distributed three-dimensional virtual environment and a text-based structured chat. This comparison has been performed with respect to the time needed to accomplish a negotiation. 
Furthermore, as the only assessment of the time could not be meaningful, the authors have also analysed the media effect on the issues arisen in the negotiation process and the quality of the negotiated software requirements.",2010,0, 4903,Introducing Queuing Network-Based Performance Awareness in Autonomic Systems,"This paper advocates for the introduction of performance awareness in autonomic systems. The motivation is to be able to predict the performance of a target configuration when a self-* feature is planning a system reconfiguration.We propose a global and partially automated process based on queues and queuing networks models. This process includes decomposing a distributed application into black boxes, identifying the queue model for each black box and assembling these models into a queuing network according to the candidate target configuration. Finally, performance prediction is performed either through simulation or analysis.This paper sketches the global process and focuses on the black box model identification step. This step is automated thanks to a load testing platform enhanced with a workload control loop. Model identification is then based on statistical tests. The model identification process is illustrated by experimental results.",2010,0, 4904,Rate-distortion optimization using structural information in H.264 strictly Intra-frame encoder,"In this paper we employ Structural Similarity Index (SSIM) metric in the rate-distortion optimizations of H.264 strictly I-frame encoder to choose the best prediction mode(s). The SSIM is designed to improve on traditional metrics like PSNR and MSE, which have been proved to be inconsistent with human eye perception. The required modifications are done on the JVT reference software JM92 program. The simulation results show that there is reduction in bit rate by 3% while maintaining almost the same video quality and better encoding time.",2010,0, 4905,Performance analysis and comparison of JM 15.1 and Intel IPP H.264 encoder and decoder,"Joint Model (JM) reference software is used for academic reference of H.264 and it was developed by JVT (Joint Video Team) of ISO/IEC MPEG & ITU-T VCEG (Video coding experts group). The Intel IPP (Integrated Performance Primitives) H.264 is a product of Intel which uses IPP libraries and SIMD instructions available on modern processors. The Intel IPP H.264 is multi-threaded and uses CPU optimized IPP routines. In this paper, JM 15.1 and Intel IPP H.264 codec are compared in terms of execution time and video quality of the output decoded sequence. The metrics used for comparison are SSIM (Structural Similarity Index Metric), PSNR (Peak-to-Peak Signal to Noise Ratio), MSE (Mean Square Error), motion estimation time, encoding time, decoding time and the compression ratio of the H.264 file size (encoded output). Intel IPP H.264 clearly emerges as the winner against JM 15.1 in all parameters except for the compression ratio of H.264 file size.",2010,0, 4906,Simple Experimental Techniques to Characterize Capacitors in a Wide Range of Frequencies and Temperatures,"The aim of this paper is to present two very simple, cheap, and practical experimental techniques that are able to estimate the capacitor equivalent circuit for a wide range of frequencies and temperatures. The capacitor equivalent circuit considerably changes with temperature, aging, and frequency. Therefore, knowledge of their equivalent circuit at their operating conditions can lead to better design proposals. 
In addition, knowledge of the evolution of the equivalent series resistance (ESR) with temperature is essential for the development of reliable online fault diagnosis techniques. This issue is particularly important for aluminum electrolytic capacitors, since they are the preferred capacitor type in power electronics applications and simultaneously one of the most critical components in such applications. To implement the first technique, it is necessary to put the capacitor under test in series with a resistor and connect it to a sinusoidal voltage. The second technique requires a simple charge-discharge circuit. After acquiring both capacitors' current and voltage through an oscilloscope, which is connected to a PC with Matlab software, it is possible to compute both capacitor capacitance and resistance using the least mean square (LMS) algorithm. To simulate the variation of capacitor case temperature, a very simple prototype was used. Several experimental results were obtained to evaluate the accuracy and precision of the two experimental techniques.",2010,0, 4907,A Cubic 3-Axis Magnetic Sensor Array for Wirelessly Tracking Magnet Position and Orientation,"In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",2010,0, 4908,Prediction model of reservoir fluids properties using Sensitivity Based Linear Learning method,"This paper presented a new prediction model for Pressure-Volume-Temperature (PVT) properties based on the recently introduced learning algorithm called Sensitivity Based Linear Learning Method (SBLLM) for two-layer feedforward neural networks. PVT properties are very important in the reservoir engineering computations. The accurate determination of these properties such as bubble-point pressure and oil formation volume factor is important in the primary and subsequent development of an oil field. In this work, we develop Sensitivity Based Linear Learning method prediction model for PVT properties using two distinct databases, while comparing forecasting performance, using several kinds of evaluation criteria and quality measures, with neural network and the three common empirical correlations. 
Empirical results from simulation show that the newly developed SBLLM based model produced promising results and outperforms others, particularly in terms of stability and consistency of prediction.",2010,0, 4909,Investigation of higher order optical multiple pulse position modulation links over a highly dispersive optical channel,"This study describes a performance analysis of eight different multiple pulse position modulation (PPM) systems with 4, 7, 12, 15, 17, 22, 28 and 33 slots operating over a plastic optical fibre (POF) channel. The receiver/decoder uses slope detection and a maximum likelihood sequence detector (MLSD). As the analysis of any multiple pulse position modulation (MPPM) system is extremely time consuming, especially when the number of slots is large, a software solution was used. A measure of coding quality that accounts for efficiency of coding and bandwidth expansion was applied and original results showed that the middle systems of these MPPM families are the most efficient for a wide range of bandwidths. Although the efficient Gray coding was used for this investigation, a close to optimum mapping is presented for MPPM systems of the families mentioned above. These mappings were found to be superior to the efficient Gray codes, linear coding and a series of random mappings.",2010,0, 4910,An Efficient Duplicate Detection System for XML Documents,"Duplicate detection, which is an important subtask of data cleaning, is the task of identifying multiple representations of a same real-world object and necessary to improve data quality. Numerous approaches both for relational and XML data exist. As XML becomes increasingly popular for data exchange and data publishing on the Web, algorithms to detect duplicates in XML documents are required. Previous domain independent solutions to this problem relied on standard textual similarity functions (e.g., edit distance, cosine metric) between objects. However, such approaches result in large numbers of false positives if we want to identify domain-specific abbreviations and conventions. In this paper, we present the process of detecting duplicate includes three modules, such as selector, preprocessor and duplicate identifier which uses XML documents and candidate definition as input and produces duplicate objects as output. The aim of this research is to develop an efficient algorithm for detecting duplicate in complex XML documents and to reduce number of false positive by using MD5 algorithm. We illustrate the efficiency of this approach on several real-world datasets.",2010,0, 4911,Performance Evaluation of the Judicial System in Taiwan Using Data Envelopment Analysis and Decision Trees,"A time-honored maxim says that the judicial system is the last line of defending justice. Its performance has a great impact on how the citizen trust or distrust their state apparatus in a democracy. Technically speaking, the judicial process and its procedures are very complicated and the purpose of the whole system is to go through the law and due process to protect civil liberties and rights and to defend the public good of the nation. Therefore, it is worthwhile to assess the performance of judicial institutions in order to advance the efficiency and quality of judicial verdict. This paper combines data envelopment analysis (DEA) and decision trees to achieve this objective. In particular, DEA is first of all used to evaluate the relative efficiency of 18 district courts in Taiwan. 
Then, the efficiency scores and the overall efficiency of each decision making units are then used to train a decision tree model. Specifically, C5.0, CART, and CHAID decision trees are constructed for comparisons. The decision rules in the best decision tree model can be used to distinguish between efficient units and inefficient units and allow us to understand important factors affecting the efficiency of judicial institutions. The experimental result shows that C5.0 performs the best for predicting (in) efficient judicial institutions, which provides 80.37% average accuracy.",2010,0, 4912,A Partial Bandwidth Reservation Scheme for QoS Support in Ad Hoc Networks,"With the fast development of information technology, the application environment and multimedia services in Ad Hoc networks, all of these are needed to support QoS, and bandwidth is the one of key points needed to be considered. In this paper, we propose a partial bandwidth reservation scheme adapted to reactive routing protocol. In each node, it can estimate available bandwidth in real time, and introduce access control scheme to meet services QoS metric. Simulations integrated with DSR protocol show that this partial bandwidth reservation scheme proposed has improved network capability, and provided QoS support for network services in a certain extent.",2010,0, 4913,The LHCb Readout System and Real-Time Event Management,"The LHCb Experiment is a hadronic precision experiment at the LHC accelerator aimed at mainly studying b-physics by profiting from the large b-anti-b-production at LHC. The challenge of high trigger efficiency has driven the choice of a readout architecture allowing the main event filtering to be performed by a software trigger with access to all detector information on a processing farm based on commercial multi-core PCs. The readout architecture therefore features only a relatively relaxed hardware trigger with a fixed and short latency accepting events at 1 MHz out of a nominal proton collision rate of 30 MHz, and high bandwidth with event fragment assembly over Gigabit Ethernet. A fast central system performs the entire synchronization, event labelling and control of the readout, as well as event management including destination control, dynamic load balancing of the readout network and the farm, and handling of special events for calibrations and luminosity measurements. The event filter farm processes the events in parallel and reduces the physics event rate to about 2 kHz which are formatted and written to disk before transfer to the offline processing. A spy mechanism allows processing and reconstructing a fraction of the events for online quality checking. In addition a 5 Hz subset of the events are sent as express stream to offline for checking calibrations and software before launching the full offline processing on the main event stream.",2010,0,4626 4914,Post-TRL6 Dependable Multiprocessor technology developments,"Funded by the NASA New Millennium Program (NMP) Space Technology 8 (ST8) project since 2004, the Dependable Multiprocessor (DM) project is a major step toward NASA's and DoD's long-held desire to fly Commercial-Off-The-Shelf (COTS) technology in space to take advantage of the higher performance and lower cost of COTS-based onboard processing solutions. The development of DM technology represents a significant paradigm shift. 
For applications that only need to be radiation tolerant, DM technology allows the user to fly 10x - 100x the processing capability of designs implemented with radiation hardened technologies. As a software-based, platform and technology-independent technology, DM allows space missions to keep pace with COTS developments. As a result, the processing technologies used in space applications no longer need to be 2 - 3 generations behind state-of-the-art terrestrial processing technologies. The DM project conducted its TRL6 Technology Validation in 2008 and 2009. The preliminary DM TRL6 technology validation demonstration was conducted in September of 2008. The results of the preliminary TRL6 technology validation demonstration were described in a paper presented at the 2009 IEEE Aerospace Conference. The final TRL6 results are provided. This paper includes a brief overview of DM technology and a brief overview of the overall TRL6 effort, but focuses on the 2009 TRL6 effort. The 2009 effort includes system-level radiation testing and post-TRL6 technology enhancements. Post-TRL6 enhancements include an upgraded DM implementation with enhanced data integrity protection, coordinated checkpointing, repetitive transient fault detection, Open MPI, true open system HAM (High Availability Middleware), and an updated flight configuration which is not subject to the constraints of the original ST8 spacecraft.",2010,0, 4915,Model-based validation of safety-critical embedded systems,"Safety-critical systems have become increasingly software reliant and the current development process of ¿build, then integrate¿ has become unaffordable. This paper examines two major contributors to today's exponential growth in cost: system-level faults that are not discovered until late in the development process; and multiple truths of analysis results when predicting system properties through model-based analysis and validating them against system implementations. We discuss the root causes of such system-level problems, and an architecture-centric model-based analysis approach of different operational quality aspects from an architecture model. A key technology is the SAE Architecture Analysis & Design Language (AADL) standard for embedded software-reliant system. It supports a single source approach to analysis of operational qualities such as responsiveness, safety-criticality, security, and reliability through model annotations. The paper concludes with a summary of an industrial case study that demonstrates the feasibility of this approach.",2010,0, 4916,Validation of health-monitoring algorithms for civil aircraft engines,"Snecma builds CFM engines with GE for commercial aircrafts and now faces with the challenge of providing assistance to the maintenance operations of its wide fleet. Some years ago, Snecma engaged in the development of a set of algorithmic applications to monitor engine subsystems. Some health-monitoring (HM) application examples developed for Snecma's engines are presented below. An architecture proposition for HM applications is given with a list of quality indicators used in validation process. Finally, the problem of how to reach the drastic requirements in use for civil aircrafts is addressed. 
The conclusion sketches the methodology and software solution tested by Snecma's HM team to manage the algorithms.",2010,0, 4917,Modeling and performance considerations for automated fault isolation in complex systems,"The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complimentary software tools that alert operators to anomalies and failures in real-time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.",2010,0, 4918,Autonomic Composite-service Architecture with MAWeS,"The highly distributed nature and the load sensitivity of Service Oriented Architectures (SOA) make it very difficult to guarantee performance requirements under rapidly-changing load conditions. This paper deals with the development of service oriented autonomic systems that are capable to optimize themselves using a feed forward approach, by exploiting automatically generated performance predictions. The MAWeS (MetaPL/HeSSE Autonomic Web Services) framework allows the development of self-tuning applications that proactively optimize themselves by simulating the execution environment. After a discussion on the possible design choices for the development of autonomic web services applications, a soft real-time test application is presented and the performance results obtained in a composite-service execution scenario are commented.",2010,0, 4919,Resilient Critical Infrastructure Management Using Service Oriented Architecture,"The SERSCIS project aims to support the use of interconnected systems of services in Critical Infrastructure (CI) applications. The problem of system interconnectedness is aptly demonstrated by 'Airport Collaborative Decision Making' (A-CDM). Failure or underperformance of any of the interlinked ICT systems may compromise the ability of airports to plan their use of resources to sustain high levels of air traffic, or to provide accurate aircraft movement forecasts to the wider European air traffic management systems. The proposed solution is to introduce further SERSCIS ICT components to manage dependability and interdependency. These use semantic models of the critical infrastructure, including its ICT services, to identify faults and potential risks and to increase human awareness of them. Semantics allows information and services to be described in such a way that makes them understandable to computers. 
Thus when a failure (or a threat of failure) is detected, SERSCIS components can take action to manage the consequences, including changing the interdependency relationships between services. In some cases, the components will be able to take action autonomously -- e.g. to manage 'local' issues such as the allocation of CPU time to maintain service performance, or the selection of services where there are redundant sources available. In other cases the components will alert human operators so they can take action instead. The goal of this paper is to describe a Service Oriented Architecture (SOA) that can be used to address the management of ICT components and interdependencies in critical infrastructure systems.",2010,0, 4920,Wavelet Coherence and Fuzzy Subtractive Clustering for Defect Classification in Aeronautic CFRP,"Despite their high specific stiffness and strength, carbon fiber reinforced polymers, stacked at different fiber orientations, are susceptible to interlaminar damages. They may occur in the form of micro-cracks and voids, and leads to a loss of performance. Within this framework, ultrasonic tests can be exploited in order to detect and classify the kind of defect. The main object of this work is to develop the evolution of a previous heuristic approach, based on the use of Support Vector Machines, proposed in order to recognize and classify the defect starting from the measured ultrasonic echoes. In this context, a real-time approach could be exploited to solve real industrial problems with enough accuracy and realistic computational efforts. Particularly, we discuss the cross wavelet transform and wavelet coherence for examining relationships in time-frequency domains between. For our aim, a software package has been developed, allowing users to perform the cross wavelet transform, the wavelet coherence and the Fuzzy Inference System. Since the ill-posedness of the inverse problem, Fuzzy Inference has been used to regularize the system, implementing a data-independent classifier. Obtained results assure good performances of the implemented classifier, with very interesting applications.",2010,0, 4921,Localized QoS Routing with Admission Control for Congestion Avoidance,"Localized Quality of Service (QoS) routing has been recently proposed for supporting the requirements of multimedia applications and satisfying QoS constraints. Localized algorithms avoid the problems associated with the maintenance of global network state by using statistics of flow blocking probabilities. Using local information for routing avoids the overheads of global information with other nodes. However, localized QoS routing algorithms perform routing decisions based on information updated from path request to path request. This paper proposes to tackle a combined localized routing and admission control in order to avoid congestion. We introduce a new Congestion Avoidance Routing algorithm (CAR) in localized QoS routing which make a decision of routing in each connection request using an admission control to route traffic away from congestion. Simulations of various network topologies are used to illustrate the performance of the CAR. We compare the performance of the CAR algorithm against the Credit Based Routing (CBR) algorithm and the Quality Based Routing (QBR) under various ranges of traffic loads.",2010,0, 4922,Fault Tolerance and Recovery in Grid Workflow Management Systems,"Complex scientific workflows are now commonly executed on global grids. 
With the increasing scale complexity, heterogeneity and dynamism of grid environments the challenges of managing and scheduling these workflows are augmented by dependability issues due to the inherent unreliable nature of large-scale grid infrastructure. In addition to the traditional fault tolerance techniques, specific checkpoint-recovery schemes are needed in current grid workflow management systems to address these reliability challenges. Our research aims to design and develop mechanisms for building an autonomic workflow management system that will exhibit the ability to detect, diagnose, notify, react and recover automatically from failures of workflow execution. In this paper we present the development of a Fault Tolerance and Recovery component that extends the ActiveBPEL workflow engine. The detection mechanism relies on inspecting the messages exchanged between the workflow and the orchestrated Web Services in search of faults. The recovery of a process from a faulted state has been achieved by modifying the default behavior of ActiveBPEL and it basically represents a non-intrusive checkpointing mechanism. We present the results of several scenarios that demonstrate the functionality of the Fault Tolerance and Recovery component, outlining an increase in performance of about 50% in comparison to the traditional method of resubmitting the workflow.",2010,0, 4923,A Multidimensional Array Slicing DSL for Stream Programming,"Stream languages offer a simple multi-core programming model and achieve good performance. Yet expressing data rearrangement patterns (like a matrix block decomposition) in these languages is verbose and error prone. In this paper, we propose a high-level programming language to elegantly describe n-dimensional data reorganization patterns. We show how to compile it to stream languages.",2010,0, 4924,Fault Tolerance by Quartile Method in Wireless Sensor and Actor Networks,"Recent technological advances have lead to the emergence of wireless sensor and actor networks (WSAN) which sensors gather the information for an event and actors perform the appropriate actions. Since sensors are prone to failure due to energy depletion, hardware failure, and communication link errors, designing an efficient fault tolerance mechanism becomes an important issue in WSAN. However, most research focus on communication link fault tolerance without considering sensing fault tolerance on paper survey. In this situation, actor may perform incorrect action by receiving error sensing data. To solve this issue, fault tolerance by quartile method (FTQM) is proposed in this paper. In FTQM, it not only determines the correct data range but also sifts the correct sensors by data discreteness. Therefore, actors could perform the appropriate actions in FTQM. Moreover, FTQM also could be integrated with communication link fault tolerance mechanism. In the simulation results, it demonstrates FTQM has better predicted rate of correct data, the detected tolerance rate of temperature, and the detected temperature compared with the traditional sensing fault tolerance mechanism. Moreover, FTQM has better performance when the real correct data rate and the threshold value of failure are varied.",2010,0, 4925,An integrated life cycle-based software reliability assurance approach for NASA projects,This paper proposes a software reliability assurance approach for NASA projects. 
The approach provides a success road-map of integrated system risk management from early development phases for timely identification of valued proactive improvement on software while striving for achieving mission reliability goals. The informed decision making process throughout the life cycle is supported to ensure successful deployment of the system.,2010,0, 4926,FMEA at a residential care facility,"In the healthcare industry, reliability of the care provided at residential care facilities has come to the forefront as a major concern of residents and their families. Management at these facilities understands the high liability involved with residents' safety, but they do not perform formal analyses to mitigate the risk. Failure Modes and Effects Analyses (FMEAs) have become a very popular method to mitigate the risk involved with health care products, but can also be extended to analyze an entire process at a residential care facility which provides older adults the high quality residential living and healthcare services they need. One particular residential care facility understands and shares the concern for the well-being of its residents, so its management decided to perform a Failure Modes and Effects Analysis (FMEA) on their Emergency Alert System (EAS) and supporting processes. The EAS is a campus-wide notification system that is activated by pendants worn by residents, pull cords, and motion detectors in the event that a resident requires emergency assistance. When activated, the system alerts the main security station. The security guard radios the appropriate facility staff who respond to the location of the call. The alert system requires the proper operation of the hardware and software components as well as the proper response of various key staff members. In this paper, we describe an environment where a process FMEA was performed. The FMEA included the functional failures of an electronic system, as well as the human factors that could cause a failure in the process. A Risk Priority Number (RPN) was calculated for each failure mode so that management could focus on mitigating the risk associated with the most critical failure modes.",2010,0, 4927,Qualitative-Quantitative Bayesian Belief Networks for reliability and risk assessment,This paper presents an extension of Bayesian belief networks (BBN) enabling use of both qualitative and quantitative likelihood scales in inference. The proposed method is accordingly named QQBBN (Qualitative-Quantitative Bayesian Belief Networks). The inclusion of qualitative scales is especially useful when quantitative data for estimation of probabilities are lacking and experts are reluctant to express their opinions quantitatively. In reliability and risk analysis such situation occurs when for example human and organizational root causes of systems are modeled explicitly. Such causes are often not quantifiable due to limitations in the state of the art and lack of proper quantitative metrics. This paper describes the proposed QQBBN framework and demonstrates its uses through a simple example.,2010,0, 4928,Numerical Simulation of the Unsteady Flow and Power of Horizontal Axis Wind Turbine using Sliding Mesh,Horizontal axis wind turbine (hereafter HAWT) is the common equipment in wind turbine generator systems in recent years. The paper relates to the numerical simulation of the unsteady airflow around a HAWT of the type Phase VI with emphasis on the power output. The rotor diameter is 10-m and the rotating speed is 72 rpm. 
The simulation was undertaken with the Computational Fluid Dynamics (CFD) software FLUENT 6.2 using the sliding meshes controlled by user-defined functions (UDF). Entire mesh node number is about 1.8 million and it is generated by GAMBIT 2.2 to achieve better mesh quality. The numerical results were compared with the experimental data from NREL UAE wind tunnel test. The comparisons show that the numerical simulation using sliding meshes can accurately predict the aerodynamic performance of the wind turbine rotor.,2010,0, 4929,The Research of Power Quality Real Time Monitor for Coal Power System Based on Wavelet Analysis and ARM Chip,"In order to prevent coal power system fault for safety production, a novel power quality real time monitor is researched in this paper. According to the coal power system special characteristics and safety production standard, the harmonic of the coal power system is analyzed first based on the wavelet theory, and then the monitoring system is designed with ARM LPC2132 as the client computer to fulfill the common power parameter data acquisition. The whole monitoring system is composed of the signal transform module, data processing module, communication module, host computer interfaces and their function modules. The system software is designed with the platform of LabWindowns/CVI. The research result shows that the power quality monitor can detect the harmonic states in real time.",2010,0, 4930,A model for early prediction of faults in software systems,"Quality of a software component can be measured in terms of fault proneness of data. Quality estimations are made using fault proneness data available from previously developed similar type of projects and the training data consisting of software measurements. To predict faulty modules in software data different techniques have been proposed which includes statistical method, machine learning methods, neural network techniques and clustering techniques. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The aim of proposed approach is to investigate that whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules using decision tree based Model in combination of K-means clustering as preprocessing technique. This approach has been tested with CM1 real time defect datasets of NASA software projects. The high accuracy of testing results show that the proposed Model can be used for the prediction of the fault proneness of software modules early in the software life cycle.",2010,0, 4931,Performance-effective operation below Vcc-min,"Continuous circuit miniaturization and increased process variability point to a future with diminishing returns from dynamic voltage scaling. Operation below Vcc-min has been proposed recently as a mean to reverse this trend. The goal of this paper is to minimize the performance loss due to reduced cache capacity when operating below Vcc-min. A simple method is proposed: disable faulty blocks at low voltage. The method is based on observations regarding the distributions of faults in an array according to probability theory. 
The key lesson, from the probability analysis, is that as the number of uniformly distributed random faulty cells in an array increases the faults increasingly occur in already faulty blocks. The probability analysis is also shown to be useful for obtaining insight about the reliability implications of other cache techniques. For one configuration used in this paper, block disabling is shown to have on the average 6.6% and up to 29% better performance than a previously proposed scheme for low voltage cache operation. Furthermore, block-disabling is simple and less costly to implement and does not degrade performance at or above Vcc-min operation. Finally, it is shown that a victim-cache enables higher and more deterministic performance for a block-disabled cache.",2010,0, 4932,Spatially Scalable Video Coding Based on Hybrid Epitomic Resizing,"Scalable video coding (SVC) is considered as a potentially promising solution to enable the adaptability of video to heterogeneous networks and various devices. In spatially scalable video encoder, how to resize the captured high-resolution video to get low-resolution video has great effect on the quality of experience (QoE) in the clients receiving low-resolution video. In this paper, we propose a new resizing algorithm called hybrid epitomic resizing (HER), which can make the resized image preserve the same `physical' resolution with original image by the way of utilizing texture similarity inside image and highlight regions of interest while avoiding potential artifacts. For hybrid epitomic resizing, we also design two new inter-layer prediction methods to eliminate the redundancy between adjacent spatial layers instead of conventional inter-layer prediction. Experimental results show that HER can get resized images with perceptually much better quality and the performance of new inter-layer prediction are comparable to that of conventional inter-layer prediction in H.264 SVC.",2010,0, 4933,Generating Test Plans for Acceptance Tests from UML Activity Diagrams,"The Unified Modeling Language (UML) is the standard to specify the structure and behaviour of software systems. The created models are a constitutive part of the software specification that serves as guideline for the implementation and the test of software systems. In order to verify the functionality which is defined within the specification documents, the domain experts need to perform an acceptance test. Hence, they have to generate test cases for the acceptance test. Since domain experts usually have a low level of software engineering knowledge, the test case generation process is challenging and error-prone. In this paper we propose an approach to generate high-level acceptance test plans automatically from business processes. These processes are modeled as UML Activity Diagrams (ACD). Our method enables the application of an all-path coverage criterion to business processes for testing software systems.",2010,0, 4934,Object oriented design metrics and tools a survey,"The most important measure that must be considered in any software product is its design quality. The design phase takes only 5-10 % of the total effort but a large part (up to 80%) of total effort goes into correcting bad design decisions. If bad design is not fixed, the cost for fixing it after software delivery is between 5 and 100 times or higher. Researches on object oriented design metrics have produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. 
However the use of these design metrics is limited in practice due to the difficulty of measuring and using a large number of metrics. This paper presents a survey of object-oriented design metrics. The goal of this paper is to identify a limited set of metrics that have significant impact on design quality attributes. We adopt the notion of defining design metrics as independent variables that can be measured to assess their impact on design quality attributes as dependent variables. We also present survey of existing object oriented design metrics tools that can be used to automate the measurement process. We present our conclusions on the set of important object oriented design metrics that can be assessed using these tools.",2010,0, 4935,Applying an estimation framework in software projects a local experience,"This paper discusses our experience in building a framework for effort estimation and applying it in a local environment of sample Egyptian companies. The framework focuses on minimizing effort variance by enhancing the adjustments made to the functional sizing techniques. A special focus was made on the adjustment factors, which reflects the application's complexity and the actual environment in which this application will be implemented. We introduced the idea of grouping the adjustment factors to simplify the process of adjustment and to ensure more consistency in the adjustments. We have also studied how the quality of requirements impact effort estimation. We introduced the quality of requirements as an adjustment factor in our proposed framework. We have applied the proposed framework on a sample of a group of Egyptian companies with an objective to enhance effort estimation in these companies.",2010,0, 4936,An approach to measure the Hurst parameter for the Dhaka University network traffic,"The main goal of this work was to analyze the network traffic of the University of Dhaka and find out the Hurst parameter to assess the degree of self similarity. For this verification a number of tests and analyses were performed on the data collected from the University Gateway router. The conclusions were supported by a rigorous statistical analysis of 7.5 millions of data packets of high quality Ethernet traffic measurements collected between Aug '07 and March'08 and the data were analyzed using both visual and statistical experimentation. Busy hour traffic and non-busy hour traffic, both were considered. All the software was coded using MATLAB and can be used as a tool to determine the inherent self similarity of a data traffic.",2010,0, 4937,Utilizing CK metrics suite to UML models: A case study of Microarray MIDAS software,"Software metrics provide essential means for software practitioners to assess its quality. However, to assess software quality, it is important to assess its UML models because of UML wide and recent usage as an object-oriented modeling language. But the issue is which type of software metrics can be utilized on UML models. One of the most important software metrics suite is Chidamber and Kemerer metrics suite, known by CK suite. In the current work, an automated tool is developed to compute the six CK metrics by gathering the required information from class diagrams, activity diagrams, and sequence diagrams. In addition, extra information is collected from system designer, such as the relation between methods and their corresponding activity diagrams and which attributes they use. 
The proposed automated tool operates on XMI standard file format to provide independence from a specific UML tool. To evaluate the applicability and quality of this tool, it has been applied to two examples: an online registration system and one of the bioinformatics Microarray tools (MIDAS).",2010,0, 4938,When process data quality affects the number of bugs: Correlations in software engineering datasets,"Software engineering process information extracted from version control systems and bug tracking databases are widely used in empirical software engineering. In prior work, we showed that these data are plagued by quality deficiencies, which vary in its characteristics across projects. In addition, we showed that those deficiencies in the form of bias do impact the results of studies in empirical software engineering. While these findings affect software engineering researchers the impact on practitioners has not yet been substantiated. In this paper we, therefore, explore (i) if the process data quality and characteristics have an influence on the bug fixing process and (ii) if the process quality as measured by the process data has an influence on the product (i.e., software) quality. Specifically, we analyze six Open Source as well as two Closed Source projects and show that process data quality and characteristics have an impact on the bug fixing process: the high rate of empty commit messages in Eclipse, for example, correlates with the bug report quality. We also show that the product quality - measured by number of bugs reported - is affected by process data quality measures. These findings have the potential to prompt practitioners to increase the quality of their software process and its associated data quality.",2010,0, 4939,Perspectives on bugs in the Debian bug tracking system,"Bugs in Debian differ from regular software bugs. They are usually associated with packages, instead of software modules. They are caused and fixed by source package uploads instead of code commits. The majority are reported by individuals who appear in the bug database once, and only once. There also exists a small group of bug reporters with over 1,000 bug reports each to their name. We also explore our idea that a high bug-frequency for an individual package might be an indicator of popularity instead of poor quality.",2010,0, 4940,Validity of network analyses in Open Source Projects,"Social network methods are frequently used to analyze networks derived from Open Source Project communication and collaboration data. Such studies typically discover patterns in the information flow between contributors or contributions in these projects. Social network metrics have also been used to predict defect occurrence. However, such studies often ignore or side-step the issue of whether (and in what way) the metrics and networks of study are influenced by inadequate or missing data. In previous studies email archives of OSS projects have provided a useful trace of the communication and co-ordination activities of the participants. These traces have been used to construct social networks that are then subject to various types of analysis. However, during the construction of these networks, some assumptions are made, that may not always hold; this leads to incomplete, and sometimes incorrect networks. The question then becomes, do these errors affect the validity of the ensuing analysis? In this paper we specifically examine the stability of network metrics in the presence of inadequate and missing data. 
The issues that we study are: 1) the effect of paths with broken information flow (i.e. consecutive edges which are out of temporal order) on measures of centrality of nodes in the network, and 2) the effect of missing links on such measures. We demonstrate on three different OSS projects that while these issues do change network topology, the metrics used in the analysis are stable with respect to such changes.",2010,0, 4941,THEX: Mining metapatterns from java,"Design patterns are codified solutions to common object-oriented design (OOD) problems in software development. One of the proclaimed benefits of the use of design patterns is that they decouple functionality and enable different parts of a system to change frequently without undue disruption throughout the system. These OOD patterns have received a wealth of attention in the research community since their introduction; however, identifying them in source code is a difficult problem. In contrast, metapatterns have similar effects on software design by enabling portions of the system to be extended or modified easily, but are purely structural in nature, and thus easier to detect. Our long-term goal is to evaluate the effects of different OOD patterns on coordination in software teams as well as outcomes such as developer productivity and software quality. we present THEX, a metapattern detector that scales to large codebases and works on any Java bytecode. We evaluate THEX by examining its performance on codebases with known design patterns (and therefore metapatterns) and find that it performs quite well, with recall of over 90%.",2010,0, 4942,On Modeling of GUI Test Profile,"GUI (Graphical User Interface) test cases contain much richer information than the test cases in non-GUI testing. Based on the information, the GUI test profiles can be represented in more forms. In this paper, we study the modeling of the test profiles in GUI testing. Several models of GUI test profiles are proposed. Then we present a methodology of studying the relationship between the test profiles and the fault detection in GUI testing. A control scheme based on this relationship that may be able to improve the efficiency of GUI testing is also proposed.",2010,0, 4943,A Measurement Framework for Assessing Model-Based Testing Quality,This paper proposes a measurement framework for assessing the relative quality of alternative approaches to system level model-based testing. The motivation is to investigate the types of measures that the MBT community should apply. The purpose of this paper is to provide a basis for discussion by proposing some initial ideas on where we should probe for MBT quality measurement. The centerpiece of the proposal offered here is the concept of an operational profile (OP) and its relevance to model-based testing.,2010,0, 4944,An Empirical Evaluation of the First and Second Order Mutation Testing Strategies,"Various mutation approximation techniques have been proposed in the literature in order to reduce the expenses of mutation. This paper presents results from an empirical study conducted for first and second order mutation testing strategies. Its scope is to evaluate the relative application cost and effectiveness of the different mutation strategies. The application cost was based: on the number of mutants, the equivalent ones and on the number of test cases needed to expose them by each strategy. Each strategy's effectiveness was evaluated by its ability to expose a set of seeded faults. 
The results indicate that on the one hand the first order mutation testing strategies can be in general more effective than the second order ones. On the other hand, the second order strategies can drastically decrease the number of the introduced equivalent mutants, generally forming a valid cost effective alternative to mutation testing.",2010,0, 4945,Numerical simulations of thermo-mechanical stresses during the casting of multi-crystalline silicon ingots,"Silicon is an important semiconductor substrate for manufacturing solar cells. The mechanical and electrical properties of multi-crystalline silicon (mc-Si) are primarily influenced by the quality of the feedstock material and the crystallization process. In this work, numerical calculations, applying finite element analysis (FEA) and finite volume methods (FVM) are presented, in order to predict thermo-mechanical stresses during the solidification of industrial size mc-Si ingots. A two-dimensional global model of an industrial multi-crystallization furnace was created for thermal stationary and time-dependent calculations using the software tool CrysMAS. Subsequent thermo-mechanical analyses of the silica crucible and the ingot were performed with the FEA code ANSYS, allowing additional calculations to define mechanical boundary conditions as well as material models. Our results show that thermal analyses are in good agreement with experimental measurements. Furthermore we show that our approach is suitable to describe the generation of thermo-mechanical stress within the silicon ingot.",2010,0, 4946,Real-Time Ampacity and Ground Clearance Software for Integration Into Smart Grid Technology,"Output from a real-time sag, tension, and ampacity program was compared with measurements collected on an outdoor test span. The test site included a laser range-finder, load cells, and weather station. A fiber optic distributed temperature sensing system was routed along the conductor and thermocouples were attached to the conductor's surface. Nearly 40 million data points were statistically compared with the computer output. The program provided results with a 95% confidence interval for conductor temperatures within 10°C and sags within 0.3 m for a conductor temperature of 75°C. Test data were also used to determine the accuracy of the IEEE Standard 738 models. The computer program and the Standard 738 transient model gave comparable temperatures for temperatures up to 160°C. Measured temperatures were used to estimate the radial and axial temperature gradients in the ACSR conductor. The effect of insulators and instrumentation attached to the conductor on the local conductor temperature was determined. The real-time rating program is an alternative to installing instrumentation on the conductor for measuring tension, sag, or temperature. The program avoids the problems of installing and maintaining expensive instrumentation on the conductor, and it will provide accurate information on the conductor's temperature and ground clearance in real-time.",2010,0,5737 4947,Measurement and Analysis of Link Quality in Wireless Networks: An Application Perspective,"Estimating the quality of wireless link is vital to optimize several protocols and applications in wireless networks. In realistic wireless networks, link quality is generally predicted by measuring received signal strength and error rates. 
Understanding the temporal properties of these parameters is essential for the measured values to be representative, and for accurate prediction of performance of the system. In this paper, we analyze the received signal strength and error rates in an IEEE 802.11 indoor wireless mesh network, with special focus to understand its utility to measurement based protocols. We show that statistical distribution and memory properties vary across different links, but are predictable. Our experimental measurements also show that, due to the effect of fading, the packet error rates do not always monotonically decrease as the transmission rate is reduced. This has serious implications on many measurement-based protocols such as rate-adaptation algorithms. Finally, we describe real-time measurement framework that enables several applications on wireless testbed, and discuss the results from example applications that utilize measurement of signal strength and error rates.",2010,0, 4948,Evolutionary Optimization of Software Quality Modeling with Multiple Repositories,"A novel search-based approach to software quality modeling with multiple software project repositories is presented. Training a software quality model with only one software measurement and defect data set may not effectively encapsulate quality trends of the development organization. The inclusion of additional software projects during the training process can provide a cross-project perspective on software quality modeling and prediction. The genetic-programming-based approach includes three strategies for modeling with multiple software projects: Baseline Classifier, Validation Classifier, and Validation-and-Voting Classifier. The latter is shown to provide better generalization and more robust software quality models. This is based on a case study of software metrics and defect data from seven real-world systems. A second case study considers 17 different (nonevolutionary) machine learners for modeling with multiple software data sets. Both case studies use a similar majority-voting approach for predicting fault-proneness class of program modules. It is shown that the total cost of misclassification of the search-based software quality models is consistently lower than those of the non-search-based models. This study provides clear guidance to practitioners interested in exploiting their organization's software measurement data repositories for improved software quality modeling.",2010,0, 4949,Improving the performance of hypervisor-based fault tolerance,"Hypervisor-based fault tolerance (HBFT), a checkpoint-recovery mechanism, is an emerging approach to sustaining mission-critical applications. Based on virtualization technology, HBFT provides an economic and transparent solution. However, the advantages currently come at the cost of substantial overhead during failure-free, especially for memory intensive applications. This paper presents an in-depth examination of HBFT and options to improve its performance. Based on the behavior of memory accesses among checkpointing epochs, we introduce two optimizations, read fault reduction and write fault prediction, for the memory tracking mechanism. These two optimizations improve the mechanism by 31.1% and 21.4% respectively for some application. Then, we present software-superpage which efficiently maps large memory regions between virtual machines (VM). 
By the above optimizations, HBFT is improved by a factor of 1.4 to 2.2 and it achieves a performance which is about 60% of that of the native VM.",2010,0, 4950,A high-performance fault-tolerant software framework for memory on commodity GPUs,"As GPUs are increasingly used to accelerate HPC applications by allowing more flexibility and programmability, their fault tolerance is becoming much more important than before when they were used only for graphics. The current generation of GPUs, however, does not have standard error detection and correction capabilities, such as SEC-DED ECC for DRAM, which is almost always exercised in HPC servers. We present a high-performance software framework to enhance commodity off-the-shelf GPUs with DRAM fault tolerance. It combines data coding for detecting bit-flip errors and checkpointing for recovering computations when such errors are detected. We analyze performance of data coding in GPUs and present optimizations geared toward memory-intensive GPU applications. We present performance studies of the prototype implementation of the framework and show that the proposed framework can be realized with negligible overheads in compute intensive applications such as N-body problem and matrix multiplication, and as low as 35% in a highly-efficient memory intensive 3-D FFT kernel.",2010,0, 4951,An effective quality measure for prediction of context information,"Pervasive systems must be able to adapt themselves to changing environments to ensure the QoS for the user and to optimize resource usage. In this respect, acting proactively before an event has occurred, like starting to heat the house based on the prediction that the user will arrive within the next hour, can yield better results than reacting at the moment it occurs. Especially for context prediction, the quality of context is important since a wrong prediction can be costly and diminish user satisfaction. In this work we evaluate algorithms with respect to the quality of their inference for higher-level context information and describe how these are realized as plug-ins for a middleware that enables context-aware, self-adaptive applications.",2010,0, 4952,Comparison of exact static and dynamic Bayesian context inference methods for activity recognition,"This paper compares the performance of inference in static and dynamic Bayesian Networks. For the comparison both kinds of Bayesian networks are created for the exemplary application activity recognition. Probability and structure of the Bayesian Networks have been learnt automatically from a recorded data set consisting of acceleration data observed from an inertial measurement unit. Whereas dynamic networks incorporate temporal dependencies which affect the quality of the activity recognition, inference is less complex for dynamic networks. As performance indicators recall, precision and processing time of the activity recognition are studied in detail. The results show that dynamic Bayesian Networks provide considerably higher quality in the recognition but entail longer processing times.",2010,0, 4953,Resource management of enterprise cloud systems using layered queuing and historical performance models,"The automatic allocation of enterprise workload to resources can be enhanced by being able to make `what-if' response time predictions, whilst different allocations are being considered. It is important to quantitatively compare the effectiveness of different prediction techniques for use in cloud infrastructures. 
To help make the comparison of relevance to a wide range of possible cloud environments it is useful to consider the following. 1.) urgent cloud customers such as the emergency services that can demand cloud resources at short notice (e.g. for our FireGrid emergency response software). 2.) dynamic enterprise systems, that must rapidly adapt to frequent changes in workload, system configuration and/or available cloud servers. 3.) The use of the predictions in a coordinated manner by both the cloud infrastructure and cloud customer management systems. 4.) A broad range of criteria for evaluating each technique. However, there have been no previous comparisons meeting these requirements. This paper, meeting the above requirements, quantitatively compares the layered queuing and (“HYDRA”) historical techniques - including our initial thoughts on how they could be combined. Supporting results and experiments include the following: i.) defining, investigating and hence providing guidelines on the use of a historical and layered queuing model; ii.) using these guidelines showing that both techniques can make low overhead and typically over 70% accurate predictions, for new server architectures for which only a small number of benchmarks have been run; and iii.) defining and investigating tuning a prediction-based cloud workload and resource management algorithm.",2010,0, 4954,Experimental responsiveness evaluation of decentralized service discovery,"Service discovery is a fundamental concept in service networks. It provides networks with the capability to publish, browse and locate service instances. Service discovery is thus the precondition for a service network to operate correctly and for the services to be available. In the last decade, decentralized service discovery mechanisms have become increasingly popular. Especially in ad-hoc scenarios - such as ad-hoc wireless networks - they are an integral part of auto-configuring service networks. Despite the fact that auto-configuring networks are increasingly used in application domains where dependability is a major issue, these environments are inherently unreliable. In this paper, we examine the dependability of decentralized service discovery. We simulate service networks that are automatically configured by Zeroconf technologies. Since discovery is a time-critical operation, we evaluate responsiveness - the probability to perform some action on time even in the presence of faults - of domain name system (DNS) based service discovery under the influence of packet loss. We show that responsiveness decreases significantly already with moderate packet loss and becomes practically unacceptable with higher packet loss.",2010,0, 4955,Optimizing RAID for long term data archives,We present new methods to extend data reliability of disks in RAID systems for applications like long term data archival. The proposed solutions extend existing algorithms to detect and correct errors in RAID systems by preventing accumulation of undetected errors in rarely accessed disk segments. Furthermore we show how to change the parity layout of a RAID system in order to improve the performance and reliability in case of partially defective disks. All methods benefit from a hierarchical monitoring scheme that stores reliability related information. 
Our proposal focuses on methods that do not need significant hardware changes.,2010,0, 4956,Meta-scheduling in advance using red-black trees in heterogeneous Grids,"The provision of Quality of Service in Grid environments is still an open issue that needs attention from the research community. One way of contributing to the provision QoS in Grids is by performing meta-scheduling of jobs in advance, that is, jobs are scheduled some time before they are actually executed. In this way, the aproppriate resources will be available to run the job when needed, so that QoS requirements (i.e., deadline) are met. This paper presents two new techniques, implemented over the red-black tree data structure, to manage the idle/busy periods of resources. One of them takes into account the heterogeneity of resources when estimating the execution times of jobs. A performance evaluation using a real testbed is presented that illustrates the efficiency of this approach to meet the QoS requirements of users.",2010,0, 4957,An extension of GridSim for quality of service,GridSim is a well known and useful open software product through which users can simulate a Grid environment. At present Qualities of Service are not modeled in GridSim. When utilising a Grid a user may wish to make decisions about type of service to be contracted. For instance performance and security are two levels of service upon which different decisions may be made. Subsequently during operation a grid may not be able to fulfill its contractual obligations. In this case renegotiation is necessary. This paper describes an extension to GridSim that enables various Qualities of Service to be modeled together with Service Level Agreements and renegotiation of contract with associated costs. The extension is useful as it will allow users to make better estimates of potential costs and will also enable grid service suppliers to more accurately predict costs and thus provide better service to users.,2010,0, 4958,Exploiting motion estimation resilience to approximated metrics on SIMD-capable general processors: From Atom to Nehalem,"In the past, efforts to speed up motion estimation for video encoding were directed at finding better predictive search algorithms. Now, they are directed toward the shrewd exploitation of the machine's advanced architectural features such as multimedia extensions, especially for the computation of the error metric which is known to be expensive. In this paper, we extend previous work by further exploring efficient implementation of approximate fast metrics for motion estimation. We show that the proposed metrics can be implemented using SIMD instructions to yield impressive speed-ups, up to 12:1 relative to non-vectorized but otherwise optimized C code, while sacrificing less than 0.1 dB on image quality.",2010,0, 4959,On the evaluation of the LTE-advanced proposal within the Canadian evaluation group (CEG) initiative: Preliminary work results,"In this paper, we present a preliminary work within the CEG to evaluate the LTE-advanced proposal as a candidate for further improvements of LTE earlier releases. We consider an open-source LTE simulator that implements most of the LTE features. We focus on channel state variations and related link adaptation. We study the performance of channel quality report to the transmitter to track the downlink time-frequency channel variations among users. This process aims at performing link adaptation. 
To insure this procedure, we implement a CQI feedback scheme to an LTE open-source simulator. SINRs per subcarriers are computed supposing a perfect channel knowledge then mapped into a single SINR value for the whole bandwidth. This SINR is represented by a CQI value that allows for dynamic MCS allocation. CQI mapping is evaluated through different SINR mapping schemes.",2010,0, 4960,Study on Impulse Magnetic Field of Magnetostrictive Position Sensor,"The magnetostrictive position sensor is a kind of displacement sensor utilizing the magnetostrictive effect and inverse effect of magnetostrictive material. This essay discussed the distribution characteristic of the magnetostrictive position sensor under the impulse magnetic field. We analyzed the impulse magnetic field of magnetostrictive line influenced by the magnetic field of permanent magnet and the impulse current magnetic field, and constructed relative theory models. Applying the numerical simulation software ANSYS, the impulse magnetic field on the sensor line was emulated based on electromagnetic theories. With experiment data, the values of magnetic field and magnetic field direction angle in magnetostrictive line were calculated. Consequently, this essay should provide the theory basis and data for promoting the signal quality of the signal detection of the sensor.",2010,0, 4961,The Study on the Fixed End Wave in Magnetostrictive Position Sensor,"The magnetostrictive position sensor is a kind of displacement sensor utilizing the magnetostrictive effect and inverse effect of magnetostrictive material. This essay discussed the influence of the fixed end of the sensor system on the detected signal with the driver impulse. The fixed end waves (a kind of elastic wave) were described and defined in this essay. The mechanisms of torsional magnetic field on the long magnetostrictive material line and fixed end waves were discussed, and relative theory models were constructed in this paper. Experiments showed that the fixed end wave of the system could be generated and transmit along the line with impulse current, which is also a kind of noise wave should be removed or weakened. Consequently, this essay should provide the theory basis and data for promoting the signal quality of the signal detection of the sensor.",2010,0, 4962,Real-Time Traffic Classification Based on Statistical and Payload Content Features,"In modern networks, different applications generate various traffic types with diverse service requirements. Thereby the identification and classification of traffic play an important role for increasing the performance in network management. Primitive applications were using well-known ports in transport layer, so their traffic classification can be performed based on the port number. However, the recent applications progressively use unpredictable port numbers. Consequently the later methods are based on “deep packet inspectionâ€? Notwithstanding proper accuracy, these methods impose heavy operational load and are vulnerable to encrypted flows. The recent methods classify the traffic based on statistical packet characteristics. However, having access to a little part of statistical flow information in real-time traffic may jeopardize the performance of these methods. Regarding the advantages and disadvantages of the two later methods, in this paper we propose an approach based on payload content and statistical traffic characteristics with Naive Bayes algorithm for real-time network traffic classification. 
The performance and low complexity of the propose approach confirm its competency for real-time traffic classification.",2010,0, 4963,Designing Modular Hardware Accelerators in C with ROCCC 2.0,"While FPGA-based hardware accelerators have repeatedly been demonstrated as a viable option, their programmability remains a major barrier to their wider acceptance by application code developers. These platforms are typically programmed in a low level hardware description language, a skill not common among application developers and a process that is often tedious and error-prone. Programming FPGAs from high level languages would provide easier integration with software systems as well as open up hardware accelerators to a wider spectrum of application developers. In this paper, we present a major revision to the Riverside Optimizing Compiler for Configurable Circuits (ROCCC) designed to create hardware accelerators from C programs. Novel additions to ROCCC include (1) intuitive modular bottom-up design of circuits from C, and (2) separation of code generation from specific FPGA platforms. The additions we make do not introduce any new syntax to the C code and maintain the high level optimizations from the ROCCC system that generate efficient code. The modular code we support functions identically as software or hardware. Additionally, we enable user control of hardware optimizations such as systolic array generation and temporal common subexpression elimination. We evaluate the quality of the ROCCC 2.0 tool by comparing it to hand-written VHDL code. We show comparable clock frequencies and a 18% higher throughput. The productivity advantages of ROCCC 2.0 is evaluated using the metrics of lines of code and programming time showing an average of 15× improvement over hand-written VHDL.",2010,0, 4964,Acceleration of a DWT-Based Algorithm for Short Exposure Stellar Images Processing on a HPRC Platform,"Our objective is to provide an enhanced algorithm for the FASTCAM instrument, developed by the Instituto de Astrofísica de Canarias in collaboration with the Universidad Politécnica de Cartagena. In this paper we propose an algorithm for the detection of astronomical objects and its implementation on a High Performance Reconfigurable Computer. Our algorithm introduces wavelet based preprocessing and post-processing stages that considerably enhance the image quality when compared to the initial algorithm.",2010,0, 4965,Towards Understanding the Importance of Variables in Dependable Software,"A dependable software system contains two important components, namely, error detection mechanisms and error recovery mechanisms. An error detection mechanism attempts to detect the existence of an erroneous software state. If an erroneous state is detected, an error recovery mechanism will attempt to restore a correct state. This is done so that errors are not allowed to propagate throughout a software system, i.e., errors are contained. The design of these software artefacts is known to be very difficult. To detect and correct an erroneous state, the values held by some important variables must be ensured to be suitable. In this paper we develop an approach to capture the importance of variables in dependable software systems. We introduce a novel metric, called importance, which captures the impact a given variable has on the dependability of a software system. 
The importance metric enables the identification of critical variables whose values must be ensured to be correct.",2010,0, 4966,Software Fault Prediction Model Based on Adaptive Dynamical and Median Particle Swarm Optimization,"Software quality prediction can play an important role in software management, and thus help improve the quality of software systems. By mining software with data mining techniques, predictive models can be induced that give software managers the insights they need to tackle these quality problems in an efficient way. This paper deals with the adaptive dynamic and median particle swarm optimization (ADMPSO) based on the PSO classification technique. ADMPSO can act as a valid data mining technique to predict erroneous software modules. The predictive model in this paper extracts the relationship rules of software quality and metrics. An information entropy approach is applied to simplify the extracted rule set. The empirical result shows that with this method the set of rules can be streamlined and the forecast accuracy can be improved.",2010,0, 4967,Applications of Support Vector Mathine and Unsupervised Learning for Predicting Maintainability Using Object-Oriented Metrics,"The importance of software maintainability is increasing, leading to the development of new sophisticated techniques. This paper presents the applications of support vector machine and unsupervised learning in software maintainability prediction using object-oriented metrics. In this paper, a software maintainability predictor is constructed. The dependent variable was maintenance effort. The independent variables were five OO metrics decided by the clustering technique. The results showed that the Mean Absolute Relative Error (MARE) of the predictor was 0.218. Therefore, we found that SVM and the clustering technique were useful in constructing a software maintainability predictor. The novel predictor can be used for similar software developed in the same environment.",2010,0, 4968,Wireless Intrusion Detection System Using a Lightweight Agent,"The exponential growth in wireless network faults, vulnerabilities, and attacks makes Wireless Local Area Network (WLAN) security management a challenging research area. Deficiencies of security methods like cryptography (e.g. WEP) and firewalls cause the use of more complex security systems, such as Intrusion Detection Systems, to be crucial. In this paper, we present a hybrid wireless intrusion detection system (WIDS). To implement the WIDS, we designed a simple lightweight agent. The proposed agent detects the most destructive and serious attacks, Man-In-The-Middle and Denial-of-Service, with the minimum selected feature set. To evaluate our proposed WIDS and its agent, we collect a complete data-set using open source attack generator software. Experimental results show that in comparison with similar systems, in addition to being simpler, our WIDS provides high performance and precision.",2010,0, 4969,Grid of Segment Trees for Packet Classification,"The packet classification problem has received much attention and continues to be an important topic in recent years. In the packet classification problem, each incoming packet should be classified into flows according to a set of pre-defined rules. Grid-of-tries (GoT) is one of the traditional algorithmic schemes for solving the 2-dimensional packet classification problem. The advantage of GoT is that it uses switch pointers to avoid the backtracking operation during the search process. 
However, the primary data structure of GoT is base on binary tries. The traversal of binary tries decreases the performance of GoT due to the heights of binary tries are usually high. In this paper, we propose a scheme called GST (Grid of Segment Trees). GST modifies the original GoT by replacing the binary tries with segment trees. The heights of segment trees are much shorter than those of binary tries. As a result, the proposed GST can inherit the advantages of GoT and segment trees to achieve better performance. Experiments conducted on three different kinds of rule tables show that our proposed scheme performs better than traditional schemes, such as hierarchical tries and grid-of-tries.",2010,0, 4970,A Statistical Method for Middleware System Architecture Evaluation,"The architecture of complex software systems is a collection of decisions that are very expensive to change. This makes effective software architecture evaluation methods essential in today's system development for mission critical systems. We have previously developed MEMS for evaluating middleware architectures, which provides an effective assessment of important quality attributes and their characterizations. To provide additional quantitative assessments on the overall system performance using actual runtime data, we employed a set of statistical procedures in this work. Our proposed assessment procedures comprises a standard sensitivity analysis procedure that utilizes leverage statistics to identify and remove influential data points, and an estimator for evaluating system stability and a metric for evaluating system load capacity. Experiments were conducted using real runtime datasets. Results show that our procedures effectively identified and isolated abnormal data points, and provided valuable statistics to show system stability. Our approach thus provides a sound statistical basis to support software architecture evaluation.",2010,0, 4971,In Situ Software Visualisation,"Software engineers need to design, implement, comprehend and maintain large and complex software systems. Awareness of information about the properties and state of individual artifacts, and the process being enacted to produce them, can make these activities less error-prone and more efficient. In this paper we advocate the use of code colouring to augment development environments with rich information overlays. These in situ visualisations are delivered within the existing IDE interface and deliver valuable information with minimal overhead. We present CODERCHROME, a code colouring plug-in for Eclipse, and describe how it can be used to support and enhance software engineering activities.",2010,0, 4972,Managing Structure-Related Software Project Risk: A New Role for Project Governance,"This paper extends recent research on the risk implications of software project organization structures by considering how structure-related risk might be managed. Projects, and other organizations involved in projects, are usually structured according to common forms. These organizational entities interact with each other, creating an environment in which risks relating to their structural forms can impact the project and its performance. This source of risk has previously been overlooked in software project research. The nature of the phenomenon is examined and an approach to managing structure-related risk is proposed, responsibility for which is assigned as a new role for project governance. 
This assignment is necessary because, due to the structural and relational nature of these risks, the project is poorly placed to manage such threats. The paper argues that risk management practices need to be augmented with additional analyses to identify, analyze and assess structural risks to improve project outcomes and the delivery of quality software. The argument is illustrated and initially validated with two project case studies. Implications for research and practice are drawn and directions for future research are suggested, including extending the theory to apply to other organization structures.",2010,0, 4973,Towards Fully Automated Test Management for Large Complex Systems,"Development of large and complex software intensive systems with continuous builds typically generates large volumes of information with complex patterns and relations. Systematic and automated approaches are needed for efficient handling of such large quantities of data in a comprehensible way. In this paper we present an approach and tool enabling autonomous behavior in an automated test management tool to gain efficiency in concurrent software development and test. By capturing the required quality criteria in the test specifications and automating the test execution, test management can potentially be performed to a great extent without manual intervention. This work contributes towards a more autonomous behavior within a distributed remote test strategy based on metrics for decision making in automated testing. These metrics optimize management of fault corrections and retest, giving consideration to the impact of the identified weaknesses, such as fault-prone areas in software.",2010,0, 4974,Searching for a Needle in a Haystack: Predicting Security Vulnerabilities for Windows Vista,"Many factors are believed to increase the vulnerability of software systems; for example, the more widely deployed or popular a software system is, the more likely it is to be attacked. Early identification of defects has been a widely investigated topic in software engineering research. Early identification of software vulnerabilities can help mitigate these attacks to a large degree by focusing better security verification efforts in these components. Predicting vulnerabilities is complicated by the fact that vulnerabilities are, most often, few in number and introduce significant bias by creating a sparse dataset in the population. As a result, vulnerability prediction can be thought of as proverbially “searching for a needle in a haystack.” In this paper, we present a large-scale empirical study on Windows Vista, where we empirically evaluate the efficacy of classical metrics like complexity, churn, coverage, dependency measures, and organizational structure of the company to predict vulnerabilities and assess how well these software measures correlate with vulnerabilities. We observed in our experiments that classical software measures predict vulnerabilities with a high precision but low recall values. The actual dependencies, however, predict vulnerabilities with a lower precision but substantially higher recall.",2010,0, 4975,Fault Detection Likelihood of Test Sequence Length,"Testing of graphical user interfaces is important due to its potential to reveal faults in operation and performance of the system under consideration. Most existing test approaches generate test cases as sequences of events of different length. The cost of the test process depends on the number and total length of those test sequences. 
One of the problems to be encountered is the determination of the test sequence length. Widely accepted hypothesis is that the longer the test sequences, the higher the chances to detect faults. However, there is no evidence that an increase of the test sequence length really affect the fault detection. This paper introduces a reliability theoretical approach to analyze the problem in the light of real-life case studies. Based on a reliability growth model the expected number of additional faults is predicted that will be detected when increasing the length of test sequences.",2010,0, 4976,Satisfying Test Preconditions through Guided Object Selection,"A random testing strategy can be effective at finding faults, but may leave some routines entirely untested if it never gets to call them on objects satisfying their preconditions. This limitation is particularly frustrating if the object pool does contain some precondition-satisfying objects but the strategy, which selects objects at random, does not use them. The extension of random testing described in this article addresses the problem. Experimentally, the resulting strategy succeeds in testing 56% of the routines that the pure random strategy missed; it tests hard routines 3.6 times more often; although it misses some of the faults detected by the original strategy, it finds 9.5% more faults overall; and it causes negligible overhead.",2010,0, 4977,"We're Finding Most of the Bugs, but What are We Missing?","We compare two types of model that have been used to predict software fault-proneness in the next release of a software system. Classification models make a binary prediction that a software entity such as a file or module is likely to be either faulty or not faulty in the next release. Ranking models order the entities according to their predicted number of faults. They are generally used to establish a priority for more intensive testing of the entities that occur early in the ranking. We investigate ways of assessing both classification models and ranking models, and the extent to which metrics appropriate for one type of model are also appropriate for the other. Previous work has shown that ranking models are capable of identifying relatively small sets of files that contain 75-95% of the faults detected in the next release of large legacy systems. In our studies of the rankings produced by these models, the faults not contained in the predicted most fault prone files are nearly always distributed across many of the remaining files; i.e., a single file that is in the lower portion of the ranking virtually never contains a large number of faults.",2010,0, 4978,An Application of Six Sigma and Simulation in Software Testing Risk Assessment,"The conventional approach to Risk Assessment in Software Testing is based on analytic models and statistical analysis. The analytic models are static, so they don't account for the inherent variability and uncertainty of the testing process, which is an apparent deficiency. This paper presents an application of Six Sigma and Simulation in Software Testing. DMAIC and simulation are applied to a testing process to assess and mitigate the risk to deliver the product on time, achieving the quality goals. DMAIC is used to improve the process and achieve required (higher) capability. Simulation is used to predict the quality (reliability) and considers the uncertainty and variability, which, in comparison with the analytic models, more accurately models the testing process. 
Presented experiments are applied on a real project using published data. The results are satisfactorily verified. This enhanced approach is compliant with CMMI® and provides for substantial Software Testing performance-driven improvements.",2010,0, 4979,(Un-)Covering Equivalent Mutants,"Mutation testing measures the adequacy of a test suite by seeding artificial defects (mutations) into a program. If a test suite fails to detect a mutation, it may also fail to detect real defects-and hence should be improved. However, there also are mutations which keep the program semantics unchanged and thus cannot be detected by any test suite. Such equivalent mutants must be weeded out manually, which is a tedious task. In this paper, we examine whether changes in coverage can be used to detect non-equivalent mutants: If a mutant changes the coverage of a run, it is more likely to be non-equivalent. In a sample of 140 manually classified mutations of seven Java programs with 5,000 to 100,000 lines of code, we found that: (a) the problem is serious and widespread-about 45% of all undetected mutants turned out to be equivalent; (b) manual classification takes time-about 15 minutes per mutation; (c) coverage is a simple, efficient, and effective means to identify equivalent mutants-with a classification precision of 75% and a recall of 56%; and (d) coverage as an equivalence detector is superior to the state of the art, in particular violations of dynamic invariants. Our detectors have been released as part of the open source JAVALANCHE framework; the data set is publicly available for replication and extension of experiments.",2010,0, 4980,"Toward High-Throughput, Multicriteria Protein-Structure Comparison and Analysis","Protein-structure comparison (PSC) is an essential component of biomedical research as it impacts on, e.g., drug design, molecular docking, protein folding and structure prediction algorithms as well as being essential to the assessment of these predictions. Each of these applications, as well as many others where molecular comparison plays an important role, requires a different notion of similarity that naturally lead to the multicriteria PSC (MC-PSC) problem. Protein (Structure) Comparison, Knowledge, Similarity, and Information (ProCKSI) (www.procksi.org) provides algorithmic solutions for the MC-PSC problem by means of an enhanced structural comparison that relies on the principled application of information fusion to similarity assessments derived from multiple comparison methods. Current MC-PSC works well for moderately sized datasets and it is time consuming as it provides public service to multiple users. Many of the structural bioinformatics applications mentioned above would benefit from the ability to perform, for a dedicated user, thousands or tens of thousands of comparisons through multiple methods in real time, a capacity beyond our current technology. In this paper, we take a key step into that direction by means of a high-throughput distributed reimplementation of ProCKSI for very large datasets. 
The core of the proposed framework lies in the design of an innovative distributed algorithm that runs on each compute node in a cluster/grid environment to perform structure comparison of a given subset of input structures using some of the most popular PSC methods [e.g., universal similarity metric (USM), maximum contact map overlap (MaxCMO), fast alignment and search tool (FAST), distance alignment (DaliLite), combinatorial extension (CE), template modeling alignment (TMAlign)]. We follow this with a procedure of distributed consensus building. Thus, the new algorithms proposed here achieve ProCKSI's similarity assessment quality but with a fraction of the time required by it. Our results show that the proposed distributed method can be used efficiently to compare: 1) a particular protein against a very large protein structures dataset (target-against-all comparison), and 2) a particular very large-scale dataset against itself or against another very large-scale dataset (all-against-all comparison). We conclude the paper by enumerating some of the outstanding challenges for real-time MC-PSC.",2010,0, 4981,The research of outlier data cleaning based on accelerating method,"This paper puts forward an accelerating trend comparison method to deal with outlier data during the data integration process. Namely, outlier data are discovered through the accelerating trend comparison. At the end of this article, a specific description of the outlier data cleaning algorithm and its results is given. The experiments show that it can improve the detection of outlier data and the data quality.",2010,0, 4982,Evaluation approaches of information systems service quality,"This paper has built a new evaluation approach (called SQ-Atten) based on a summary and comparative study of the four traditional service quality assessment instruments. SQ-Atten has a new term, attention, which has a similar function to the zone of tolerance (ZOT). It can analyze the users' satisfaction using the score of attention and performance, and is less difficult than ZOT in collecting data. It has data predictive ability and richer data than SERVPERF. Also, it measures the same 44 items as SERVQUAL, but has higher data reliability because it does not require measuring expectation and the difference operation.",2010,0, 4983,A software reliability prediction model based on benchmark measurement,Software reliability is a very important and active research field in software engineering. There have been one hundred prediction models since the first prediction model was published. But most of them are adopted after software testing and only a few of them can be used before testing. The paper proposes to predict software reliability by making use of measurement data from similar projects based on a software process benchmark. Its prediction uses benchmark measurements and software process data before software testing.,2010,0, 4984,Predict protein subnuclear location with ensemble adaboost classifier,"Protein function prediction with computational methods is becoming an important research field in protein science and bioinformatics. In eukaryotic cells, the knowledge of subnuclear localization is essential for understanding the life function of the nucleus. In this study, a novel ensemble classifier is designed incorporating three AdaBoost classifiers to predict protein subnuclear localization. The base classifier algorithm in the AdaBoost classifier is fuzzy K nearest neighbors (FKNN).
Three parts of amino acid pair compositions with different spaces are computed to construct a feature vector for representing a protein sample. Jackknife cross-validation tests are used to evaluate the performance of the proposed method with two benchmark datasets. Compared with prior works, the promising results obtained indicate that the proposed method is more effective and practical. The current approach may also be used to improve the prediction quality of other protein attributes. The software written in Matlab is available freely by contacting the corresponding author.",2010,0, 4985,Developing Self-Managing Embedded Systems with ASSL,"This research targets formal modeling of embedded systems capable of self-management. In our approach, we use the ASSL (Autonomic System Specification Language) framework as a development environment, where self-management features of embedded systems are formally specified and an implementation is automatically generated. ASSL exposes a rich set of specification constructs that help developers specify event-driven embedded systems. Hardware is sensed via special metrics intended to drive events and self-management policies that help the system handle critical situations in an autonomous reactive manner. We present this approach along with a simulation case study where ASSL is used to develop control software for the wide-angle camera carried on board NASA's Voyager II spacecraft.",2010,0, 4986,Dynamic Policy-Driven Quality of Service in Service-Oriented Systems,"Service-oriented architecture (SOA) middleware has emerged as a powerful and popular distributed computing paradigm due to its high-level abstractions for composing systems and hiding platform-level details. Control of some details hidden by SOA middleware is necessary, however, to provide managed quality of service (QoS) for SOA systems that need predictable performance and behavior. This paper presents a policy-driven approach for managing QoS in SOA systems. We discuss the design of several key QoS services and empirically evaluate their ability to provide QoS under CPU overload and bandwidth-constrained situations.",2010,0, 4987,Levenberg-Marquardt neural network for gear fault diagnosis,"In this study we apply the Levenberg-Marquardt neural network model to the problem of gear fault diagnosis. By using second derivative information, the network convergence speed is improved and the generalization performance is enhanced. Taking a certain gearbox fault signal acquisition experimental system as an example, Matlab software and its neural network toolbox are used to model and simulate. The simulation result shows that the Levenberg-Marquardt neural network has a good performance for common gear fault diagnosis and it can identify various types of faults stably and accurately. Furthermore, compared with a conventional BP neural network, the Levenberg-Marquardt neural network reduces training epochs and improves diagnosis accuracy.",2010,0, 4988,Trust-based rating prediction for recommendation in Web 2.0 collaborative learning social software,"Benefiting from the advent of social software, information sharing becomes pervasive. Personalized rating systems have emerged to evaluate the quality of user-generated content in open environments and provide recommendations based on users' past experience. In this paper, a trust-based rating prediction approach for recommendation in Web 2.0 collaborative learning social software is proposed.
A trust network is exploited in the rating prediction scheme and a multi-relational trust metric is developed in an implicit way. Finally, the evaluation of the approach is performed using the dataset of a collaborative learning social software, namely Remashed.",2010,0, 4989,Software Maintenance Prediction Using Weighted Scenarios: An Architecture Perspective,"Software maintenance is considered one of the most important issues in software engineering which has some serious implications in terms of cost and effort. It consumes an enormous amount of an organization's overall resources. On the other hand, the software architecture of an application has a considerable effect on quality factors such as maintainability, performance, reliability and flexibility etc. Using software architecture for quantification of a certain quality factor will help organizations to plan resources accordingly. This paper is an attempt to predict software maintenance effort at the architecture level. The method takes requirements, domain knowledge and general software engineering knowledge as input in order to prescribe application architecture. Once application architecture is prescribed, then weighted scenarios and certain factors (i.e. system novelty, turnover and maintenance staff ability, documentation quality, testing quality etc) that affect software maintenance are applied to application architecture to quantify maintenance effort. The technique is illustrated and evaluated using a web content extraction application architecture.",2010,0, 4990,Feature Selection for Medical Diagnosis Using Fuzzy Artmap Classification and Intersection Conflict,"Studying complex systems including biological systems is a multi-disciplinary research area. It is driven by the recent explosion of ICT including high-performance computing, high-throughput experiments, the Internet, knowledge discovery and Artificial Intelligence (AI). The goal of this research is to establish a computational architecture and tools to deal with complex systems based on such advanced technologies. Therefore, in the case of medical diagnosis based on a machine learning model, we need to reduce the number of variables according to their relevance, allowing decisions to be taken in real-time. This approach is realized in two stages. In the first one, we classify the fault-free functioning data of the system using the fuzzy-ARTMAP classification. In the second stage, a conflict is evaluated between features of the test data based on the hyper-cubes resulting from the first stage. Two features are in conflict if their intersection does not belong to the model elaborated by the fuzzy-ARTMAP classification.",2010,0, 4991,Evaluation of Error Control Mechanisms Based on System Throughput and Video Playable Frame Rate on Wireless Channel,"Error control mechanisms are widely used in video communications over wireless channels. However, for improving end-to-end video quality, they consume extra bandwidth and reduce effective system throughput. In this paper, considering the parameters of system throughput and playable frame rate as evaluating metrics, we investigate the efficiency of different error control mechanisms. We develop a throughput analytical model to present system effective throughput for different error control mechanisms under different conditions. For a given packet loss probability, both optimal retransmission times in adaptive ARQ and optimal number of redundant packets in adaptive FEC for each type of frame are derived by keeping the system throughput as a constant value.
Also, end-to-end playable frame rates for the two schemes are computed. Then it is concluded which error control scheme is the most suitable for which application condition. Finally, empirical simulation and experimental results with various data analyses are demonstrated.",2010,0, 4992,Research on a Software Trustworthy Measure Model,"Through analyzing the factors affecting software trustworthiness, an index system of trustworthy software estimation is established. The Analytic Hierarchy Process (AHP) is applied to determine the relative importance of factors and indicator items affecting software trustworthiness, the score of the estimation index system is then determined through a fuzzy estimation model, and the combination of both is used to measure software trustworthiness. Finally, this paper presents an example to validate the validity and objectivity of this model.",2010,0, 4993,Reliability Analysis of Embedded Applications in Non-Uniform Fault Tolerant Processors,"Soft error analysis has been greatly aided by the concepts of Architectural Vulnerability Factor (AVF) and Architecturally Correct Execution (ACE). The AVF of a processor is defined as the probability that a bit flip in the processor architecture will result in a visible error in the final output of a program. In this work, we exploit the techniques of AVF analysis to introduce a software-level vulnerability analysis. This metric allows insight into the vulnerability of instructions and software to hardware faults with a micro-architectural fault injection method. The proposed metric can be used to make judgments about the reliability of different programs on different processors with regard to architectural and compiler guidelines for improving the processor reliability.",2010,0, 4994,Centered Hyperspherical and Hyperellipsoidal One-Class Support Vector Machines for Anomaly Detection in Sensor Networks,"Anomaly detection in wireless sensor networks is an important challenge for tasks such as intrusion detection and monitoring applications. This paper proposes two approaches to detecting anomalies from measurements from sensor networks. The first approach is a linear programming-based hyperellipsoidal formulation, which is called a centered hyperellipsoidal support vector machine (CESVM). While this CESVM approach has advantages in terms of its flexibility in the selection of parameters and the computational complexity, it has limited scope for distributed implementation in sensor networks. In our second approach, we propose a distributed anomaly detection algorithm for sensor networks using a one-class quarter-sphere support vector machine (QSSVM). Here a hypersphere is found that captures normal data vectors in a higher dimensional space for each sensor node. Then summary information about the hyperspheres is communicated among the nodes to arrive at a global hypersphere, which is used by the sensors to identify any anomalies in their measurements. We show that the CESVM and QSSVM formulations can both achieve high detection accuracies on a variety of real and synthetic data sets. Our evaluation of the distributed algorithm using QSSVM reveals that it detects anomalies with comparable accuracy and less communication overhead than a centralized approach.",2010,0, 4995,A fault section detection method using ZCT when a single phase to ground fault in ungrounded distribution system,"Single line to ground fault (SLG) detection in an ungrounded network is very difficult, because the fault current magnitude is very small.
It is generated by a charging current between the distribution line and ground. As it is very small, it is not used for fault detection in the case of SLG. So, SLG has normally been detected by a switching sequence method, which makes customers experience blackouts. A new fault detection algorithm based on a comparison of zero-sequence current and line-to-line voltage phases is proposed. The algorithm uses a ZCT installed in the ungrounded distribution network. The algorithm proposed in this paper has the advantage that it can detect the fault phase and distinguish the faulted section as well. The simulation tests of the proposed algorithm were performed using Matlab Simulink and the results are presented in the paper.",2010,0, 4996,Performance analysis of web service composition based on stochastic well-formed workflow,"Quality-of-service (QoS) in Web services encompasses various non-functional issues such as performance, dependability and security, etc. As more and more Web services become available, QoS capability is becoming a decisive factor in distinguishing services. The management of QoS metrics directly impacts the success of services participating in workflow-based applications. Therefore, when services are created or managed using workflows or Web processes, the underlying workflow engine must be able to estimate, monitor, and control the QoS rendered to customers. In this paper, we present an approach to evaluate the QoS of service composition based on a decomposition algorithm and the numerical analysis of stochastic well-formed workflow (SWWF) models of web service composition.",2010,0, 4997,Speech compression using LPC and wavelet,"The past decade has witnessed substantial progress towards the application of low-rate speech coders to civilian and military communications as well as computer-related voice applications. Central to this progress has been the development of new speech coders capable of producing high-quality speech at low data rates. Most of these coders incorporate mechanisms to represent the spectral properties of speech, provide for speech waveform matching, and optimize the coder's performance for the human ear. A number of these coders have already been adopted in national and international cellular telephony standards. In mobile communication systems, service providers are continuously met with the challenge of accommodating more users within a limited allocated bandwidth. For this reason, manufacturers and service providers are continuously in search of low bit-rate speech coders that deliver toll-quality speech. In this paper a simulated low bit rate vocoder (LPC) was implemented using MATLAB. The result obtained from LPC was compared with another implemented voice compression method using the wavelet transform. From the results we see that the performance of the wavelet transform was better than that of LPC.",2010,0, 4998,Adaptive random testing of mobile application,"Mobile applications are becoming more and more powerful yet also more complex. While mobile application users expect the application to be reliable and secure, the complexity of the mobile application makes it prone to faults. Mobile application engineers and testers use testing techniques to ensure the quality of mobile applications. However, the testing of mobile applications is time-consuming and hard to automate. In this paper, we model the mobile application from a black box view and propose a distance metric for the test cases of mobile software. We further propose an ART test case generation technique for mobile applications.
Our experiment shows our ART tool can both reduce the number of test cases and the time needed to expose first fault when compared with random technique.",2010,0, 4999,Evaluation of software testing process based on Bayesian networks,"In this paper, we will introduce a Bayesian networks (BN) approach for probability evaluation method of software quality assurance. Then, we will present a method for transforming Fault Tree Analysis (FTA) to Bayesian networks and build an evaluation model based on Bayesian networks. Bayesian networks can perform forward risk prediction and backward diagnosis analysis by deduction on the model. Finally, we will illustrate the rationality and validity of the Bayesian networks through an example of evaluation of software testing process.",2010,0, 5000,Defect association and complexity prediction by mining association and clustering rules,"Number of defects remaining in a system provides an insight into the quality of the system. Software defect prediction focuses on classifying the modules of a system into fault prone and non-fault prone modules. This paper focuses on predicting the fault prone modules as well as identifying the types of defects that occur in the fault prone modules. Software defect prediction is combined with association rule mining to determine the associations that occur among the detected defects and the effort required for isolating and correcting these defects. Clustering rules are used to classify the defects into groups indicating their complexity: SIMPLE, MODERATE and COMPLEX. Moreover the defects are used to predict the effect on the project schedules and the nature of risk concerning the completion of such projects.",2010,0, 5001,A source-based risk analysis approach for software test optimization,"In this paper we introduce our proposed technique for software component test prioritization and optimization which is based on a source-code based risk analysis. Software test is one of the most critical steps in the software development. Considering that the time and human resources of a software project are limited, software test should be scheduled and planned very carefully. In this paper we introduce a classification approach that provides the developers with a risk model of the application which is specifically designed to assist the testing process by identifying the most important components and their corresponding test effort estimation. We designed an analyser tool to apply our technique to a test software project and we presented the results in this paper.",2010,0, 5002,Keystroke identification with a genetic fuzzy classifier,"This paper proposes the use of fuzzy if-then rules for Keystroke identification. The proposed methodology modifies Ishibuchi's genetic fuzzy classifier to handle high dimensional problems such as keystroke identification. High dimensional property of a problem increases the number of rules with low fitness. For decreasing them, rule initialization and coding are modified. Furthermore a new heuristic method is developed for improving the population quality while running GA. Experimental result demonstrates that we can achieve better running time, interpretability and accuracy with these modifications.",2010,0, 5003,An efficient experimental approach for the uncertainty estimation of QoS parameters in communication networks,"In communication networks setup and tuning activities, a key issue is to assess the impact of a new service running on the network on the overall Quality of Service. 
To this aim, suitable figures of merit and test beds have to be adopted and time-consuming measurement campaigns generally should be carried out. A preliminary issue to be accomplished is the metrological characterization of the test set-up, aimed at providing a confidence level and a variability interval to the measurement results. This allows identifying and evaluating the intrinsic uncertainty to be considered in the experimental measurement of Quality of Service parameters. This paper proposes an original experimental approach suitable for the purpose. The uncertainty components involved in the measurement process are identified and experimentally quantified by means of effective statistical analyses. The proposed approach takes into account the general characteristics of the network topology, the number and type of devices involved, the characteristics of the current services operating on the network, and of the new services to be implemented, as well as the intrinsic uncertainties related to the set-up and to the measurement method. As an application example, the proposed approach has been applied to the measurement of the packet jitter on a test bed involving a real computer network spread over several kilometers. The obtained results show the effectiveness of the proposal.",2010,0, 5004,Induction defectoscope based on uniform eddy current probe with GMRs,"Defect detection in conductive plates represents an important issue. The present work proposes an induction defectoscope that includes a uniform eddy current probe with a rectangular excitation coil and a set of giant magnetoresistance sensors (GMR). The excitation current, the acquisition of the voltages delivered by the GMR and the signal processing of the acquired signal are performed by a real-time control and processing unit based on a TMS320C6713 digital signal processor (DSP). Different tests were carried out regarding the excitation coil position versus crack orientation and also regarding the GMR position inside the coil and the best response concerning the crack detection for a given aluminum plate specimen. Embedded software was developed using a NI LabVIEW DSP module including sinusoidal signal generation, amplitude and phase extraction using a sine-fitting algorithm and a GUI for the induction defectoscope. Experimental results with probe characterization and detection of defects were included in the paper.",2010,0, 5005,Trustable web services with dynamic confidence time interval,"One part of the trustworthiness of web services applications over the network is the confidence in services that providers can guarantee to their customers. Therefore, after the development process, web services developers must be sure that the delivered services are qualified for availability and reliability during their execution. However, there is a critical problem when errors occur during the execution time of the service agent, such as the infinite loop problem in the service process. This unexpected problem of the web services software can cause critical damage in various aspects, especially the loss of human life. Although various methods have been proposed to protect against unexpected errors, most of them are verification and validation procedures used during the development process. Nevertheless, these methods cannot completely solve the infinite loop problem since this problem is usually caused by unexpected values obtained from the execution of request and response processes.
Therefore, this paper proposes a system architecture that includes a protection mechanism to completely detect and protect against the unbounded loop problem of web services when the service requests of each requester occur under dynamic conditions. This proposed solution can guarantee that users will definitely be protected from a critical loss caused by an unexpected infinite loop in the web services system. Consequently, all service agents with a dynamic loop control condition can be trusted.",2010,0, 5006,esrcTool: A Tool to Estimate the Software Risk and Cost,"Function Point is a well-known, established method to estimate the size of software projects. There are several areas of software engineering in which we can use function point analysis (FPA), like project planning, project construction, software implementation, etc. In this paper we have used the function point approach in order to develop the architecture of the esrcTool. This tool is used for two different purposes: firstly, to estimate the risk in the software and secondly to estimate the cost of the software. In the literature of software engineering there are many models to estimate the risk in the software, like Soft Risk Model, SRAM, SRAEM and so on. But in the esrcTool we have used SRAEM, i.e. the Software Risk Assessment and Estimation Model, because in this model FP is used as an input variable, and on the other hand, in order to determine the cost of the software we have used the International Software Benchmarking Standards Group Release Report (ISBSG).",2010,0, 5007,Complexity Estimation Approach for Debugging in Parallel,Multiple faults in software often prevent debuggers from efficiently localizing a fault. This is mainly due to not knowing the exact number of faults in a failing program as some of the faults get obfuscated. Many techniques have been proposed to isolate different faults in a program thereby creating separate sets of failing program statements. To evenly divide these statements amongst debuggers we must know the level of work required to debug each slice. In this paper we propose a new technique to calculate the complexity of faulty program slices to efficiently distribute the work among debuggers for simultaneous debugging. The technique calculates the complexity of the entire slice by taking into account the suspiciousness of every faulty statement. To establish confidence in the effectiveness and efficiency of the proposed technique we illustrate the whole idea with the help of an example. Results of the analysis indicate the technique will be helpful (a) for efficient distribution of work among debuggers (b) will allow simultaneous debugging of different faulty program slices (c) will help minimize the time and manual labor.,2010,0, 5008,A Generic Model for the Specification of Software Interface Requirements and Measurement of Their Functional Size,"The European ECSS-E-40 series of standards for the aerospace industry includes interfaces as one of 16 types of non functional requirement (NFR) for embedded and real-time software. An interface is typically described at the system level as a non functional requirement, and a number of concepts and terms are provided in that series to describe various types of candidate interfaces.
This paper collects and organizes these interface-related descriptions into a generic model for the specification of software interface requirements, and to measure their functional size for estimation purposes using the COSMIC ISO 19761 standard.",2010,0, 5009,Improved leak detection quality by automatic signal filtering,"Constant improvement of leak detection techniques in pipe systems used for liquids transportation is a priority for companies and authorities around the world. If a pipe presents leakage problems, the liquid which is lost generates specific signals which are transmitted in the material of the pipe. These signals can be recorded using accelerometers. They are analyzed with the purpose of identifying the location of the leak. An important data analysis tool which helps in the process of leak position detection is the Cross Correlation Function (CCF). It is calculated between two simultaneously recorded specific leak signals with the purpose of time delay estimation. Time delay estimation calculations lead to the identification of the leak position. However, a traditional implementation of the CCF may not be satisfactory. Recorded signals are affected by noise coming from pipe elbows, junctions or traffic. All these unwanted noise sources can influence the aspect and accuracy of the CCF. Further improvements are necessary. This paper presents a Labview 8.5 implementation of an algorithm which improves the quality of the calculated CCF. Being part of a more complex software application, the algorithm uses the calculation of the coherence function. By analyzing the coherence the application will automatically select two frequency intervals (narrow bands) for filtering the leak signals. After the filtering process, the calculation of the CCF shows an increased quality. The application can be accessed from distance if a remote user needs to study or save the CCF results. No filtering settings are required and the method assures good results especially in the case of recorded signals which are highly affected by unwanted noise.",2010,0, 5010,Portable artificial nose system for assessing air quality in swine buildings,"To practice an efficient air quality management in livestocks, a standardized measurement technology has always been requested in order to assess the odor, of which results are acceptable by every party involved, i.e., the owner, the state and the public. This paper has reported on a prototype of portable electronic nose (e-nose) designed specially to assess malodors in swine buildings atmosphere in the pig farm. The briefcase formed e-nose consists of eight chemical gas sensors that are sensitive to gases usually presented in pig farm such as ammonia, hydrogen sulfide, hydrocarbons etc. The system contains gas flow controller, measurement circuit and data acquisition unit, all of which are automated and controlled by an in-house software on a notebook PC via a USB port. We have tested the functionality of this e-nose in a pig farm under a real project aimed specifically to reduce the odor emission from swine buildings in the pig farm. The e-nose was used to assess the air quality inside sampled swine buildings. 
Based on the results given in this paper, recommendations on appropriate feeding menu, buildings' cleaning schedule and emission control program have been made.",2010,0, 5011,Using Cloud Constructs and Predictive Analysis to Enable Pre-Failure Process Migration in HPC Systems,Accurate failure prediction in conjunction with efficient process migration facilities including some Cloud constructs can enable failure avoidance in large-scale high performance computing (HPC) platforms. In this work we demonstrate a prototype system that incorporates our probabilistic failure prediction system with virtualization mechanisms and techniques to provide a whole system approach to failure avoidance. This work utilizes a failure scenario based on a real-world HPC case study.,2010,0, 5012,The Design and Implementation of IEEE 802.21 and Its Application on Wireless VoIP,"Supporting a multimode mobile device to seamlessly switch its connections between various wireless access networks, such as WiMAX, Wi-Fi, and LTE, is an important research issue in 4G communications. The IEEE 802.21 working group defines the Media Independent Handover (MIH) standard to provide cross layer control and the information of network environment and services to optimize the handoff process. This work designs and implements the MIH middleware in a Linux platform to provide MIH event services (MIH-ES), MIH command services (MIH-CS) and MIH information services (MIH-IS). Based on the MIH software, we develop a high quality VoIP system which integrates Stream Control Transmission Protocol (SCTP), two MIH-IS based user motion detection (UMD) services and an adaptive QoS playout algorithm (AQP). The MIH services, multi-homing capability and dynamic address configuration extension of SCTP are applied in the VoIP system to perform seamless handoffs. Moreover, AQP adjusts the playout buffer by using the retransmission of partial reliable SCTP (PR-SCTP) and the MIH-ES to reduce the packet loss rate and improve speech quality. The experiment results indicate that MIH can enhance the performance of the handoff process and the quality of wireless VoIP services.",2010,0, 5013,"The SlimSAR: A small, multi-frequency, Synthetic Aperture Radar for UAS operation","The SlimSAR is a small, low-cost, Synthetic Aperture Radar (SAR) and represents a new advancement in highperformance SAR. ARTEMIS employed a unique design methodology that exploits previous developments in designing the Slim-SAR to be smaller, lighter, and more flexible while consuming less power than typical SAR systems. With an L-band core, and frequency block converters, the system is very suitable for use on a number of small UAS's. Both linear-frequency-modulated continuous-wave (LFM-CW), which achieves high signal-to-noise ratio while transmitting with less power, and pulsed mode have been tested. The flexible control software allows us to change the radar parameters in flight. The system has a built-in high quality GPS/IMU motion measurement solution and can also be packaged with a small data link and a gimbal for high frequency antennas. Multi-frequency SAR provides day and night imaging through smoke, dust, rain, and clouds with the advantages of additional capabilites at different frequencies (i.e. 
dry ground and foliage penetration at low frequencies, and change detection at high frequencies.)",2010,0, 5014,An effective nonparametric quickest detection procedure based on Q-Q distance,"Quickest detection schemes are geared toward detecting a change in the state of a data stream or a real-time process. Classical quickest detection schemes invariably assume knowledge of the pre-change and post-change distributions that may not be available in many applications. In this paper, we present a distribution free nonparametric quickest detection procedure based on a novel distance measure, referred to as the Q-Q distance calculated from the Q-Q plot, for detection of distribution changes. Through experimental study, we show that the Q-Q distance-based detection procedure presents comparable or better performance compared to classical parametric and other nonparametric procedures. The proposed procedure is most effective when detecting small changes.",2010,0, 5015,Improved Internet traffic analysis via optimized sampling,"Applications to evaluate Internet quality-of-service and increase network security are essential to maintaining reliability and high performance in computer networks. These applications typically use very accurate, but high cost, hardware measurement systems. Alternate, less expensive software based systems are often impractical for use with analysis applications because they reduce the number and accuracy of measurements using a technique called interrupt coalescence, which can be viewed as a form of sampling. The goal of this paper is to optimize the way interrupt coalescence groups packets into measurements so as to retain as much of the packet timing information as possible. Our optimized solution produces estimates of timing distributions much closer to those obtained using hardware based systems. Further we show that for a real Internet analysis application, periodic signal detection, using measurements generated with our method improved detection times by at least 36%.",2010,0, 5016,Application of ANN in food safety early warning,"In recent years, frequent occurrence of food safety crisis has seriously affected people's health, which causes widespread concern around the world. To effectively track and trace food has become an extremely urgent global issue. Early warning of food safety can prevent food safety crisis. However, there is still very few automatic tracking systems for the entire food supply chain. In the paper we propose a data mining technique to predict food quality using back-propagation (BP) neural network. Some prediction errors could occur when predicted data are near threshold values. To reduce errors, data near the threshold values are selected to train our system. Special care of threshold values and performance of our proposed algorithm are discussed in the paper.",2010,0, 5017,Ultrasonic Waveguides Detection-based approach to locate defect on workpiece,"Conventional ultrasonic techniques, such as pulse-echo, has been limited to testing relatively simple geometries or interrogating the region in the immediate vicinity of the transducer. A novel, efficiency methodology uses ultrasonic waveguides to examine structural components. The advantages of this technique include: its ability to detect the entire structure in a single measurement through long distance with little attenuation; and its capacity to test inaccessible regions of complex components. 
However, in practical work, this technique suffers from dispersion and mode conversion phenomena, which result in a poor signal to noise ratio and thereby influence the actual application of this technique. In order to solve this problem, simulation combined with experiments can not only verify the feasibility of this technique, but also provide guiding significance for actual work. This paper reports on a novel approach to the simplification of the simulation of Ultrasonic Waveguides Detection. The first step is the selection of the frequency of the signal which has the fastest group velocity and relatively small dispersion. The second step is the determination of Δt and le. Due to the numerical analysis characteristics of the general-purpose software ANSYS, two key parameters, the time step Δt and the mesh element size le, need to be carefully selected. This report finds the balance point between the accuracy of results and calculation time to determine the two key parameters which significantly influence the simulation result. Finally, this report shows the experimental results on a two-dimensional flat panel structure and a three-dimensional triangle-iron structure, respectively. From the results shown, the error between the simulation and the actual value is less than 0.4%, which proves the feasibility of this approach.",2010,0, 5018,Probabilistic fault prediction of incipient fault,"In this work, a probabilistic fault prediction approach is presented for prediction of incipient faults in an uncertain way. The approach has two stages. In the first stage, normal data is analyzed by principal component analysis (PCA) to get control limits of the statistics of T2 and SPE. In the second stage, fault data is processed by PCA so as to derive the statistics of T2 and SPE. Then, the samplings of these two statistics obeying a certain prediction distribution are obtained using a Bayesian AR model on the basis of the Winbugs software. At last, one-step prediction fault probabilities are estimated by a kernel density estimation method according to the statistics' corresponding control limits. The prediction performance of this approach is illustrated using the data from the simulator of the Tennessee Eastman process.",2010,0, 5019,Model Based Testing Using Software Architecture,"Software testing is an ultimate obstacle to the final release of software products. Software testing is also a leading cost factor in the overall construction of software products. On the one hand, model-based testing methods are new testing techniques aimed at increasing the reliability of software, and decreasing the cost by automatically generating a suite of test cases from a formal behavioral model of a system. On the other hand, the architectural specification of a system represents a gross structural and behavioral aspect of a system at a high level of abstraction. Formal architectural specifications of a system have also shown promise in detecting faults during software back-end development. In this work, we discuss a hybrid testing method to generate test cases. Our proposed method combines the benefits of model-based testing with the benefits of software architecture in a unique way. A simple Client/Server system has been used to illustrate the practicality of our testing technique.",2010,0, 5020,Improving Change Impact Analysis with a Tight Integrated Process and Tool,"Change impact analysis plays an immanent role in the maintenance and enhancement of software systems, especially for defect prevention.
In our previous work we have developed approaches to detect logical dependencies among artifacts in repositories and calculated different metrics. But that is not enough, because in order to use change impact analysis a detailed process with guidelines on the one hand, and appropriate tools on the other hand will be needed. To show the importance of such an approach, we have gathered problems and analyzed requirements in the field of a social insurance company. Based on these requirements we have developed a process and a tool which helps analysts in performing activities along the defined process. In this paper we present the tool for change impact analysis and the significance of the tight integration of the tool into the process.",2010,0, 5021,Diagnosing Failures in Wireless Networks Using Fault Signatures,"Detection and diagnosis of failures in wireless networks is of crucial importance. It is also a very challenging task, given the myriad of problems that plague present day wireless networks. A host of issues such as software bugs, hardware failures, and environmental factors, can cause performance degradations in wireless networks. As part of this study, we propose a new approach for diagnosing performance degradations in wireless networks, based on the concept of ``fault signatures''. Our goal is to construct signatures for known faults in wireless networks and utilize these to identify particular faults. Via preliminary experiments, we show how these signatures can be generated and how they can help us in diagnosing network faults and distinguishing them from legitimate network events. Unlike most previous approaches, our scheme allows us to identify the root cause of the fault by capturing the state of the network parameters during the occurrence of the fault.",2010,0, 5022,Sensitivity of Two Coverage-Based Software Reliability Models to Variations in the Operational Profile,"Software in field use may be utilized by users with diverse profiles. The way software is used affects the reliability perceived by its users, that is, software reliability may not be the same for different operational profiles. Two software reliability growth models based on structural testing coverage were evaluated with respect to their sensitivity to variations in operational profile. An experiment was performed on a real program (SPACE) with real defects, submitted to three distinct operational profiles. Distinction among the operational profiles was assessed by applying the Kolmogorov-Smirnov test. Testing coverage was measured according to the following criteria: all-nodes, all-arcs, all-uses, and all-potential-uses. Reliability measured for each operational profile was compared to the reliabilities estimated by the two models, estimated reliabilities were obtained using the coverage for the four criteria. Results from the experiment show that the predictive ability of the two models is not affected by variations in the operational profile of the program.",2010,0, 5023,Software Reliability Modeling with Integrated Test Coverage,"The models to predicate software reliability using test coverage (TC) have been widely studied in recent years. An increasing number of TC based software reliability models (TC-SRMs) have been developed. Meanwhile, to quantify the degree of effectiveness of software testing comprehensively, over ten kinds of TC measures have been proposed and each of them has its strength and weakness. 
A common problem with current TC-SRMs is that only a few classic TC are presented respectively. How to use all various TC metrics simultaneously in TC-SRMs has not been well discussed. This paper proposes a novel concept, integrated test coverage (ITC), to combine multi kinds of TC metrics as an integrated parameter for test effectiveness and reliability estimation. Then, the paper reviews the available TC-SRMs and classifies them into five categories by the modeling methods. Based on the weakness analysis of each category TC-SRMs respectively, we suggest that ITC can be applied to enhance these reported TC-SRMs.",2010,0, 5024,"QoS management for NNEW: requirements, challenges, and solutions",Presents a collection of slides covering the following topics:NNEW (NextGen network enabled weather); QoS management; and NextGen weather mission.,2010,0, 5025,A Comprehensive Diagnosis Methodology for Complex Hybrid Systems: A Case Study on Spacecraft Power Distribution Systems,"The application of model-based diagnosis schemes to real systems introduces many significant challenges, such as building accurate system models for heterogeneous systems with complex behaviors, dealing with noisy measurements and disturbances, and producing valuable results in a timely manner with limited information and computational resources. The Advanced Diagnostics and Prognostics Testbed (ADAPT), which was deployed at the NASA Ames Research Center, is a representative spacecraft electrical power distribution system that embodies a number of these challenges. ADAPT contains a large number of interconnected components, and a set of circuit breakers and relays that enable a number of distinct power distribution configurations. The system includes electrical dc and ac loads, mechanical subsystems (such as motors), and fluid systems (such as pumps). The system components are susceptible to different types of faults, i.e., unexpected changes in parameter values, discrete faults in switching elements, and sensor faults. This paper presents Hybrid Transcend, which is a comprehensive model-based diagnosis scheme to address these challenges. The scheme uses the hybrid bond graph modeling language to systematically develop computational models and algorithms for hybrid state estimation, robust fault detection, and efficient fault isolation. The computational methods are implemented as a suite of software tools that enable diagnostic analysis and testing through simulation, diagnosability studies, and deployment on the experimental testbed. Simulation and experimental results demonstrate the effectiveness of the methodology.",2010,0, 5026,A simple and efficient way to compute depth maps for multi-view videos,"This paper deals with depth maps extraction from multi-view video. Contrary to standard stereo matching-based approaches, depth maps are computed here using optical flow estimations between consecutive views. We compare our approach with the one proposed in the Depth Estimation Reference Software (DERS) for normalization purposes in the ISO-MPEG 3DV group. Experiments conducted on sequences provided to the normalization community show that the presented method provides high quality depth maps in terms of depth fidelity and virtual views synthesis. 
Moreover, being implemented on the GPU, it is far faster than the DERS.",2010,0, 5027,Evaluation of depth compression and view synthesis distortions in multiview-video-plus-depth coding systems,"Several quality evaluation studies have been performed for video-plus-depth coding systems. In these studies, however, the distortions in the synthesized views have been quantified in experimental setups where both the texture and depth videos are compressed. Nevertheless, there are several factors that affect the quality of the synthesized view. Incorporating more than one source of distortion in the study could be misleading; one source of distortion could mask (or be masked by) the effect of other sources of distortion. In this paper, we conduct a quality evaluation study that aims to assess the distortions introduced by the view synthesis procedure and depth map compression in multiview-video-plus-depth coding systems. We report important findings that many of the existing studies have overlooked, yet are essential to the reliability of quality evaluation. In particular, we show that the view synthesis reference software yields high distortions that mask those due to depth map compression, when the distortion is measured by average luma peak signal-to-noise ratio. In addition, we show what quality metric to use in order to reliably quantify the effect of depth map compression on view synthesis quality. Experimental results that support these findings are provided for both synthetic and real multiview-video-plus-depth sequences.",2010,0, 5028,A software-based receiver sampling frequency calibration technique and its application in GPS signal quality monitoring,"This paper has investigated the sampling frequency error impact on the signal processing in a software-correlator based GPS receiver as well as the periodic averaging technique in a pre-correlation GPS signal quality monitor. The refined signal model of receiver processing in the presence of clock error is established as the foundation of the performance analyses. A software-based method is developed to accurately calibrate both the digital IF and the sampling frequency simultaneously. The method requires no additional hardware other than the GPS receiver RF front end output samples. It enables inline calibration of the receiver measurements instead of complicated post-processing. The performance of the technique is evaluated using simulated signals as well as live GPS signals collected by several GPS data acquisition equipments, including clear time domain waveforms/eye patterns, amplitude probability density histograms, Power Spectrum Density (PSD) envelopes, and correlation-related characteristics. The results show that we can calibrate the sampling frequency with an accuracy resolution of 10âˆ? of the true sampling frequency online, and the pre-correlation SNR can be potentially improved by 39dB using periodic averaging.",2010,0, 5029,A Quality Model in a Quality Evaluation Framework for MDWE methodologies,"Nowadays, diverse development methodologies exist in the field of Model-Driven Web Engineering (MDWE), each of which covers different levels of abstraction on Model-Driven Architecture (MDA): CIM, PIM, PSM and Code. Given the high number of methodologies available, it is necessary to evaluate the quality of existing methodologies and provide helpful information to the developers. Furthermore, proposals are constantly appearing and the need may arise not only to evaluate the quality but also to find out how it can be improved. 
In this context, QuEF (Quality Evaluation Framework) can be employed to assess the quality of MDWE methodologies. This article presents the work being carried out and describes tasks to define a Quality Model component for QuEF. This component would be responsible for providing the basis for specifying quality requirements with the purpose of evaluating quality.",2010,0, 5030,Software risk assessment and evaluation process (SRAEP) using model based approach,"Software Risk Evaluation (SRE) is a process for identifying, analyzing, and developing mitigation strategies for risks in a software intensive system while it is in development. Risk assessment incorporates risk analysis and risk management, i.e. it combines systematic processes for risk identification and determination of their consequences, and how to deal with these risks? Many risk assessment methodologies exist, focusing on different types of risk or different areas of concern. Risk evaluation means to determine level of risk, prioritize the risk and categorize the risk. In this paper we have proposed a Software Risk Assessment and Evaluation Process (SRAEP) using model based approach. We have used model based approach because it requires correct description of the target system, its context and all security features. In SRAEP, we have used the software fault tree (SFT) to identify the risk. Finally we have compared the weaknesses of existing Software Risk Assessment and Estimation Model (SRAEM) with the proposed SRAEP in order to show the importance of software fault tree.",2010,0, 5031,Algorithm-based fault tolerance for many-core architectures,"Modern many-core architectures with hundreds of cores provide a high computational potential. This makes them particularly interesting for scientific high-performance computing and simulation technology. Like all nano scaled semiconductor devices, many-core processors are prone to reliability harming factors like variations and soft errors. One way to improve the reliability of such systems is software-based hardware fault tolerance. Here, the software is able to detect and correct errors introduced by the hardware. In this work, we propose a software-based approach to improve the reliability of matrix operations on many-core processors. These operations are key components in many scientific applications.",2010,0, 5032,Microprocessor fault-tolerance via on-the-fly partial reconfiguration,"This paper presents a novel approach to exploit FPGA dynamic partial reconfiguration to improve the fault tolerance of complex microprocessor-based systems, with no need to statically reserve area to host redundant components. The proposed method not only improves the survivability of the system by allowing the online replacement of defective key parts of the processor, but also provides performance graceful degradation by executing in software the tasks that were executed in hardware before a fault and the subsequent reconfiguration happened. The advantage of the proposed approach is that thanks to a hardware hypervisor, the CPU is totally unaware of the reconfiguration happening in real-time, and there's no dependency on the CPU to perform it. 
As proof of concept a design using this idea has been developed, using the LEON3 open-source processor, synthesized on a Virtex 4 FPGA.",2010,0, 5033,A Practical Approach to Robust Design of a RFID Triple-Band PIFA Structure,"This paper presents a practical methodology of obtaining a robust optimal solution for a multi U-slot PIFA (Planar Inverted F-Antenna) structure with triple bands of 433 MHz, 912 MHz and 2.45 GHz. Using evolutionary strategy, a global optimum is first sought out in terms of the dimensions of the slots and shorting strip. Then, starting with the optimized values, Taguchi quality method is carried out in order to obtain a robust design of the antenna against the changes of uncontrollable factors such as material property and feed position. To prove the validity of the proposed method, the performance of the antenna is predicted by general-purpose electromagnetic software and also compared to experimental results.",2010,0, 5034,Correlation between Indoor Air Distribution and Pollutants in Natural Ventilation,"The indoor air quality of lecture room in the university located in Xi'an city centre was assessed. The primary aim was to obtain correlations between the air distribution and pollutants. It was found that air distribution had important effect on the IAQ, and rational air distribution has played an important part in pollutants dispersion and attenuation. The visual air-flow field of lecture room was obtained via the computational fluid dynamics (CFD) method and Fluent software, meanwhile, some parameters of indoor pollutants was gained by field testing and the lecture room characteristics were investigated. The results showed that indoor pollutants can't be removed just depending on natural ventilation when there are vortexes in the air distribution. Furthermore, reasonable suggestions for creating healthy teaching environment and better IAQ are offered.",2010,0, 5035,Early software reliability prediction based on support vector machines with genetic algorithms,"With recent strong emphasis on rapid development of information technology, the decisions made on the basis of early software reliability estimation can have greatest impact on schedules and cost of software projects. Software reliability prediction models is very helpful for developers and testers to know the phase in which corrective action need to be performed in order to achieve target reliability estimate. In this paper, an SVM-based model for software reliability forecasting is proposed. It is also demonstrated that only recent failure data is enough for model training. Two types of model input data selection in the literature are employed to illustrate the performances of various prediction models.",2010,0, 5036,Study from implementing algorithms for face detection and recognition for embedded systems,"These Face detection and recognition research has attracted great attention in recent years. Automatic face detection has great potential in a large array of application areas, including banking and security system access control, video surveillance, and multimedia information retrieval. In this paper, we discuss the opportunity and methodology for implementation of an embedded face detection system like web-camera or digital camera. Face detection is a widely studied topic in computer vision, and advances in algorithms, low cost processing, and CMOS imagers make it practical for embedded consumer applications. As with graphics, the best cost-performance ratio is achieved with dedicated hardware. 
The challenges of face detection in embedded environments include bandwidth constraints set by low-cost memory or processor speed, and the need to find an optimal solution, because applications require reliability and low cost in the final solution.",2010,0, 5037,"Constrained-random test bench for synthesis: Technique, tools and results","This paper presents the technique and tools for automation of the synthesis of constrained-random test-benches for verification of the synthesizable designs of microprocessors and microcontrollers. The structure and parameters of a constrained-random test-bench are coded by a stochastic grammar that is specified by the elaborated tools. The application, called RandGen, generates a test-bench instantiation which is inserted in the design for synthesis. Also, the elaborated tools allow the estimation of various constrained-random parameters. The performed test experiments have shown that the a priori estimations and a posteriori test results are in good agreement.",2010,0, 5038,An indirect adaptive control strategy for a lactic fermentation bioprocess,"This paper presents the design and the analysis of an indirect adaptive control strategy for lactic acid production that is carried out in two cascaded continuous stirred tank bioreactors. The indirect adaptive control structure is based on the nonlinear process model and is derived by combining a linearizing control law with a new parameter estimator, which plays the role of the software sensor for on-line estimation of the unknown bioprocess kinetics. The behaviour and performance of both estimation and control algorithms are illustrated by simulations applied in the case of a lactic fermentation bioprocess for which the kinetic dynamics are strongly nonlinear, time-varying and completely unknown.",2010,0, 5039,Are Longer Test Sequences Always Better? - A Reliability Theoretical Analysis,"One of the interesting questions currently discussed in software testing, both in practice and academia, is the role of test sequences in software testing, especially in fault detection. Previous work includes empirical research on rather small examples tested by relatively short test sequences. The belief is ""the longer the better"", i.e., the longer test sequences are, the more faults are detected. This paper extends those approaches applied to a large commercial application using test sequences of increasing length, which are generated and selected by graph-model-based techniques. Experiments applying many software reliability models of different categories deliver surprising results.",2010,0, 5040,Quantitative Evaluation of Related Web-Based Vulnerabilities,Current web application scanner reports contribute little to diagnosis and remediation when dealing with vulnerabilities that are related or vulnerability variants. We propose a quantitative framework that combines degree of confidence reports pre-computed from various scanners. The output is evaluated and mapped based on derived metrics to appropriate remediation for the detected vulnerabilities and vulnerability variants. The objective is to provide a trusted level of diagnosis and remediation that is appropriate. Examples based on commercial scanners and existing vulnerabilities and variants are used to demonstrate the framework's capability.,2010,0, 5041,Studying the Impact of Social Structures on Software Quality,"Correcting software defects accounts for a significant amount of resources such as time, money and personnel.
To be able to focus testing efforts where they are needed the most, researchers have studied statistical models to predict in which parts of a software system future defects are likely to occur. By studying the mathematical relations between predictor variables used in these models, researchers can form an increased understanding of the important connections between development activities and software quality. Predictor variables used in past top-performing models are largely based on file-oriented measures, such as source code and churn metrics. However, source code is the end product of numerous interlaced and collaborative activities carried out by developers. Traces of such activities can be found in the repositories used to manage development efforts. In this paper, we investigate statistical models to study the impact of social structures between developers and end-users on software quality. These models use predictor variables based on social information mined from the issue tracking and version control repositories of a large open-source software project. The results of our case study are promising and indicate that statistical models based on social information have a similar degree of explanatory power to traditional models. Furthermore, our findings suggest that social information does not substitute for, but rather augments, traditional product and process-based metrics used in defect prediction models.",2010,0, 5042,Development of a Novel Large Capacity Charger for Aircraft Battery,"This paper presents a novel large capacity charger for aircraft batteries, which can be used as maintainable equipment for lead-acid and cadmium-nickel batteries. The circuit scheme, control principle, software design, and charger failure detection and protection circuit are also designed in detail. The main circuit of the proposed charger uses a half-bridge DC-DC converter. The control circuit, which uses voltage and current double closed-loop control, takes the ADμC812 chip with A/D transformation as its core. It can control the charger in a state of constant current or trickle charge. Simultaneously, the battery charger's soft start circuit, over-current protection circuit, over-temperature protection circuit as well as the polarity detection circuit are also designed. A prototype with the ADμC812 is developed, which can control the current of the charger in the state of constant current and trickle charge. Its superior performance, such as electrical separation, simple frame, high efficiency, small volume and light weight, is verified by experimental results.",2010,0, 5043,Objective video quality assessment of mobile television receivers,"The automated evaluation of mobile television receivers shall be facilitated by objective video quality assessment. Therefore, a test bench was set up, whose design and signal flow will be presented first. We describe and compare different full-reference video quality metrics and a simple no-reference metric from the literature. The implemented metrics are evaluated and then used to assess the video quality of receivers for digital television broadcast by applying different RF scenarios. We present the achieved results with the different metrics for the purpose of receiver comparison.",2010,0, 5044,A safety related analog input module based on diagnosis and redundancy,"This paper introduces a safety-related analog input module to achieve data acquisition of 4-20mA current signals.
It is an integral part of the safety-instrumented systems (SIS) which are used to provide critical control and safety applications for automation users. In order to keep the performance of the analog input circuit in good condition, a combination of hardware and software diagnosis should be carried out periodically. These kinds of internal self-diagnosis allow the device to detect improper operation within itself. If a potentially dangerous process occurs, the AI has redundancy to maintain operation even when parts fail. The article presents special hardware, diagnostic software and full fault injection testing of the complete design. The test result shows that the safety-related AI is capable of detecting and locating most potential faults and internal component failures.",2010,0, 5045,A study of medical image tampering detection,"Currently, methods of image tampering detection are divided into two categories, active detection and passive detection. In this paper, we try to review several detection methods and hope this will offer some help to this field. We will focus on the passive detection method for medical images and show some results of our experiments in which we extract statistical features (IQM and HOWS based) of source images and their doctored versions respectively. Manipulations we take to doctor the images include: brightness adjustment, rotation, scaling, filtering, compression and so on, using fixed manipulation parameters and randomly selected parameters. Different classifiers are then chosen to discriminate the source images from the doctored ones. We compare the performance of the classifiers to show that the passive detection methods are effective when dealing with medical image tampering detection.",2010,0, 5046,Overview of power system operational reliability,"The traditional reliability evaluation assesses the long-term performance of a power system, but its constant failure rate cannot reflect the time-varying performances in an operational time frame. This paper studies the operational reliability of a power system under real-time operating conditions based on online operation information obtained from the Energy Management System (EMS). A framework of operational reliability evaluation is proposed systematically. The effects of components' inherent conditions, environmental conditions and operating electrical conditions on failure rates are considered in their operational reliability models. To meet the restrictive requirement of real-time evaluation, a special algorithm is presented. The indices for operational reliability are defined as well. The software, Operational Reliability Evaluation Tools (ORET), is developed to implement the above described functions. The work reported in this paper can be used to assess the system risk in an operational timeframe and give warnings when the system reliability is low.",2010,0, 5047,Using Aurora Road Network Modeler for Active Traffic Management,"Active Traffic Management (ATM) is the ability to dynamically manage recurrent and nonrecurrent congestion based on prevailing traffic conditions. Focusing on trip reliability, it maximizes the effectiveness and efficiency of freeway corridors. ATM relies on fast and trustworthy traffic simulation software that can assess a large number of control strategies for a given road network, given various scenarios, in a matter of minutes. Effective traffic density estimation is crucial for the successful deployment of feedback algorithms for congestion control.
Aurora Road Network Modeler (RNM) is an open-source macrosimulation tool set for operational planning and management of freeway corridors. Aurora RNM employs the Cell Transmission Model (CTM) for road networks, extended to support multiple vehicle classes. It allows dynamic filtering of measurement data coming from traffic sensors for the estimation of traffic density. In this capacity, it can be used for detection of faulty sensors. The virtual sensor infrastructure of Aurora RNM serves as an interface to the real world measurement devices, as well as a simulation of such measurement devices.",2010,0, 5048,Analytically redundant controllers for fault tolerance: Implementation with separation of concerns,"Diversity or redundancy based software fault tolerance encompasses the development of application domain specific variants and error detection mechanisms. In this regard, this paper presents an analytical design strategy to develop the variants for a fault tolerant real-time control system. This work also presents a generalized error detection mechanism based on the stability performance of a designed controller using the Lyapunov Stability Criterion. The diverse redundant fault tolerance is implemented with an aspect oriented compiler to separate and thus reduce this additional complexity. A Mathematical Model of an Inverted Pendulum System has been used as a case study to demonstrate the proposed design framework.",2010,0, 5049,A new Resampling algorithm for generic particle filters,"This paper is devoted to the resampling problem of particle filters. We first demonstrate the performance of the classical Resampling algorithm (also called the systematic resampling algorithm) using a novel metaphor, through which the existing defects of the Resampling algorithm are vividly reflected. In order to avoid these defects, the exquisite resampling (ER) algorithm is introduced, which involves some exquisite actions such as comparing the weights by stages and generating the new particles based on a quasi-Monte Carlo method. Simulations indicate that the proposed ER algorithm can reduce the sample impoverishment effectively and improve the accuracy of estimation evidently, which confirms that the ER algorithm is a competitive alternative to the Resampling algorithm.",2010,0, 5050,A Modified History Based Weighted Average Voting with Soft-Dynamic Threshold,"Voting is a widely used fault-masking technique for real time systems. Several voting algorithms exist in the literature. In this paper, a survey of a few existing voting algorithms is presented and a modified history based weighted average voting algorithm with a soft-dynamic threshold value is proposed with two different weight assignment techniques, which combines all the advantages of the surveyed voting algorithms but overcomes their deficiencies. The proposed algorithm with both types of weight assignment techniques gives better performance compared to the existing history based weighted average voting algorithms in the presence of intermittent errors. In the presence of permanent errors, when all the modules are fault prone, the proposed algorithm with the first type of weight assignment technique gives higher availability than all the surveyed voting algorithms.
If at least one module is fault free, this algorithm gives almost 100% safety and also a higher range of availability than the other surveyed voting algorithms.",2010,0, 5051,A Dynamic Adjustment Mechanism with Heuristic for Thread Pool in Middleware,"Thread pooling is an important technique of performance optimization in middleware. In consideration of the features of Internet applications, the configuration of the thread pool in middleware needs to be adjusted dynamically on the basis of perceiving the run-time context. However, how to find effective influencing factors which give the adjustment better adaptability remains to be discussed further. The paper first presents a thread pool model in the context of a Web application server based on an M/M/1/K/∞ FCFS queuing system. A dynamic mechanism which imports certain heuristic factors reflecting the run-time context is studied to adjust the size of the thread pool so as to adapt well to changes of resources. The prototypical experiments verify the effective influence that heuristic factors exert on the adjustment of the thread pool size and show that the presented mechanism can be helpful for improving the performance of the system.",2010,0, 5052,Business excellence in non-profit company,"Success in achieving positive business results and employee satisfaction, which means overall company satisfaction, success and recognition on the market, depends a lot on the quality of integrated business processes, their well estimated optimization and implemented measures for tracking and monitoring the processes. For this reason companies need to dedicate much more time not only to carrying out routine jobs and procedures but also to analyzing ways of performing them within the company and to noting down each step in the job execution procedures, i.e. to recording business processes, as the first step. The second step would be to analyze the recorded processes and to optimize them as much as possible to become more efficient, to dissipate less time and give more satisfaction to employees. And the third step would be to introduce and establish the system for tracking and monitoring the optimized processes, and to evaluate the results. This article gives an overview of the procedure for introducing process management, integrated with/within the information system, in a non-profit company.",2010,0, 5053,Streamlining collection of training samples for object detection and classification in video,"This paper is concerned with object recognition and detection in computer vision. Many promising approaches in the field exploit the knowledge contained in a collection of manually annotated training samples. In the resulting paradigm, the recognition algorithm is automatically constructed by some machine learning technique. It has been shown that the quantity and quality of positive and negative training samples is critical for good performance of such approaches. However, collecting the samples requires tedious manual effort which is expensive in time and prone to error. In this paper we present the design and implementation of a software system which addresses these problems. The system supports an iterative approach whereby the current state-of-the-art detection and recognition algorithms are used to streamline the collection of additional training samples.
The presented experiments have been performed within the framework of a research project aiming at automatic detection and recognition of traffic signs in video.",2010,0, 5054,Real-time detection and recognition of traffic signs,"Automated recognition of traffic signs is becoming a very interesting area in computer vision, with clear possibilities for its application in the automotive industry. For example, it would be possible to design a system which could recognize the current speed limit on the road and notify the driver in an appropriate manner. In this paper we deal with methods for automated localization of certain traffic signs, and classification of those signs according to the official designations. We propose two different approaches for determining the current speed limit after the sign has been localized. A demo software system was developed to demonstrate the presented methods. Finally, we compare results obtained from the developed software, and discuss the influence of different parameters on recognition performance and quality.",2010,0, 5055,Development on preventive maintenance management system for expressway asphalt pavements,"Given that no expressway pavement preventive maintenance management system currently exists at home or abroad, and based on the technology and theory obtained by the author and on the demands and process of expressway asphalt pavement preventive maintenance management, a preventive maintenance management system for expressway asphalt pavement (EPMMS (V1.0)) was developed. The work or functions of expressway maintenance quality evaluation, pavement performance prediction, optimum selection of preventive maintenance treatment, optimal timing determination, and post-evaluation could be realized or auxiliarily realized. The theoretical foundation, development environment and the whole framework were expounded in the paper, the development and realization process of the five sub-systems was introduced in detail, and the system testing and application were briefly introduced. The test results showed that this system could meet the demands of the software product register test code. Preliminary application showed the system was correct and efficient, software support would be provided for expressway pavement preventive maintenance management, the pavement preventive maintenance management would be more standardized and convenient, and the demands of the expressway maintenance management department could be greatly met by this system.",2010,0, 5056,Design and implementation of a direct RF-to-digital UHF-TV multichannel transceiver,"This manuscript presents the design and implementation of a direct UHF digital transceiver that provides direct sampling of the UHF-TV input signal spectrum. Our SDR-based approach is based on a pair of high-speed ADC/DAC devices along with a channelizer, a dechannelizer and a channel management unit, which is capable of inserting, deleting and/or commuting individual RF-TV channels. Simulation results and in-lab measurements assess that the proposed system is able to receive, manipulate and retransmit a real UHF-TV spectrum at a negligible quality penalty cost.",2010,0, 5057,Perceptual-based coding mode decision,"The framework of rate-distortion optimization (RDO) has been widely adopted for video coding to achieve a good trade-off between bit-rate and distortion. However, objective distortion metrics such as mean square error traditionally used in this framework are poorly correlated with perceptual video quality.
To address this issue, we incorporate the structural similarity index as a quality metric into the framework and develop a predictive Lagrange multiplier selection technique to resolve the chicken-and-egg dilemma of perceptual-based RDO. The resulting perceptual-based RDO is then applied to H.264 intra mode decision as an illustration of the application of the proposed technique. Given a perceptual quality level, a 5%-10% bit rate reduction over the JM reference software of H.264 is achieved. Subjective evaluation further confirms that, at the same bit-rate, the proposed perceptual RDO preserves image details and prevents block artifacts better than the traditional RDO.",2010,0, 5058,Bit-rate reduction using psychoacoustical masking model in Frequency Domain Linear Prediction based Audio Codec,"Frequency Domain Linear Prediction (FDLP) gives an approximation of the Hilbert envelopes of a signal. The FDLP based codec works with long temporal segments and keeps the information carried by the time-domain envelopes very well. The codec gives good quality of the reconstructed signal, but its coding efficiency is not sufficient. Here frequency masking is introduced to the FDLP based codec to reduce the bit-rate. Frequency masking is a hearing phenomenon in which the hearing threshold of a sound increases if an intense sound is present simultaneously. The psychoacoustic model is used to estimate the hearing threshold and the absolute threshold of hearing (ATH) of the FDLP carrier signals, and the bit allocation for the frequency sub-band FDLP carrier signals is calculated according to the threshold and ATH. A 5% bit-rate reduction is obtained with the application of the frequency masking. Perceptual Evaluation of Audio Quality (PEAQ) and Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) tests are carried out to evaluate the performance.",2010,0, 5059,Using space time coding MIMO system for software-defined radio to improve BER,"The demand for mobile communication systems with global coverage, interfacing with various standards and protocols, high data rates and improved link quality for a variety of applications has dramatically increased in recent years. The Software-Defined Radio (SDR) is the recent proposal to achieve these goals. In SDR, new concepts and methods, which can optimally exploit the limited resources, are necessary. The multiple antenna system is one of those, which resorts to transmit strategies referred to as Space-Time Codes (STCs). It gives high quality error performance by incorporating spatial and temporal redundancy. The paper discusses space-time block codes and highlights the trade-off between the number of transmitter and receiver antennas and the bit error rate. We simulate MIMO systems using M-ary Phase Shift Keying (PSK) and compare the results with Single Input Single Output (SISO) systems. This analysis can be helpful in choosing the desired modulation scheme depending upon the Bit error rate (BER) and Signal to noise ratio (SNR) requirements of the service.",2010,0, 5060,Metrics selection for fault-proneness prediction of software modules,"It would be valuable to use metrics to identify the fault-proneness of software modules. It is important to select the most appropriate metric subset for fault-proneness prediction. We proposed an approach for metrics selection, which firstly utilized correlation analysis to eliminate highly correlated metrics and then ranked the remaining metrics based on gray relational analysis.
Three classifiers, namely the logistic regression model, NaiveBayes, and J48, were utilized to empirically investigate the usefulness of the selected metrics. Our results, based on a public domain NASA data set, indicate that 1) the proposed method for metrics selection is effective, and 2) using 3-4 metrics achieves balanced performance for fault-proneness prediction of software modules.",2010,0, 5061,Design of the pronunciation dictionary for an English CAPT system,"Computer Assisted Pronunciation Training (CAPT) systems can judge the overall pronunciation quality and point out pronunciation errors by recognition of the user's utterance to improve the users' oral ability. However, the performance of current error detection systems can't satisfy the users' expectations. This paper carefully designs the pronunciation dictionary of the speech recognition engine of a CAPT system based on observation of the spectral properties of some phonemes. First, the phonetic symbol /R/ utilized in some modern English dictionaries is replaced by two symbols, /R/ (as in the word result) and /ER/ (as in the word bear), due to the large spectral deviation in these two cases. Moreover, rather than using a consonant sequence to represent a consonant cluster, we use one symbol to represent the whole for some consonant clusters, according to the phonetic properties of the consonants. Finally, we divide the phoneme /l/ into clear /l/ and dark /l/ according to the distinction in the manner of articulation and the place of articulation. The same is done for /n/. As a result, we get a new pronunciation dictionary which is more suitable for the speech recognition engine of the CAPT system. The HMMs are trained by using the TIMIT database, and evaluated on a database involving 40 Chinese undergraduates. Experimental results illustrate the effectiveness of the new pronunciation dictionary.",2010,0, 5062,Remote automatic selection of suitable frequency intervals for improved leak detection,"Constant improvement of leak detection techniques in pipe systems used for liquid transportation is a priority for companies and authorities around the world. If a pipe presents leakage problems, the liquid which is lost generates specific signals which are transmitted in the material of the pipe. Using accelerometers, these signals can be recorded and analyzed with the purpose of identifying the location of the leak. An important data analysis tool which helps in the process of leak detection is the Cross Correlation Function (CCF). It is calculated between two simultaneously recorded specific signals with the purpose of time delay estimation (TDE). TDE leads to the identification of the leak position. However, a traditional implementation of the CCF may not be satisfactory. Recorded signals are affected by noise coming from pipe elbows, junctions or traffic. Further improvements are necessary. This paper presents an implementation of an algorithm which improves the quality of the calculated CCF. Being part of a more complex software application, the algorithm is based on the calculation of the coherence function (CF). The application will automatically select two frequency intervals (bands) for filtering the leak signals. After the filtering process, the calculation of the CCF shows an increased quality. The application can be accessed from a distance if a remote user needs to study or save the CCF results.
No filtering settings are required and the method assures good results.",2010,0, 5063,Reusable Specification of Agent-Based Models,"This paper identifies and characterizes several important impediments to reusable agent-based models. It describes a new class of programming languages that address these problems by allowing abstract specification of incomplete but accurate models and by enabling automated construction of agent-based simulations from models. The primary innovation underlying this approach is the use of property-based types to avoid modeling limitations inherent in the object-oriented paradigm. A property-based approach enables independent validation of constituent models, abstract specification of actors independent of their simulations, and certain modifications to model without revalidation.",2010,0, 5064,Qualitative performance control in supervised IT infrastructures,"Performability control of IT systems still lacks theoretically well-founded approaches that fit well to enterprise system management solutions. We propose a methodology for designing compact qualitative, state-based predictive performability control that use instrumentation provided by typical system monitoring frameworks. We identify the main systemic insufficiencies of current monitoring tools that hinder designing trustworthy fine-granular controls.",2010,0, 5065,Quantifying effectiveness of failure prediction and response in HPC systems: Methodology and example,"Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors.",2010,0, 5066,Analyzing router performance using network calculus with external measurements,"In this paper we present results from an extensive measurement study of various hardware and (virtualized) software routers using several queueing strategies, i.e. First-Come-First-Served and Fair Queueing. In addition to well-known metrics such as packet forwarding performance, per packet processing time, and jitter, we apply network calculus models for performance analysis. This includes the Guaranteed Rate model for Integrated Services as well as the Packet Scale Rate Guarantee model for Differentiated Services. Using a measurement approach that provides a means to estimate rate and error term of a real node, we propose an interpretation of router performance based on these parameters taking packet queueing and scheduling into account. Such estimated parameters should be used to make the analysis of real networks more accurate. 
We underpin the applicability of this approach by comparing analytical results of concatenated routers to real world measurements.",2010,0, 5067,Evaluating software reliability: Integration of MCDM and data mining,"Although software reliability can be evaluated by applying data mining techniques to software engineering data to identify software defects or faults, it is difficult to select the best algorithm among the numerous data mining techniques. The goal of this paper is to propose a multiple criteria decision making (MCDM) framework for data mining algorithm selection in software reliability management. Through the application of the MCDM method, this paper experimentally compares the performance of several popular data mining algorithms using 13 different performance metrics over 10 public domain software defect datasets from the NASA Metrics Data Program (MDP) repository. The results of the MCDM methods agree on the top-ranked classification algorithms and differ about some classifiers for software defect datasets.",2010,0, 5068,An empirical study of the influence of software Trustworthy Attributes to Software Trustworthiness,"Software Trustworthiness is a hotspot problem in software engineering, and Software Trustworthy Attributes are the basis of Software Trustworthiness. Software defects are the basic reason that influences Software Trustworthiness. Therefore, in this thesis we attempt to utilize text classification technology to classify historical software defects according to Software Trustworthy Attributes, in order to analyze the influence of Software Trustworthy Attributes on Software Trustworthiness, to identify the pivotal attribute among the Software Trustworthy Attributes, and to analyze the influence of Software Trustworthy Attributes on Software Trustworthiness in different versions of Gentoo Linux.",2010,0, 5069,A data mining based method: Detecting software defects in source code,"With the expansion of software size and complexity, how to detect defects becomes a challenging problem. This paper proposes a defect detection method which applies data mining techniques in source code to detect two types of defects in one process. The two types of defects are rule-violating defects and copy-paste related defects which may include semantic defects. During the process, this method can also extract implicit programming rules without prior knowledge of the software and detect copy-paste segments with different granularities. The method is evaluated with the Linux kernel that contains more than 4 million lines of C code. The result shows that the resulting system can quickly detect many programming rules and violations of the rules. After using the novel pruning techniques, it greatly reduces the effort of manually checking violations, so that a large number of false positives are effectively eliminated. As an illustrative example of its effectiveness, a case study shows that among the top 50 violations reported by the proposed model, 11 defects can be confirmed after examining the source code.",2010,0, 5070,Time series analysis for bug number prediction,"Monitoring and predicting the increasing or decreasing trend of bug number in a software system is of great importance to both software project managers and software end-users. For software managers, accurate prediction of the bug number of a software system will assist them in making timely decisions, such as effort investment and resource allocation.
For software end-users, knowing possible bug number of their systems will enable them to take timely actions in coping with loss caused by possible system failures. To accomplish this goal, in this paper, we model the bug number data per month as time series and, use time series analysis algorithms as ARIMA and X12 enhanced ARIMA to predict bug number, in comparison with polynomial regression as the baseline. X12 is the widely used seasonal adjustment algorithm proposed by U.S. Census. The case study based on Debian bug data from March 1996 to August 2009 shows that X12 enhanced ARIMA can achieve the best performance in bug number prediction. Moreover, both ARIMA and X12 enhanced ARIMA outperform the baseline as polynomial regression.",2010,0, 5071,GUI test-case generation with macro-event contracts,"To perform a comprehensive GUI testing, a large number of test cases are needed. This paper proposes a GUI test-case generation approach that is suitable for system testing. The key idea is to extend high-level GUI scenarios with contracts and use the contracts to infer the ordering dependencies of the scenarios. From the ordering dependencies, a state machine of the system is constructed and used to generate test cases automatically. A case study is conducted to investigate the quality of the test cases generated by the proposed approach. The results showed that, in comparison to creating test cases manually, the proposed approach can detect more faults with less human effort.",2010,0, 5072,A computer-vision-assisted system for Videodescription scripting,"We present an application of video indexing/summarization to produce Videodescription (VD) for the blinds. Audio and computer vision technologies can automatically detect and recognize many elements that are pertinent to VD which can speed-up the VD production process. We have developed and integrated many of them into a first computer-assisted VD production software. The paper presents the main outcomes of this R&D activity started 5 years ago in our laboratory. Up to now, usability performance on various video and TV series types have shown a reduction of up to 50% in the VD time production process.",2010,0, 5073,Numerical simulation of metal interconnects of power semiconductor devices,"This paper presents a methodology and a software tool - R3D - for extraction, simulations, analysis, and optimization of metal interconnects of power semiconductor devices. This tool allows an automated calculation of large area device Rdson value, to analyze current density and potential distributions, to design sense device, and to optimize a layout to achieve a balanced and optimal design. R3D helps to reduce the probability of a layout error, and drastically speeds up and improves the quality of layout design.",2010,0, 5074,A quality of service framework for adaptive and dependable large scale system-of-systems,"There is growing recognition within industry that for system growth to be sustainable, the way in which existing assets are used must be improved. Future systems are being developed with a desire for dynamic behaviour and a requirement for dependability at mission critical and safety critical levels. These levels of criticality require predictable performance and as such have traditionally not been associated with adaptive systems. The software architecture proposed for such systems is based around a publish/subscribe model, an approach that, while adaptive, does not typically support critical levels of performance. 
There is, however, scope for dependability within such architectures through the use of Quality of Service (QoS) methods. QoS is used in systems where the distribution of resources cannot be decided at design time. A QoS based framework is proposed for providing adaptive and dependable behaviour for future large-scale system-of-systems. Initial simulation results are presented to demonstrate the benefits of QoS.",2010,0, 5075,A study of the internal and external effects of concurrency bugs,"Concurrent programming is increasingly important for achieving performance gains in the multi-core era, but it is also a difficult and error-prone task. Concurrency bugs are particularly difficult to avoid and diagnose, and therefore in order to improve methods for handling such bugs, we need a better understanding of their characteristics. In this paper we present a study of concurrency bugs in MySQL, a widely used database server. While previous studies of real-world concurrency bugs exist, they have centered their attention on the causes of these bugs. In this paper we provide a complementary focus on their effects, which is important for understanding how to detect or tolerate such bugs at run-time. Our study uncovered several interesting facts, such as the existence of a significant number of latent concurrency bugs, which silently corrupt data structures and are exposed to the user potentially much later. We also highlight several implications of our findings for the design of reliable concurrent systems.",2010,0, 5076,1st workshop on fault-tolerance for HPC at extreme scale FTXS 2010,"With the emergence of many-core processors, accelerators, and alternative/heterogeneous architectures, the HPC community faces a new challenge: a scaling in number of processing elements that supersedes the historical trend of scaling in processor frequencies. The attendant increase in system complexity has first-order implications for fault tolerance. Mounting evidence invalidates traditional assumptions of HPC fault tolerance: faults are increasingly multiple-point instead of single-point and interdependent instead of independent; silent failures and silent data corruption are no longer rare enough to discount; stabilization time consumes a larger fraction of useful system lifetime, with failure rates projected to exceed one per hour on the largest systems; and application interrupt rates are apparently diverging from system failure rates.",2010,0, 5077,Decoding STAR code for tolerating simultaneous disk failure and silent errors,"As storage systems grow in size and complexity, various hardware and software component failures inevitably occur, resulting in disk malfunctions and failures, as well as silent errors. Existing techniques and schemes overcome the failures and silent errors in a separate fashion. In this paper, we advocate using the STAR code as a unified and systematic mechanism to simultaneously tolerate failures on one disk and silent errors on another. By exploring the unique geometric structure of the STAR code, we propose a novel efficient decoding algorithm - EEL.
Both theoretical and experimental performance evaluations show that EEL consistently outperforms a naive Try-and-Test approach by large factors in overall decoding throughput.",2010,0, 5078,Assignments acceptance strategy in a Modified PSO Algorithm to elevate local optima in solving class scheduling problems,Local optima in optimization problems describe a state where no small modification of the current best solution will produce a solution that is better. This situation will make the optimization algorithm unable to find a way to the global optimum and finally the quality of the generated solution is not as expected. This paper proposes an assignment acceptance strategy in a Modified PSO Algorithm to elevate local optima in solving class scheduling problems. The assignments which reduce the value of the objective function will be totally accepted and the assignments which increase or maintain the value of the objective function will be accepted based on an acceptance probability. Five combinations of acceptance probabilities for both types of assignments were tested in order to see their effect in helping particles move out from local optima and also their effect on the final penalty of the solution. The performance of the proposed technique was measured based on percentage penalty reduction (%PR). Five sets of data from the International Timetabling Competition were used in the experiment. The experimental results show that the acceptance probability of 1 for neutral assignments and 0 for negative assignments managed to produce the highest percentage of penalty reduction. This combination of acceptance probabilities was able to elevate particles stuck at local optima which is one of the unwanted situations in solving optimization problems.,2010,0, 5079,A novel approach for Online signature verification using fisher based probabilistic neural network,"The rapid advancements in communication, networking and mobility have entailed an urgency to further develop basic biometric capabilities to face security challenges. Online signature authentication is increasingly gaining interest thanks to the advent of high quality signature devices. In this paper, we propose a new approach for automatic authentication using dynamic signatures. The key features consist in using a powerful combination of linear discriminant analysis (LDA) and a probabilistic neural network (PNN) model together with an appropriate decision making process. LDA is used to reduce the dimensionality of the feature space while maintaining discrimination between users. Based on its results, a PNN model is constructed and used for matching purposes. Then a decision making process relying on an appropriate decision rule is performed to accept or reject a claimed identity. Data sets from SVC 2004 have been used to assess the performance of the proposed system. The results show that the proposed method competes with and even outperforms existing methods.",2010,0, 5080,Joint throughput and packet loss probability analysis of IEEE 802.11 networks,"Wireless networks have grown in popularity over the past number of years. Usually wireless networks operate according to IEEE 802.11, which specifies the protocols of the physical and MAC layers. A number of different studies have been conducted on the performance of IEEE 802.11 wireless networks. However, questions of QoS control in such networks have not received sufficient attention from the research community.
This paper considers modeling of the QoS of IEEE 802.11 networks defined in terms of throughput requirements and packet loss probability limitations. The influence of the sizes of packets being transmitted through the network on the QoS is investigated. Extensive simulations confirm the results obtained from the mathematical model.",2010,0, 5081,Resilient workflows for high-performance simulation platforms,"Workflow systems are considered here to support large-scale multiphysics simulations. Because the use of large distributed and parallel multi-core infrastructures is prone to software and hardware failures, the paper addresses the need for error recovery procedures. A new mechanism based on asymmetric checkpointing is presented. A rule-based implementation for a distributed workflow platform is detailed.",2010,0, 5082,Distributed diagnosis in uncertain environments using Dynamic Bayesian Networks,"Model-based diagnosis for industrial applications has to be efficient, and deal with modeling approximations and measurement noise. This paper presents a distributed diagnosis scheme, based on Dynamic Bayesian Networks (DBNs), that generates globally correct diagnosis results through local analysis, by only communicating a minimal number of measurements among diagnosers. We demonstrate experimentally that our distributed diagnosis scheme is computationally more efficient than its centralized counterpart, and it does not compromise the accuracy of the diagnosis results.",2010,0, 5083,Locality considerations in exploring custom instruction selection algorithms,"This paper explores custom instruction identification methodologies for critical code segments of embedded applications, considering locality in the selection algorithms. We propose two policies for the selection of custom instructions. In the first approach, called local selection, structurally equal custom instructions are classified into the same group, called a template, and the selection of custom instructions is done locally in each template. On the other hand, in the second approach no classification is made for the selection and the exploration is made globally on all the enumerated matches. We describe a design flow to establish the desired performance. We study the effects of locality on the overall speed and area of the system. Our experiments show that locally selected custom instructions give better results in terms of both performance and performance per area.",2010,0, 5084,Quantification of myocardial perfusion in 3D SPECT images- stress/rest volume differences,"Our attempt consists of modelling the heart left ventricle in stress and rest situations, using the myocardial scintigraphic data, and focuses on how to demonstrate differences in the obtained 3D stress/rest images. 70 cardiac patients had completed myocardium tests by Tc-99m tetrofosmin and a GE-Starcam 4000 SPECT gamma camera. SPECT (Single Photon Emission Computed Tomography) slices were created and used. The myocardial perfusion was estimated by comparing those slices and the suspicion of an ischemia was indicated. 3D myocardium images were reconstructed by GE Volumetrix software in the GE Xeleris processing system by the FBP reconstruction method, a Hanning frequency 0.8 filter and a ramp filter, and transferred in a Dicom format. The Dicom file, for each patient and each phase, is imported to MATLAB 7.8 (R2009a). A series of isocontour surfaces were studied, in order to identify the appropriate threshold value, which isolates the myocardium surface from the rest area of the image.
Based on the previously calculated threshold value, the myocardium volume was evaluated and reconstructed in a 3D image. The possible difference between the rest and stress data of the 3D images, in voxels, was calculated using MATLAB image processing analysis, followed by quantification and analysis of the differences. We tried to determine an index of quantification and define the global quantitative defect size as a fraction of the myocardial volume area in 3D images that will give confidence in cardiac perfusion efficiency recognition by SPECT.",2010,0, 5085,A component based approach for modeling expert knowledge in engine testing automation systems,"Test automation systems used for developing combustion engines comprise hardware components and the software functionality they depend on. Such systems usually perform similar tasks; they comprise similar hardware and execute similar software. Regarding their details, however, literally no two systems are exactly the same. In order to support such variations, the automation system has to be customized accordingly. Without a tool that properly supports both customization and standardization of functionality, customization can be time consuming and error-prone. In this paper we describe a modeling-driven approach that is based on components with a hardware and software view and that allows defining standard functionality for physical hardware. We show how, in this way, most of the automation system's standard functionality can be generated automatically, while still allowing custom functionality to be added.",2010,0, 5086,A hierarchical fault tolerant architecture for component-based service robots,"Due to the benefits of reusability and productivity, the component-based approach has become the primary technology in service robot system development. However, because component developers cannot foresee the integration and operating conditions of the components, they cannot provide appropriate fault tolerance functions, which are crucial for the commercial success of service robots. The recently proposed robot software frameworks such as MSRDS (Microsoft Robotics Developer Studio), RTC (Robot Technology Component), and OPRoS (Open Platform for Robotic Services) are very limited in fault tolerance support. In this paper, we present a hierarchically-structured fault tolerant architecture for component-based robot systems. The framework integrates widely-used, representative fault tolerance measures for fault detection, isolation, and recovery. System integrators can construct fault tolerance applications from non-fault-aware components, by declaring fault handling rules in configuration descriptors and/or adding simple helper components, considering the constraints of the components and the operating environment. To demonstrate the feasibility and benefits, a fault tolerant framework engine and test robot systems are implemented for OPRoS. The experiment results with various simulated fault scenarios validate the feasibility, effectiveness and real-time performance of the proposed approach.",2010,0, 5087,Protection Principles for Future Microgrids,"Realization of future low-voltage (LV) microgrids requires that all technical issues, such as power and energy balance, power quality and protection, are solved. One of the most crucial is the protection of the LV microgrid during normal and island operation.
In this paper, protection issues of LV microgrids are presented and extensions to the novel LV-microgrid-protection concept have been developed based on simulations with the PSCAD simulation software. Essential in the future protection concept for LV microgrids will be the utilization of high-speed, standard, e.g., IEC-61850-based communication to achieve fast, selective, and reliable protection operation.",2010,0, 5088,A Hybrid Approach for Detection and Correction of Transient Faults in SoCs,"Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve detection and correction capabilities for faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches. The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit in a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead as a purely hardware-based one.",2010,0, 5089,A hierarchical mixture model voting system,"It is important to improve voting systems in current software fault tolerance research. In this paper, we propose a hierarchical mixture model voting system (HMMVS). This is an application of the hierarchical mixtures of experts (HME) architecture. In HMMVS, individual voting models are used as experts. During the training of HMMVS, an Expectation-Maximization (EM) algorithm is employed to estimate the parameters of the HME architecture. Experiments illustrate that our approach performs quite well after training, and better than a single classical voting system. We show that the method can automatically select the most appropriate lower-level model for the data and performs well in the voting procedure.",2010,0, 5090,Combined heading technology in vehicle location and navigation system,"As an important parameter in vehicle location and navigation systems, heading measurement requires a higher precision. The paper aims at research on the fusion of heading information obtained through a magnetic compass, an angular rate gyro and GPS. Firstly, an integrated filter model has been designed to adapt to the error characteristics of the gyroscope, magnetic compass and GPS. Then, considering loss of GPS, a Preposed Detection Link (PDL) is adopted to inhibit the low-frequency magnetic interference of the magnetic compass and to improve the accuracy of information integration. Simulation with the gyroscope, magnetic compass and GPS shows that not only can the high-frequency magnetic interference and the accumulation of errors be inhibited, but the long-term low-frequency interference can also be inhibited by the integrated filter, and the PDL-based compensation filter can provide more accurate heading values for vehicles as an aiding scheme when the GPS signal quality is not good.",2010,0, 5091,Comparative analysis of input parameters using wavelet transform for voltage sag disturbance classification,"Voltage sag is one of the major power quality disturbances today.
Significant causes of voltage sag include power system faults, starting of induction motors and energizing of transformers. This paper focuses on a comparative analysis carried out to determine the best input parameters to be employed in the support vector machine (SVM) method for voltage sag cause classification. A wavelet transform is carried out to extract the features of these voltage sags, which are used as the input to the SVM. Two different types of inputs are used, namely the energy level and the min-max values of the wavelet. Comparisons between these input parameters are made to determine the best method which gives the best prediction values. Training and testing data based on the standard IEEE 30 bus distribution system are simulated using PSCAD software. The results show that the performance of the min-max wavelet values as the input to the SVM is superior to the energy level input in terms of accuracy and computational time.",2010,0, 5092,Queueing models based performance evaluation approach for Video On Demand back office system,"Since queueing is a common behavior in computer systems, the analysis of queueing models has become a mature theory that is widely applied in performance evaluation. In our work, a queueing model based approach is used to manage the performance attributes of a Video On Demand (VOD) back office system, which has been deployed and put into service on a province-scale cable TV network. Extracted from the Software Performance Engineering (SPE) methodology, the approach is more compact and efficient due to its two-phase evaluation process, which consists of design evaluation, used to estimate performance measures for the analysis of design improvement, and deployment evaluation, used to determine the hardware requirements for the establishment of high QoS. As a significant research and practice, the approach can be used in any development process of complex software systems which have the same features of collaboration, distribution and third-party integration as our VOD back office system.",2010,0, 5093,Analyzing the Relationships between some Parameters of Web Services Reputation,"In this paper, we provide an analysis of the impacts of some reputation parameters that an agent-based Web service holds while being active in the environment. To this end, we deploy a reputation model that ranks the Web services with respect to their popularity in the network of users. We model and analyze the arrival of requests and study their impacts on the overall reputation. The Web services may be encouraged to handle the peak loads by gathering into a group. Besides theoretical discussions, we also provide significant results, which elaborate more on the details of the system parameters. We extend the details of these results to empirical results and link the observations of the implemented environment to the results that we theoretically obtain.",2010,0, 5094,Software project schedule variance prediction using Bayesian Network,"The major objective of software engineering is to guarantee the delivery of high-quality software on time and within budget. But with the development of software technology and the rapid extension of application areas, the size and complexity of software are increasing so quickly that cost and schedule are often out of control. However, few groups or researchers have proposed an effective method to help project managers make reasonable project plans, resource allocations and improvement actions.
This paper proposes a Bayesian Network to solve the problem of predicting and controlling the software schedule in order to achieve proactive management. Firstly, we choose factors influencing the software schedule and determine some significant cause-effect relationships between factors. Then we analyze the data using statistical analysis and deal with the data using data discretization. Thirdly, we construct the Bayesian structure of the software schedule and learn the conditional probability table of the structure. The structure and conditional probability table constitute the model for software schedule variance. The model can be used not only to help project managers predict the probability of software schedule variance but also to guide software developers to make reasonable improvement actions. At last, an application shows how to use the model and the result proves the validity of the model. In addition, a sensitivity analysis is developed with the model to locate the most important factor of software schedule variance.",2010,0, 5095,The implement of ROFD model: Logical modeling for financial risk data,"Data modeling has great significance in software development. To know the quality of data modeling, we need to begin with the data model, which is the product of the data modeling. Our analysis shows that there are deficiencies in the current financial data model. Today's business data model only shows the amount of assets the enterprise holds and the proprietorship of the stockholders; this framework was built on the traditional accounting system, which is regarded as the golden rule in commerce, and it cannot show the risks of an enterprise's operations. With this problem in mind we have established a new data model at the conceptual level to reconstruct the conceptual schema of business databases, aimed at forecasting and disclosing future information about an enterprise's operations. We added special information, namely risk information, into the present data model, thus building a novel data model we call the “risk-oriented financial data model” (ROFD). This paper presents the implementation of the ROFD model. In this part of the work we first present the new ROFD model structure; secondly we identify the risk category and its relationships with others; thirdly we introduce the temporal analysis method in our modeling; and finally we illustrate the use of the ROFD model from an accounting view. The risk-oriented financial conceptual data model has the advantage of providing information about the risk of an enterprise to the users.",2010,0, 5096,The design of alarm and control system for electric fire prevention based on MSP430F149,"This paper introduces an electrical fire control system which is composed of a stand-alone electrical fire detector and electrical fire monitoring equipment. The detector, with fieldbus communication technologies, uses an MCU to realize main route leakage protection for a low-voltage 3-phase 4-wire system and, at the same time, performs the measurement, display and control of voltage, current, power, electric energy, temperature and other parameters; through the monitoring equipment, on which management software written in VC++ has been installed, the operation of the site can be monitored and alarm information can be detected in time.
With RS-485 or Ethernet communication, the system can be integrated with various kinds of standard power monitoring software, can cyclically monitor and control more than 250 detectors, and can record 100 types of faults and data with more than 12 months of storage time.",2010,0, 5097,Analysis and suppression for impact fault of hybrid machine tool based on preview control,"The impact fault of the hybrid machine tool was considered as the analysis object in this paper. The kinematics control principle of the hybrid machine tool was given as well as the causes of the impact based on the hybrid structure. The impact suppression method was given, which is based on the preview-control algorithm. It was confirmed by experiment that the impact was reduced owing to the application of the preview-control algorithm, and the performance of the control strategy was improved.",2010,0, 5098,Design and implementation of MVB protocol analyzer,"The Train Communication Network (TCN) is a high-performance local area network for information transfer of modern train control, state detection, fault diagnosis, passenger service and so on. The Multifunction Vehicle Bus (MVB) is a critical part of the TCN and the design of the MVB's protocol analysis tool plays a very important role. Supported by a friendly Windows graphical user interface, a powerful MVB protocol analysis tool is developed in this paper, and this tool currently fills a gap in technical applications. Starting from the MVB protocol, the frame construction of the MVB protocol analyzer and the function design of each module of the upper-level computer software are introduced. Finally this protocol analyzer is applied to the development and debugging process of state detection, fault diagnosis and vehicle equipment of the TCN.",2010,0, 5099,Realization of the ground resistance detector based on different frequency signals in DC system,"Several of the conventional ground resistance measurement methods, such as the AC method, the DC leakage method, etc., and their shortcomings are discussed. A new kind of ground resistance detector for DC systems is presented. The detector, with a C8051F041 as its core, adopts a measurement method based on different frequency signals. The proposed method can reduce the impact of branch distributed capacitance on ground fault location and improve the measurement sensitivity and precision of ground resistance. The principles and the hardware and software design are introduced emphatically. The experimental results and practical operations show that the new method for insulation detection and ground fault location is valid and reliable.",2010,0, 5100,A Service-Oriented Framework for GNU Octave-Based Performance Prediction,"Cloud/Grid environments are characterized by a diverse set of technologies used for communication, execution and management. Service Providers, in this context, need to be equipped with an automated process in order to optimize service provisioning through advanced performance prediction methods. Furthermore, existing software solutions such as GNU Octave offer a wide range of possibilities for implementing these methods. However, their automated use as services in the distributed computing paradigm includes a number of challenges from a design and implementation point of view. In this paper, a loosely coupled service-oriented implementation is presented, for taking advantage of software like Octave in the process of creating and using prediction models during the service lifecycle of a SOI.
In this framework, every method is applied as an Octave script in a plug-in fashion. The design and implementation of the approach is validated through a case study application which involves the transcoding of raw video to MPEG4.",2010,0, 5101,Time and Prediction based Software Rejuvenation Policy,"Operational software systems often experience an “aging” phenomenon, characterized by progressive performance degradation and a sudden hang/crash failure. Software rejuvenation is a proactive fault-tolerance strategy aimed at preventing unexpected outages due to aging. Existing rejuvenation approaches can be largely classified into two types: the Purely Time based Software Rejuvenation Policy (PTSRP) and the Purely Prediction based Software Rejuvenation Policy (PPSRP). In this paper, combining the merits of these two policies, a new rejuvenation policy, the Time and Prediction based Software Rejuvenation Policy (TPSRP), is proposed. In this policy, the time based rejuvenation policy is performed from the system's start or restart, during which the prediction based policy is also employed. To evaluate the effectiveness of this new policy, system availability and downtime cost are adopted and a stochastic reward net model is built. Numerical results show that under the same conditions, TPSRP can achieve higher availability and lower downtime cost than both the PTSRP and PPSRP.",2010,0, 5102,Computational Intelligent Gait-Phase Detection System to Identify Pathological Gait,"An intelligent gait-phase detection algorithm based on kinematic and kinetic parameters is presented in this paper. The gait parameters do not vary distinctly for each gait phase; therefore, it is complex to differentiate gait phases with respect to a threshold value. To overcome this intricacy, the concept of fuzzy logic was applied to detect gait phases with respect to fuzzy membership values. A real-time data-acquisition system was developed consisting of four force-sensitive resistors and two inertial sensors to obtain foot-pressure patterns and knee flexion/extension angle, respectively. The detected gait phases could be further analyzed to identify abnormality occurrences, and hence, the approach is applicable to determining accurate timing for feedback. The large amount of data required for quality gait analysis necessitates the utilization of information technology to store, manage, and extract required information. Therefore, a software application was developed for real-time acquisition of sensor data, data processing, database management, and a user-friendly graphical-user interface as a tool to simplify the task of clinicians. The experiments carried out to validate the proposed system are presented along with the results analysis for normal and pathological walking patterns.",2010,0, 5103,Wireless Smart Grid Design for Monitoring and Optimizing Electric Transmission in India,"Electricity losses in India during transmission and distribution are extremely high and vary between 30 and 45%. A wireless network based architecture is proposed in this paper for monitoring and optimizing the electric transmission and distribution system in India. The system consists of multiple smart wireless transformer sensor nodes, a smart controlling station, smart transmission line sensor nodes, and smart wireless consumer sensor nodes. The proposed software module also incorporates different data aggregation algorithms needed for the different pathways of the electricity distribution system.
This design incorporates effective solutions for multiple problems faced by India's electricity distribution system, such as varying voltage levels due to varying electrical consumption, power theft, manual billing, and transmission line faults. The proposed architecture is designed for a single phase electricity distribution system, and this design can be implemented for a three phase electricity distribution system with minor modifications. The implementation of this system will save a large amount of electricity, thereby making electricity available to more consumers than before in a highly populated country such as India. The proposed smart grid architecture, developed for the Indian scenario, delivers continuous real-time monitoring of energy utilization, efficient energy consumption, minimum energy loss, power theft detection, line fault detection, and automated billing.",2010,0, 5104,Notice of Retraction
Material inner defect detection by a vibration spectrum analysis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

This paper describes the possibility of non-destructive diagnostics of solid objects by software analysis of the vibration spectrum. Using the MATLAB platform, we can process and evaluate information from an accelerometer placed on the measured object. The analog signal is digitized by a special I/O device, the dSPACE CP-1104, and then processed offline with the FFT (Fast Fourier Transform). The power spectrum is then examined by the developed evaluation procedures and individual results are displayed in a bar graph. Looking at the results, there is an evident correlation between the spectrum and the examined object (inner defects).",2010,0, 5105,A novel feature selection technique for highly imbalanced data,"Two challenges often encountered in data mining are the presence of excessive features in a data set and unequal numbers of examples in the two classes in a binary classification problem. In this paper, we propose a novel approach to feature selection for imbalanced data in the context of software quality engineering. This technique consists of a repetitive process of data sampling followed by feature ranking and finally aggregating the results generated during the repetitive process. This repetitive feature selection method is compared with two other approaches: one uses a filter-based feature ranking technique alone on the original data, while the other uses the data sampling and feature ranking techniques together only once. The empirical validation is carried out on two groups of software data sets. The results demonstrate that our proposed repetitive feature selection method performs on average significantly better than the other two approaches, especially when the data set is highly imbalanced.",2010,0, 5106,A Neural network based approach for modeling of severity of defects in function based software systems,"There is a lot of work done on prediction of the fault proneness of software systems. But it is the severity of the faults that is more important than the number of faults existing in the developed system, as the major faults matter most to a developer and need immediate attention. Neural networks have already been applied in software engineering applications to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear sophisticated modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of inputs and outputs is not known. A key feature is that they learn the relationship between input and output through training. In this paper, five neural network based techniques are explored and a comparative analysis is performed for the modeling of the severity of faults present in function based software systems. NASA's public domain defect dataset is used for the modeling. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error and Accuracy values. It is concluded that, out of the five neural network based techniques, the Resilient Backpropagation algorithm based neural network is the best for classifying the software components into different levels of fault severity. Hence, the proposed algorithm can be used to identify modules that have major faults and require immediate attention.",2010,0, 5107,A Density Based Clustering approach for early detection of fault prone modules,"Quality of a software component can be measured in terms of fault proneness of data.
Quality estimations are made using fault proneness data available from previously developed similar type of projects and the training data consisting of software measurements. To predict fault-proneness of modules different techniques have been proposed which includes statistical methods, machine learning techniques, neural network techniques and clustering techniques. The aim of proposed approach is to investigate that whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules by using Density Based Clustering technique. This approach has been tested with real time defect datasets of NASA software projects named as PC1. Predicting faults early in the software life cycle can be used to achieve high software quality. The results show that the fusion of requirement and code metric is the best prediction model for detecting the faults as compared with mostly used code based model.",2010,0, 5108,A software-based self-test methodology for in-system testing of processor cache tag arrays,"Software-Based Self-Test (SBST) has emerged as an effective alternative for processor manufacturing and in-system testing. For small memory arrays that lack BIST circuitry such as cache tag arrays, SBST can be a flexible and low-cost solution for March test application and thus a viable supplement to hardware approaches. In this paper, a generic SBST program development methodology is proposed for periodic in-system (on-line) testing of L1 data and instruction cache memory tag arrays (both for direct mapped and set associative organization) based on contemporary March test algorithms. The proposed SBST methodology utilizes existing special performance instructions and performance monitoring mechanisms of modern processors to overcome cache tag testability challenges. Experimental results on OpenRISC 1200 processor core demonstrate that high test quality of contemporary March test algorithms is preserved while low-cost in-system testing in terms of test duration and test code size is achieved.",2010,0, 5109,Checkpointing virtual machines against transient errors,"This paper proposes VM-μCheckpoint, a lightweight software mechanism for high-frequency checkpointing and rapid recovery of virtual machines. VM-μCheckpoint minimizes checkpoint overhead and speeds up recovery by saving incremental checkpoints in volatile memory and by employing copy-on-write, dirty-page prediction, and in-place recovery. In our approach, knowledge of fault/error latency is used to explicitly address checkpoint corruption, a critical problem, especially when checkpoint frequency is high. We designed and implemented VM-μCheckpoint in the Xen VMM. The evaluation results demonstrate that VM-μCheckpoint incurs an average of 6.3% execution-time overhead for 50ms checkpoint intervals when executing the SPEC CINT 2006 benchmark.",2010,0, 5110,SBST for on-line detection of hard faults in multiprocessor applications under energy constraints,"Software-Based Self-Test (SBST) has emerged as an effective method for on-line testing of processors integrated in non safety-critical systems. 
However, especially for multi-core processors, the notion of dependability encompasses not only high quality on-line tests with minimum performance overhead but also methods for preventing the generation of excessive power and heat that exacerbate silicon aging mechanisms and can cause long term reliability problems. In this paper, we initially extend the capabilities of a multiprocessor simulator in order to evaluate the overhead in the execution of the useful application load in terms of both performance and energy consumption. We utilize the derived power evaluation framework to assess the overhead of SBST implemented as a test thread in a multiprocessor environment. A range of typical processor configurations is considered. The application load consists of some representative SPEC benchmarks, and various scenarios for the execution of the test thread are studied (sporadic or continuous execution). Finally, we apply in a multiprocessor context an energy optimization methodology that was originally proposed to increase battery life for battery-powered devices. The methodology reduces significantly the energy and performance overhead without affecting the test coverage of the SBST routines.",2010,0, 5111,Identifying effective software engineering (SE) team personality types composition using rough set approach,"This paper presents an application of rough sets in identifying effective personality-type composition in software engineering (SE) teams. Identifying effective personality composition in teams is important for determining software project success. It was shown that a balance of the personality types Sensing (S), Intuitive (N), Thinking (T) and Feeling (F) assisted teams in achieving higher software quality. In addition, Extroverted (E) members also had an impact on team performance. Even though the size of empirical data was too small, the rough-set technique allows the generation of significant personality-type composition rules to assist decision makers in forming effective teams. Future works will include more empirical data in order to develop predicting model of teams' performance based on personality types.",2010,0, 5112,A proposed reusability attribute model for aspect oriented software product line components,"Reusability assessment is vital for software product line due to reusable nature of its core components. Reusability being a high level quality attribute is more relevant to the software product lines as the entire set of individual products depend on the software product line core assets. Recent research proposes the use of aspect oriented techniques for product line development to provide better separation of concerns, treatment of crosscutting concerns and variability management. There are quality models available which relate the software reusability to its attributes. These models are intended to assess the reusability of software or a software component. The assessment of aspect oriented software and a core asset differs from the traditional software or component. There is need to develop a reusability model to relate the reusability attributes of aspect oriented software product line assets. This research work is an effort towards the development of reusability attribute model for software product line development using aspect oriented techniques.",2010,0, 5113,Establishing a defect prediction model using a combination of product metrics as predictors via Six Sigma methodology,"Defect prediction is an important aspect of the Product Development Life Cycle. 
The rationale for knowing the predicted number of functional defects early in the lifecycle, rather than just finding as many defects as possible during the testing phase, is to determine when to stop testing and ensure all the in-phase defects have been found in-phase before a product is delivered to the intended end user. It also ensures that wider test coverage is put in place to discover the predicted defects. This research aims to achieve zero known post-release defects in the software delivered to the end user by MIMOS Berhad. To achieve the target, the research effort focuses on establishing a test defect prediction model using the Design for Six Sigma methodology in a controlled environment where all the factors contributing to the defects of the product are within MIMOS Berhad. It identifies the requirements for the prediction model and how the model can benefit them. It also outlines the possible predictors associated with defect discovery in the testing phase. An analysis of the repeatability and capability of test engineers in finding defects is demonstrated. This research also describes the process of identifying the characteristics of the data that need to be collected and how to obtain them. The relationship of customer needs with the technical requirements of the proposed model is then clearly analyzed and explained. Finally, the proposed test defect prediction model is demonstrated via multiple regression analysis. This is achieved by incorporating testing metrics and development-related metrics as the predictors. The achievement of the whole research effort is described at the end of this study together with challenges faced and recommendations for future research work.",2010,0, 5114,Text-based classification incoming maintenance requests to maintenance type,"Classifying maintenance requests is one of the important tasks in large software systems, yet such requests are often not well classified. This is due to the difficulty software maintainers face in classifying them. The categorization of maintenance type determines whether a request is corrective, adaptive, perfective, or preventive, which is important for determining various quality factors of the system. In this paper we found that requests for maintenance support could be classified correctly into corrective and adaptive. We used two different machine learning techniques, namely Naïve Bayesian and Decision tree, to classify issues into the two types. The machine learning approach used features that could be effective in increasing the accuracy of the system. We used 10-fold cross validation to evaluate the system performance. 1700 issues from a shipment monitoring system were used to assess the accuracy of the system.",2010,0, 5115,Improved Adaptation of Web Service Composition Based on Change Impact Probability,"Due to the dynamic nature of the environment in service computing, it becomes more important to make Web service compositions able to self-adapt to changes in their environment. Achieving this goal is a challenging task, as the performance of a composite Web service will be decreased if adaptation happens frequently at runtime. In this paper, we improve the adaptation of Web service composition by predicting the probability that a change will actually affect the running composite service, which is named the change impact probability (CIP), and provide a methodology for computing the CIP based on a comprehensive QoS model of a QoS change and an execution state model of an executing composite service.
To bring the proposed approach to fruition, we develop a prototype system and apply the approach to a loan rate query service for illustration.",2010,0, 5116,FTDIS: A Fault Tolerant Dynamic Instruction Scheduling,"In this work, we target the robustness of a Tomasulo-type scheduler controller against the SEU fault model. The proposed fault-tolerant dynamic scheduling unit is named FTDIS, in which critical control data of the scheduler are protected from being driven to an unwanted state using Triple Modular Redundancy and majority voting approaches. Moreover, the feedback in the voters provides recovery capability for detected faults in the FTDIS, enabling both fault masking and recovery for the system. As the results of analytical evaluations demonstrate, the implemented FTDIS unit has over 99% fault detection coverage when fewer than 4 faults exist in critical bits. Furthermore, based on experiments, the FTDIS has a 200% hardware overhead compared to the primitive dynamic scheduling control unit and about 50% overhead in comparison to a full CPU core. The proposed unit also has no performance penalty during simulation. In addition, the experiments show that FTDIS consumes 98% more power than the primitive unit.",2010,0, 5117,A Grouping-Based Strategy to Improve the Effectiveness of Fault Localization Techniques,"Fault localization is one of the most expensive activities of program debugging, which is why recent years have witnessed the development of many different fault localization techniques. This paper proposes a grouping-based strategy that can be applied to various techniques in order to boost their fault localization effectiveness. The applicability of the strategy is assessed over Tarantula and a radial basis function neural network-based technique, across three different sets of programs (the Siemens suite, grep and gzip). The results suggest that the grouping-based strategy is capable of significantly improving fault localization effectiveness and is not limited to any particular fault localization technique. The proposed strategy does not require any information beyond what was already collected as input to the fault localization technique, and does not require the technique to be modified in any way.",2010,0, 5118,Mining Performance Regression Testing Repositories for Automated Performance Analysis,"Performance regression testing detects performance regressions in a system under load. Such regressions refer to situations where software performance degrades compared to previous releases, although the new version behaves correctly. In current practice, performance analysts must manually analyze performance regression testing data to uncover performance regressions. This process is both time-consuming and error-prone due to the large volume of metrics collected, the absence of formal performance objectives and the subjectivity of individual performance analysts. In this paper, we present an automated approach to detect potential performance regressions in a performance regression test. Our approach compares new test results against correlations of pre-computed performance metrics extracted from performance regression testing repositories.
Case studies show that our approach scales well to large industrial systems, and detects performance problems that are often overlooked by performance analysts.",2010,0, 5119,A Simulation Study on Some Search Algorithms for Regression Test Case Prioritization,"Test case prioritization is an approach aiming at increasing the rate of faults detection during the testing phase, by reordering test case execution. Many techniques for regression test case prioritization have been proposed. In this paper, we perform a simulation experiment to study five search algorithms for test case prioritization and compare the performance of these algorithms. The target of the study is to have an in-depth investigation and improve the generality of the comparison results. The simulation study provides two useful guidelines: (1) Two search algorithms, Additional Greedy Algorithm and 2-Optimal Greedy Algorithm, outperform the other three search algorithms in most cases. (2) The performance of the five search algorithms will be affected by the overlap of test cases with regard to test requirements.",2010,0, 5120,Software Quality Prediction Models Compared,"Numerous empirical studies confirm that many software metrics aggregated in software quality prediction models are valid predictors for qualities of general interest like maintainability and correctness. Even these general quality models differ quite a bit, which raises the question: Do the differences matter? The goal of our study is to answer this question for a selection of quality models that have previously been published in empirical studies. We compare these quality models statistically by applying them to the same set of software systems, i.e., to altogether 328 versions of 11 open-source software systems. Finally, we draw conclusions from quality assessment using the different quality models, i.e., we calculate a quality trend and compare these conclusions statistically. We identify significant differences among the quality models. Hence, the selection of the quality model has influence on the quality assessment of software based on software metrics.",2010,0, 5121,An Improved Regression Test Selection Technique by Clustering Execution Profiles,"In order to improve the efficiency of regression testing, many test selection techniques have been proposed to extract a small subset from a huge test suite, which can approximate the fault detection capability of the original test suite for the modified code. This paper presents a new regression test selection technique by clustering the execution profiles of modification-traversing test cases. Cluster analysis can group program executions that have similar features, so that program behaviors can be well understood and test cases can be selected in a proper way to reduce the test suite effectively. An experiment with some real programs is designed and implemented. The experiment results show that our approach can produce a smaller test suite with most fault-revealing test cases in comparison with existing selection techniques.",2010,0, 5122,Data Unpredictability in Software Defect-Fixing Effort Prediction,"The prediction of software defect-fixing effort is important for strategic resource allocation and software quality management. Machine learning techniques have become very popular in addressing this problem and many related prediction models have been proposed. However, almost every model today faces a challenging issue of demonstrating satisfactory prediction accuracy and meaningful prediction results. 
In this paper, we investigate what makes high-precision prediction of defect-fixing effort so hard from the perspective of the characteristics of defect dataset. We develop a method using a metric to quantitatively analyze the unpredictability of a defect dataset and carry out case studies on two defect datasets. The results show that data unpredictability is a key factor for unsatisfactory prediction accuracy and our approach can explain why high-precision prediction for some defect datasets is hard to achieve inherently. We also provide some suggestions on how to collect highly predictable defect data.",2010,0, 5123,A Methodology for Continuos Quality Assessment of Software Artefacts,"Although some methodologies for evaluating the quality of software artifacts exist, all of these are isolated proposals, which focus on specific artifacts and apply specific evaluation techniques. There is no generic and flexible methodology that allows quality evaluation of any kind of software artifact, regardless of type, much less a tool that supports this. To tackle that problem in this paper, we propose the CQA Environment, consisting of a methodology (CQA-Meth) and a tool that implements it (CQA-Tool). We began applying this environment in the evaluation of the quality of UML models (use cases, class and statechart diagrams). To do so, we have connected CQA-Tool to the different tools needed to assess the quality of models, which we also built ourselves. CQA-Tool, apart from implementing the methodology, provides the capacity for building a catalogue of evaluation techniques that integrates the evaluation techniques (e.g. metrics, checklists, modeling conventions, guidelines, etc.) which are available for each software artifact. CQA Environment is suitable for use by companies that offer software quality evaluation services, especially for clients who are software development organizations and who are outsourcing software construction. They will obtain an independent quality evaluation of the software products they acquire. Software development organizations that perform their own evaluation will be able to use it as well.",2010,0, 5124,Performance Prediction of WS-CDL Based Service Composition,"In this paper, we propose a translation-based approach for performance prediction of composite service built on WS-CDL. To translate a composite service into a state-transition model for quantitative analysis, we first give a set of translation rules to map WS-CDL elements into general-stochastic-petri-nets (GSPN). Based on the GSPN representation, we introduce the prediction algorithm to calculate the expected-process-normal-completion-time of WS-CDL processes. We also validate the accuracy of the approach in the experimental study by showing 95% confidence intervals obtained from experimental performance results cover corresponding theoretical prediction values.",2010,0, 5125,A Fault-Tolerant Strategy for Improving the Reliability of Service Composition,"Service composition is an important means for integrating the individual Web services for creating new value added systems that satisfy complex demands. Since, Web services exist in the heterogeneous environments on the Internet, study on how to guarantee the reliability of service composition in a distributed, dynamic and complex environment becomes more and more important. This paper proposes a service composition net(SCN) and fault-tolerant strategy to improve the reliability of service composition. 
The strategy consists of static strategy, dynamic strategy and exception handling mechanism, which can be used to dynamically adjust component service for achieving good reliability as well as good overall performance. SCN is adopted to model different components of service composition. The fault detection and fault recovery mechanisms are also considered. Based on the constructed model, theories of Petri nets help prove the consistency of processing states and the effectiveness of the strategy. A case study of Export Service illustrates the feasibility of proposed method.",2010,0, 5126,Active Monitoring for Control Systems under Anticipatory Semantics,"As the increment of software complexity, traditional software analysis, verification and testing techniques can not fully guarantee the faultlessness of deployed systems. Therefore, runtime verification has been developed to continuously monitor the running system. Typically, runtime verification can detect property violations but cannot predict them, and consequently cannot prevent the failures from occurring. To remedy this weakness, active monitoring is proposed in this paper. Its purpose is not repairing the faults after failures have occurred, but predicting the possible faults in advance and triggering the necessary steering actions to prevent the software from violating the property. Anticipatory semantics of linear temporal logic is adopted in monitor construction here, and the information of system model is used for successful steering and prevention. The prediction and prevention will form a closed-loop feedback based on control theory. The approach can be regarded as an effective complement of traditional testing and verification techniques.",2010,0, 5127,Design of BDI Agent for Adaptive Performance Testing of Web Services,"As services are dynamic discovered and bound in the open Internet environment, testing has to be exercised continuously and online to verify and validate the continuous changes and to ensure the quality of the integrated service-based system. During this process, testing strategies have to be adapted in accordance to the changes in the environment and target systems. Software agents are characterized by context awareness, autonomous decision making and social collaboration capabilities. The paper introduces the design of BDI (Believe-Decision-Intention) agents to facilitate adaptive performance testing of Web Services. The BDI model specifies the necessary test knowledge, test goal and action plan to carry out test and adaptive schedule. Performance testing is defined as a scheduling problem to select the workload and test cases in order to achieve the goal of performance abnormal detection. A two-level control architecture is built. At the TR (Test Runner) level, the BDI agents control the workload of concurrent requests. At the TC (Test Coordinator) level, the BDI agents control the complexity of test cases. Agents communicate and collaborate with each other to share knowledge and test plan. The paper introduces the design of the BDI model, the adaptation rules and the control architecture. Case study is exercised to illustrate the adaptive testing process based on the design of BDI agents.",2010,0, 5128,Leveraging Performance and Power Savings for Embedded Systems Using Multiple Target Deadlines,"Tasks running on embedded systems are often associated with deadlines. 
While it is important to complete tasks before their associated deadlines, performance and energy consumption also play important roles in many usages of embedded systems. To address these issues, we explore the use of Dynamic Voltage and Frequency Scaling (DVFS), a standard feature available on many modern processors for embedded systems. Previous studies often focus on frequency assignment for energy savings and meeting definite task deadlines. In this paper, we present a heuristic algorithm based on convex optimization techniques to compute energy-efficient processor frequencies for soft real-time tasks. Our novel approach provides performance improvements by allowing definitions of multiple target deadlines for each task. We simulate two versions of our algorithm in MATLAB and evaluate their performance and efficiency. The experimental results show that our strategy leverages performance and energy savings, and can be customized to suit practical applications.",2010,0, 5129,Information flow metrics and complexity measurement,"Complexity metrics takes an important role to predict fault density and failure rate in the software deployment. Information flow represents the flow of data in collective procedures in the processes of a concrete system. The present static analysis of source code and the ability of metrics are incompetent to predict the actual amount of information flow complexity in the modules. In this paper, we propose metrics IF-C, F-(I+O), (C-L) and (P-C) focuses on the improved information flow complexity estimation method, which is used to evaluate the static measure of the source code and facilitates the limitation of the existing work. Using the IF-C method, the framework of information flow metrics can be significantly combined with external flow of the active procedure, which depends on various levels of procedure call in the code.",2010,0, 5130,Notice of Retraction
Accurate and efficient reliability Markov model analysis of predictive hybrid m-out-of-n systems,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In recent years there has been much research on new techniques for improving the performance of fault-tolerant control systems. Prediction of correct system output is one of these techniques; it can calculate and predict the probable correct system output when the ordinary techniques are unable to make a decision. In this paper, a performance model of predictive m-out-of-n hybrid redundancy is introduced and analyzed. The results of equations and mathematical relations based on the Markov model demonstrate that this approach can improve system reliability in comparison with a traditional m-out-of-n system.",2010,0, 5131,Research on automatic detection for defect on bearing cylindrical surface,"At present, defects on micro bearing surfaces are still detected by manual methods. The manual method is laborious and time consuming. Moreover, it has low efficiency and a high miss-detection rate. For this reason, an on-line automatic detection system is developed using a linear CCD. The proposed system is composed of three subsystems: the detection environment setting subsystem, the automatic detection subsystem, and the data management subsystem. In order to make the above subsystems cooperate with each other, control software is developed on the LabVIEW 8.5 platform. Experimental results indicate that the system realizes the predefined functions and caters to the requirements of stability, real-time performance, and accuracy. Thus it can be applied in actual production.",2010,0, 5132,AM2301-based temperature and humidity tester of solar battery system,"A tester of solar energy environmental parameters based on the S3C2440A ARM microprocessor is introduced in this paper, and the hardware and software design and implementation methods are described in detail. The environmental parameters, which include temperature, humidity and illumination, can be measured and displayed in real time and used to adjust the azimuth of the cell panel. The data collected by the tester are transferred to the host computer through the RS-485 industrial bus for further analysis and storage. A new single fully calibrated temperature and humidity sensor, the AM2301, which includes a capacitive humidity sensing element and an NTC thermistor, is introduced in this article. The sensor is widely used in the automatic control and intelligent detection fields because of its remarkable quality, fast response and high performance-price ratio.",2010,0, 5133,An improved method to simplify software metric models constructed with incomplete data samples,"Software metric models are useful in predicting the target software metric(s) for any future software project based on the project's predictor metric(s). Obviously, the construction of such a model makes use of a data sample of such metrics from analogous past projects. However, incomplete data often appear in such data samples. Worse still, the necessity to include a particular continuous predictor metric or a particular category for a certain categorical predictor metric is most likely based on an experience-related intuition that the continuous predictor metric or the category matters to the target metric. However, in the presence of incomplete data, this intuition is traditionally not verifiable “retrospectively” after the model is constructed, leading to redundant continuous predictor metric(s) and/or excessive categorization for categorical predictor metrics.
As an improvement of the author's previous work to solve all these problems, this paper proposes a methodology incorporating the k-nearest neighbors (k-NN) multiple imputation method, kernel smoothing, Monte Carlo simulation, and stepwise regression. This paper documents this methodology and one experiment on it.",2010,0, 5134,The discovery of the fault location in NIGS,"A new method is discovered for calculating the fault distance on the overhead line of the Neutral Indirect Grounded System (NIGS) in power distribution networks, in which the single phase to ground fault point or distance is difficult to detect because the zero sequence current has a low value. It is found that the information about the fault distance is kept in the zero sequence voltage vector, which may be measured at the tail terminal of the line in question, by mining the data. Then an algorithm to calculate the fault location on the overhead lines is proposed by considering the zero sequence voltage vector at the tail terminal. The value of the zero sequence voltage is determined by the fault location, and the phase angle also contains the distance traveled by the load current to the fault point. The parameter analysis is conducted for the NIGS by considering the actual line and the parameters at the two terminals.",2010,0, 5135,Testing as a Service over Cloud,"Testing-as-a-service (TaaS) is a new model to provide testing capabilities to end users. Users save the cost of complicated maintenance and upgrade effort, and service providers can upgrade their services without impact on the end-users. Due to uneven volumes of concurrent requests, it is important to address the elasticity of the TaaS platform in a cloud environment. Scheduling and dispatching algorithms are developed to improve the utilization of computing resources. We develop a prototype of TaaS over cloud, evaluate the scalability of the platform by increasing the test task load, analyze the distribution of computing time on test task scheduling and test task processing over the cloud, and examine the performance of the proposed algorithms by comparing them with others.",2010,0, 5136,GOS: A Global Optimal Selection Approach for QoS-Aware Web Services Composition,"Services composition technology provides a promising way to create a new service in a service-oriented architecture (SOA). However, some challenges are hindering the application of services composition. One of the greatest challenges for composite service developers is how to select a set of services to instantiate a composite service with quality of service (QoS) assurance across different autonomous regions (e.g. organizations or businesses). To solve the QoS-aware Web service composition problem, this paper proposes a global optimal selection (GOS) approach based on a prediction mechanism for the QoS values of local services. The GOS includes two parts. First, a local service selection algorithm can be used to predict changes in service quality information. Second, the GOS aims at enhancing the run-time performance of global selection by reducing the aggregation operations of QoS. The simulation results show that the GOS has lower execution cost than existing approaches.",2010,0, 5137,Fault detection for high availability RAID system,"Designing storage systems to provide high availability in the face of failures requires the use of various data protection techniques, such as dual-controller RAID. The failure of a controller may cause data inconsistencies in the RAID storage system.
Heartbeat is used to detect whether the controllers are alive. So, the heartbeat cycle's impact on the high availability of a dual-controller hot-standby system has become a key issue in current research. To address the problem of the fixed heartbeat setting currently used in building high availability systems, an adaptive dual-controller heartbeat fault detection model, which can adjust the heartbeat cycle based on the frequency of data read-write requests, is designed to improve the high availability of a dual-controller RAID storage system. Additionally, this heartbeat mechanism can be used for other applications in distributed settings such as detecting node failures, performance monitoring, and query optimization. Based on this model, a high availability stochastic Petri net model of fault detection was established and used to evaluate the effect on availability. In addition, we define an AHA (Adaptive Heart Ability) parameter to scale the ability of the system heartbeat cycle to adapt to the changing environment. The results show that, relative to a fixed configuration, the design is valid and effective, and can enhance the high availability of a dual-controller RAID system.",2010,0, 5138,Two-dimensional bar code mobile commerce - Implementation and performance analysis,"The two-dimensional bar code is known in Japan as the QR Code (Quick Response Code). Holding a camera phone in front of this small black box, the phone software identifies it and quickly turns it into a link to a site; connecting to the site, you can get the information you need. Of course, it is not necessarily a website; the small black box may also represent e-mail, image data or text data. This study examines the implementation and performance of this system with different two-dimensional bar code mobile phone applications. From the experiments it can be seen that network quality of service is subject to various influencing factors; therefore, when sending video over the Internet, the status of the network must be considered in deciding which GOP pattern to adopt, and for streams in the network, the packet error rate will change the packet loss probability and affect the image quality.",2010,0, 5139,Using search-based metric selection and oversampling to predict fault prone modules,"Predictive models can be used in the detection of fault prone modules using source code metrics as inputs for the classifier. However, there exist numerous structural measures that capture different aspects of size, coupling and complexity. Identifying a metric subset that enhances the performance for the predictive objective would not only improve the model but also provide insights into the structural properties that lead to problematic modules. Another difficulty in building predictive models comes from unbalanced datasets, which are common in empirical software engineering as a majority of the modules are not likely to be faulty. Oversampling attempts to overcome this deficiency by generating new training instances from the faulty modules. We present the results of applying search-based metric selection and oversampling to three NASA datasets. For these datasets, oversampling results in the largest improvement.
Metric subset selection was able to reduce up to 52% of the metrics without decreasing the predictive performance gained with oversampling.",2010,0, 5140,A Comparative Study of Threshold-Based Feature Selection Techniques,"Given high-dimensional software measurement data, researchers and practitioners often use feature (metric) selection techniques to improve the performance of software quality classification models. This paper presents our newly proposed threshold-based feature selection techniques, comparing the performance of these techniques by building classification models using five commonly used classifiers. In order to evaluate the effectiveness of different feature selection techniques, the models are evaluated using eight different performance metrics separately since a given performance metric usually captures only one aspect of the classification performance. All experiments are conducted on three Eclipse data sets with different levels of class imbalance. The experiments demonstrate that the choice of a performance metric may significantly influence the results. In this study, we have found four distinct patterns when utilizing eight performance metrics to order 11 threshold-based feature selection techniques. Moreover, performances of the software quality models either improve or remain unchanged despite the removal of over 96% of the software metrics (attributes).",2010,0, 5141,Gait Symmetry Analysis Based on Affinity Propagation Clustering,"When we are conducting an investigation in gait symmetry analysis, we usually grouped the test objects by decade intervals. However, this artificial division method has a large defect that does not truly reflect the human relationship between age and physical condition. Therefore, we found a new grouping method while clustering on the values of the difference in gait symmetry. The reason for using the affinity propagation clustering algorithm was that it can do clustering on data while passing the original information not the random values to processing clustering. And meanwhile this algorithm has good performance and efficiency. It will help us to do gait analysis more effectively, and also more reasonable to explain the relationship of human gait and age, or gait and other characteristics.",2010,0, 5142,Towards Estimating Physical Properties of Embedded Systems using Software Quality Metrics,"The complexity of embedded devices poses new challenges to embedded software development in addition to the traditional physical requirements. Therefore, the evaluation of the quality of embedded software and its impact on these traditional properties becomes increasingly relevant. Concepts such as reuse, abstraction, cohesion, coupling, and other software attributes have been used as quality metrics in the software engineering domain. However, they have not been used in the embedded software domain. In embedded systems development, another set of tools is used to estimate physical properties such as power consumption, memory footprint, and performance. These tools usually require costly synthesis-and-simulation design cycles. In current complex embedded devices, one must rely on tools that can help design space exploration at the highest possible level, identifying a solution that represents the best design strategy in terms of software quality, while simultaneously meeting physical requirements. 
We present an analysis of the cross-correlation between software quality metrics, which can be extracted before the final system is synthesized, and physical metrics for embedded software. Using a neural network, we investigate the use of these cross-correlations to predict the impact that a given modification on the software solution will have on embedded software physical metrics. This estimation can be used to guide design decisions towards improving physical properties of embedded systems, while maintaining an adequate trade-off regarding software quality.",2010,0, 5143,Impacts of Inaccurate Information on Resource Allocation for Multi-Core Embedded Systems,"Multi-core technologies are widely used in embedded systems. Stochastic resource allocation can guarantee certain quality of the services (QoS). In heterogeneous embedded system resource allocation, execution time distributions of different tasks on cores are predicted before scheduling. The difference between actual execution time and estimated execution time may lead to allocations which are not robust. In this paper, we present an evaluation of impacts of inaccurate information on resource allocation. We propose a systematic measure of the robustness degradation and evaluate how inaccurate probability parameters affect the robustness of resource allocations. Furthermore, we compare the performance of three widely used greedy heuristics when using the inaccurate information.",2010,0, 5144,Service Level Agreements in a Rental-based System,"In this paper, we investigate how Service Level Agreeements (SLAs) can be incorporated as part of the system's scheduling and rental decisions to satisfy the different performance promises of high performance computing (HPC) applications. Such SLAs are contracts which specify a set of application-driven requirements such as the estimated total load, contract duration, total utility value and the estimated total number of generated jobs. We present several scheduling and rental based policies that make use of these SLA parameters and demonstrate the effectiveness of such policies to accurately predict and plan for resource levels in a rental-based system.",2010,0, 5145,Quality Assurance Testing for Magnetization Quality Assessment of BLDC Motors Used in Compressors,"Quality assurance (QA) testing of mass-produced electrical appliances is critical for the reputation of manufacturer since defective units will have a negative impact on safety, reliability, efficiency, and performance of the end product. It has been observed at a brushless dc (BLDC) compressor motor manufacturing facility that improper magnetization of the rotor permanent magnet is one of the leading causes of motor defects. A new technique for postmanufacturing assessment of the magnetization quality of concentrated winding BLDC compressor motors for QA is proposed in this paper. The new method evaluates the data acquired during the test runs performed after motor assembly to observe anomalies in the zero-crossing pattern of the back-EMF voltages for screening motor units with defective magnetization. An experimental study on healthy and defective 250-W BLDC compressor motor units shows that the proposed technique provides sensitive detection of magnetization defects that existing tests were not capable of finding. 
The proposed algorithm does not require additional hardware for implementation since it can be added to the existing test-run inverter of the QA system as a software algorithm.",2010,0,4629 5146,Exploring FPGA Routing Architecture Stochastically,"This paper proposes a systematic strategy to efficiently explore the design space of field-programmable gate array (FPGA) routing architectures. The key idea is to use stochastic methods to quickly locate near-optimal solutions in designing FPGA routing architectures without exhaustively enumerating all design points. The main objective of this paper is not as much about the specific numerical results obtained, as it is to show the applicability and effectiveness of the proposed optimization approach. To demonstrate the utility of the proposed stochastic approach, we developed the tool for optimizing routing architecture (TORCH) software based on the versatile place and route tool. Given FPGA architecture parameters and a set of benchmark designs, TORCH simultaneously optimizes the routing channel segmentation and switch box patterns using the performance metric of average interconnect power-delay product estimated from placed and routed benchmark designs. Special techniques - such as incremental routing, infrequent placement, multi-modal move selection, and parallelized metric evaluation - are developed to reduce the overall run time and improve the quality of results. Our experimental results have shown that the stochastic design strategy is quite effective in co-optimizing both routing channel segmentation and switch patterns. With the optimized routing architecture, relative to the performance of our chosen architecture baseline, TORCH can achieve average improvements of 24% and 15% in delay and power consumption for the 20 largest Microelectronics Center of North Carolina benchmark designs, and 27% and 21% for the eight benchmark designs synthesized with the Altera Quartus II University Interface Program tool. Additionally, we found that the average segment length in an FPGA routing channel should decrease with technology scaling. Finally, we demonstrate the versatility of TORCH by illustrating how TORCH can be used to optimize other aspects of the routing architecture in an FPGA.",2010,0, 5147,Using Content and Text Classification Methods to Characterize Team Performance,"Because of the critical role that communication plays in a team's ability to coordinate action, the measurement and analysis of online transcripts in order to predict team performance is becoming increasingly important in domains such as global software development. Current approaches rely on human experts to classify and compare groups according to some prescribed categories, resulting in a laborious and error-prone process. To address some of these issues, the authors compared and evaluated two methods for analyzing content generated by student groups engaged in a software development project. A content analysis and semi-automated text classification methods were applied to the communication data from a global software student project involving students from the US, Panama, and Turkey. Both methods were evaluated in terms of the ability to predict team performance. 
Application of the communication analysis' methods revealed that high performing teams develop consistent patterns of communicating which can be contrasted to lower performing teams.",2010,0, 5148,Effect of Replica Placement on the Reliability of Large-Scale Data Storage Systems,"Replication is a widely used method to protect large-scale data storage systems from data loss when storage nodes fail. It is well known that the placement of replicas of the different data blocks across the nodes affects the time to rebuild. Several systems described in the literature are designed based on the premise that minimizing the rebuild times maximizes the system reliability. Our results however indicate that the reliability is essentially unaffected by the replica placement scheme. We show that, for a replication factor of two, all possible placement schemes have mean times to data loss (MTTDLs) within a factor of two for practical values of the failure rate, storage capacity, and rebuild bandwidth of a storage node. The theoretical results are confirmed by means of event-driven simulation. For higher replication factors, an analytical derivation of MTTDL becomes intractable for a general placement scheme. We therefore use one of the alternate measures of reliability that have been proposed in the literature, namely, the probability of data loss during rebuild in the critical mode of the system. Whereas for a replication factor of two this measure can be directly translated into MTTDL, it is only speculative of the MTTDL behavior for higher replication factors. This measure of reliability is shown to lie within a factor of two for all possible placement schemes and any replication factor. We also show that for any replication factor, the clustered placement scheme has the lowest probability of data loss during rebuild in critical mode among all possible placement schemes, whereas the declustered placement scheme has the highest probability. Simulation results reveal however that these properties do not hold for the corresponding MTTDLs for a replication factor greater than two. This indicates that some alternate measures of reliability may not be appropriate for comparing the MTTDL of different placement schemes.",2010,0, 5149,A vector-based approach to digital image scaling,"Image and video capture devices are often designed with different capabilities and end-users in mind. As a result, very often images are captured at one resolution and need to be displayed at another resolution. Resizing thus continues to be an important research area in today's milieu. Well-known pixel-based methods (like bicubic interpolation) have traditionally been used for the purpose. However, these methods are often found lacking in quality from a perceptual standpoint as they tend to soften edges and blur details. Recently, vectorization based approaches have been explored because of their inherent property of artifact free scaling. However, these methods fail to faithfully reproduce textures and fine details in an image. In this work, we present a novel, layered approach of combining image vectorization with a pixel based interpolation technique to achieve high quality scaling of digital images. The proposed method decomposes an image into two layers: a `coarse' layer capturing the visually important structure of the image, and a `fine' layer containing the details. We vectorize only the coarse layer and render at the desired scale, while using classical interpolation on the fine layer. 
The final scaled image is composed by blending the independently scaled layers. We compare the performance of the proposed method with several state-of-the-art vectorization and scaling methods, and report comparably better performance.",2010,0, 5150,Intra coding and refresh based on video epitomic analysis,"In video coding, intra coding plays an important role in both coding efficiency and error resilience. In this paper, a new intra coding method based on video epitomic analysis is proposed. Image epitome suitable for coding is extracted through video epitomic analysis, and is used to generate the prediction for each block. The mapping between the original image and image epitome as well as image epitome should be coded and transmitted to the decoder side. Due to the independence of adjacent reconstructed blocks, the proposed intra coding method can also be applied to improve error resilience of intra refresh. Experimental results show that, the proposed intra coding method can significantly improve the performance of intra coding by 0.7dB in terms of PSNR. The simulation results under packet loss environment show that the proposed intra refresh can out-perform random intra refresh (RIR) by up to 1dB, and better subjective quality can also be obtained.",2010,0, 5151,Neural-network-based water quality monitoring for wastewater treatment processes,"Wastewater treatment processes are typical complex dynamic processes. They are characterized by severe random disturbance, strong nonlinearity, time-variant properties and uncertainty. Sensors which have higher reliability and adaptability are needed in such systems. Sensors currently available for wastewater treatment processes are limited both in sorts and reliability, and most of the sensors are expensive. Software sensors can give estimation to unmeasured state variables according to the measured information provided by online measuring instruments available in the system. Soft measurement technology based on artificial neural network has an obvious advantage in solving high nonlinearity and uncertainty. BP neural network was used to construct a soft measurement approach to monitor the effluent COD and BOD in wastewater treatment processes in this paper. Simulation results show that the soft measurement approach based on neural network can estimate the state variables accurately and can be used in real-time monitoring of biochemical wastewater treatment processes.",2010,0, 5152,Predicting mechanical properties of hot-rolling steel by using RBF network method based on complex network theory,"Recently, producing high-precision and high-quality steel products becomes the major aim of the large-scale iron and steel enterprises. Because of the internal multiplex components of products and complex changes in the production process, it is too difficult to achieve precise control in hot rolling production process. In this paper, radial basis function neural network is used to complete performance prediction. It has the advantage of fast training and high accuracy, and overcomes shortcomings of BP neural network used previously, such as local minimum. When determining the center of radial basis function we make use of complex network visualization which can clearly figure out the relationship between input vectors and receive the center and width according to the relationship of the nodes. 
Experiments show that the method that is combining community discovery algorithm and RBF enjoy high stability, small training time which means to be suitable to analysis large-scale data. More importantly, it can reach high accuracy.",2010,0, 5153,A Quality Framework to check the applicability of engineering and statistical assumptions for automated gauges,"In high-volume part manufacturing, interactions between program data and program flow can depart significantly from the initial statistical assumptions used during software development. This is a particular challenge for industrial gauging systems used in automotive part production where the applicability of statistical models affects system correctness. This paper uses a Quality Framework to track high-level engineering and statistical assumptions during development. Statistical Process Control (SPC) metrics define an “in-control” region where the statistical assumptions apply, and an outlier region where they do not apply. The gauge is monitored on-line to verify that production corresponds to the area of the operation where the gauge algorithms are known to work. If outliers are detected in the on-line manufacturing process, then parts can be quarantined, improved gauging algorithms selected, and/or process improvement activities can be initiated.",2010,0, 5154,Fuzzy clustering and relevance ranking of web search results with differentiating cluster label generation,"This paper introduces a prototype web search results clustering engine that enhances search results by performing fuzzy clustering on web documents returned by conventional search engines, as well as ranking the results and labeling the resulting clusters. This is done using a fuzzy transduction-based clustering algorithm (FTCA), which employs a transduction-based relevance model (TRM) to generate document relevance values. These relevance values are used to cluster similar documents, rank them, and facilitate a term frequency based label generator. The membership degrees of documents to fuzzy clusters also facilitates effective detection and removal of overly similar clusters. FTCA is compared against two other established web document clustering algorithms: Suffix Tree Clustering (STC) and Lingo, which are provided by the free open source Carrot2 Document Clustering Workbench. To measure cluster quality, an extended version of the classic precision measurement is used to take into account relevance and fuzzy clustering, along with recall and F1 score. Results from testing on five different datasets show a considerable clustering quality and performance advantage over STC and Lingo in most cases.",2010,0, 5155,Cooperative Co-evolution for large scale optimization through more frequent random grouping,"In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. 
We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of found solution, and hence wastes considerable amount of CPU time by extra evaluations of objective function. Finally we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.",2010,0, 5156,The jMetal framework for multi-objective optimization: Design and architecture,"jMetal is a Java-based framework for multi-objective optimization using metaheuristics. It is a flexible, extensible, and easy-to-use software package that has been used in a wide range of applications. In this paper, we describe the design issues underlying jMetal, focusing mainly on its internal architecture, with the aim of offering a comprehensive view of its main features to interested researchers. Among the covered topics, we detail the basic components facilitating the implementation of multi-objective metaheuristics (solution representations, operators, problems, density estimators, archives), the included quality indicators to assess the performance of the algorithms, and jMetal's support to carry out full experimental studies.",2010,0, 5157,Achieving high robustness and performance in performing QoS-aware route planning for IPTV networks,QoS-aware network planning becomes increasingly important for network operators and ISP alike as the number and heterogeneity of applications supported by networks continue to increase. We have proposed an architecture and methodology to support QoS-aware route planning for IPTV networks. The route planning problem was formulated as a residual bandwidth optimization problem and solved using GA-VNS hybrid algorithm. The QoS guarantee is provided by performing route allocation for a QoS-aware demand matrix which is generated using efficient and accurate estimation of QoS requirements using empirical effective bandwidth estimations. This paper revisits our formerly proposed approach to perform residual bandwidth optimization with QoS guarantee in IPTV networks and proposes alterations and parameter settings to best suit the problem in terms of robustness and performance. This paper proposes the use of two cost functions to meet the above requirements and the use of a variation of the original VNS algorithm besides recommending the use of specific operators and general parameter settings. The proposals and recommendations are substantiated by the results of performance evaluations and analysis. The results indicate dramatic improvement when the proposed approaches are taken on board.,2010,0, 5158,Enhancing perceptual quality of watermarked high-definition video through composite mask,"This paper proposes a composite mask based watermarking method, which improves perceptual quality of videos watermarked by traditional video watermarking scheme. The composite mask including noise visibility function (NVF) mask, adaptive dithering mask, and contour mask considers human visual system (HVS). The adaptive dithering mask solves the block artifact problem caused by expanding the watermark basic pattern for the robustness against various downscaling attacks. The contour mask makes up for the disadvantage of NVF mask, which cannot handle separately high textured regions and contour or edge regions. 
Extensive experiments prove that the watermarking method based on the proposed composite mask satisfies imperceptibility, robustness against downscaling as well as common video processing, and real-time performance.",2010,0, 5159,A design approach for numerical libraries in large scale distributed systems,"Nowadays, large scale distributed systems gather thousands of nodes with hierarchical memory models. They are heterogeneous, volatile and geographically distributed. The efficient exploitation of such systems requires the conception and adaptation of appropriate numerical methods, the definition of new programming paradigms, new metrics for performance prediction, etc. The modern hybrid numerical methods are well adapted to this kind of systems. This is particularly because of their multi-level parallelism and fault tolerance property. However the programming of these methods for these architectures requires concurrent reuse of sequential and parallel code. But the currently existing numerical libraries aren't able to exploit the multi-level parallelism offered by these methods. A few linear algebra numerical libraries make use of object oriented approach allowing modularity and extensibility. Nevertheless, those which offer modularity, sequential and parallel code reuse are almost non-existent. In this paper, we analyze the lacks in existing libraries and propose a design based on a component approach and the strict separation between computation operations, data management and communication control of an application. We present then an application of this design using YML scientific workflow environment (http://yml.prism.uvsq.fr/) jointly with the object oriented LAKe (Linear Algebra Kernel) library. Some numerical experiments on GRID5000 platform validate our approach and show its efficiency.",2010,0, 5160,A testbed for the evaluation of link quality estimators in wireless sensor networks,"Link quality estimation is a fundamental building block for the design of several different mechanisms and protocols in wireless sensor networks. The accuracy of link quality estimation greatly impacts the efficiency of these protocols. Therefore, a thorough experimental evaluation of link quality estimators (LQEs) is mandatory. This motivated us to build a benchmarking testbed-RadiaLE, that automates LQEs evaluation by analyzing their statistical properties. Our testbed includes (i.) hardware components that represent the WSN under test and (ii.) a software tool for setting up and controlling the experiments and also for analyzing the collected data, allowing for LQEs evaluation. To demonstrate the usefulness of RadiaLE, we carried out a comparative performance study of a set of well-known LQEs.",2010,0, 5161,Performance analysis of distributed software systems: A model-driven approach,"The design of complex software systems is a challenging task because it involves a wide range of quality attributes such as security, performance, reliability, to name a few. Dealing with each of these attributes requires specific set of skills, which quite often, involves making various trade-offs. This paper proposes a novel Model-Driven Software Performance Engineering (MDSPE) process that can be used for performance analysis requirements of distributed software systems. 
An example assessment is given to illustrate how our MDSPE process can comply with well-known performance models to assess the performance measures.",2010,0, 5162,Power quality analysis of a Synchronous Static Series Compensator (SSSC) connected to a high voltage transmission network,"This paper presents the power quality analysis of a Synchronous Static Series Compensator (SSSC) connected to a high-voltage transmission network. The analysis employs a simplified harmonic equivalent model of the network that is described in the paper. To facilitate the calculations, a software toolset was developed to measure the voltage harmonics content introduced by the SSSC at the Point of Common Coupling (PCC). These harmonics are then evaluated against the stringent power quality legislation set on the Spanish transmission network. Subsequently, the compliance of an SSSC based on two types of Voltage Sourced Converters (VSC): a 48-pulse and a PWM one, usual in this type of applications, is assessed. Finally, a parameter sensitivity analysis is presented, which looks at the influence of parameters such as the line length and short circuit impedances on the PCC voltage harmonics.",2010,0, 5163,A Clustering Algorithm Use SOM and K-Means in Intrusion Detection,"Improving detection definition is a pivotal problem for intrusion detection. Many intelligent algorithms were used to improve the detection rate and reduce the false rate. Traditional SOM cannot provide the precise clustering results to us, while traditional K-Means depends on the initial value serious and it is difficult to find the center of cluster easily. Therefore, in this paper we introduce a new algorithm, first, we use SOM gained roughly clusters and center of clusters, then, using K-Means refine the clustering in the SOM stage. At last of this paper we take KDD CUP-99 dataset to test the performance of the new algorithm. The new algorithm overcomes the defects of traditional algorithms effectively. Experimental results show that the new algorithm has a good stability of efficiency and clustering accuracy.",2010,0, 5164,Research on the Copy Detection Algorithm for Source Code Based on Program Organizational Structure Tree Matching,Code plagiarism is an ubiquitous phenomenon in the teaching of Programming Language. A large number of source code can be automatically detected and uses the similarity value to determine whether the copy is present. It can greatly improve the efficiency of teachers and promote teaching quality. A algorithm is provided that firstly match program organizational structure tree and then process the methods of program to calculate the similarity value. It not only applies to process-oriented programming languages but also applies to object-oriented programming language.,2010,0, 5165,A Study on Defect Density of Open Source Software,"Open source software (OSS) development is considered an effective approach to ensuring acceptable levels of software quality. One facet of quality improvement involves the detection of potential relationship between defect density and other open source software metrics. This paper presents an empirical study of the relationship between defect density and download number, software size and developer number as three popular repository metrics. This relationship is explored by examining forty-four randomly selected open source software projects retrieved from SourceForge.net. 
By applying simple and multiple linear regression analysis, the results reveal a statistically significant relationship between defect density and number of developers and software size jointly. However, despite theoretical expectations, no significant relationship was found between defect density and number of downloads in OSS projects.",2010,0, 5166,A Structured Framework for Software Metrics Based on Three Primary Metrics,"This paper presents a structured unifying framework for software metrics (numerical software measurements), based on the three ""primary metrics"" of function points (FP), person-months (PM), and lines of code (LOC). The framework is based on a layered model, with the three primary metrics constituting the lowest layer. An important property of the primary metrics, referred to as the ""convertibility property"" is that a primary metric can easily be converted to another primary metric. Time is also included in this layer as a fundamental (not necessarily software) primary metric. The second layer consists of general-purpose metrics such as productivity measures, which are computed from the primary metrics, and the third layer consists of special-purpose metrics such as reliability and quality measures. This third layer is inherently extensible. The framework readily lends itself for use in both instructional and practitioner environments.",2010,0, 5167,Application and Study on Test Data Analysis in Software Reliability Evaluation Model Selection,"In the software reliability evaluation, selection of model directly affects the reliability evaluation and accuracy of the forecast result. So testing data input to any models, impact of its quality and completeness on the reliability of measurement results is much larger than impact of suppose the quality and evaluation ability of the model. For example, such as actual data set, study on selection and processing of testing data and kinds of data analysis techniques in selection process of reliability model, is useful to help select more effective and rational evaluating models and improve accuracy of the forecast in software reliability evaluation.",2010,0, 5168,Cost Efficient Software Review in an E-Business Software Development Project,Many software projects face the trouble to satisfy both cost constraints and high quality requirement. Software review turns out to be an efficient approach to reduce the cost and early remove the defects. This paper classifies the software review methods and proposes a cost efficient framework for different maturity level of software review. Case study in an E-business software development project proved that the proposed framework was efficient in cost controlling and benefits the defect removal.,2010,0, 5169,Shape-DNA: Effective Character Restoration and Enhancement for Arabic Text Documents,"We present a novel learning-based image restoration and enhancement technique for improving character recognition performance of OCR products for degraded documents or documents/text captured with mobile devices such as camera-phones. The proposed technique is language independent and can simultaneously increase the effective resolution and restore broken characters with artifacts due to image capturing device such as a low quality/low resolution camera, or due to previous pre-processing such as extracting text region from the document image. 
The proposed technique develops a predictive relationship between high-resolution training images and their low-resolution/degraded counterparts, and exploits this relationship in a probabilistic scheme to generate a high resolution image from a low quality, low-resolution text image. We present a fast and scalable implementation of the proposed character restoration algorithm to improve the text recognition for document/text images captured by mobile phones. Experimental results demonstrate that the system effectively increases OCR performance for documents captured by mobile imaging devices, from levels of 50% to levels of over 80% for non-latin document/scene text images at 120dpi.",2010,0, 5170,Unifying quality metrics for reservoir networks,"Several metrics for the quality of reservoirs have been proposed and linked to reservoir performance in Echo State Networks and Liquid State Machines. A method to visualize the quality of a reservoir, called the separation ratio graph, is developed from these existing metrics leading to a generalized approach to measuring reservoir quality. Separation ratio provides a method for estimating the components of a reservoir's separation and visualizing the transition from stable to chaotic behavior. This new approach does not require a prior classification of input samples and can be applied to reservoirs trained with unsupervised learning. It can also be used to analyze systems made up of multiple reservoirs to determine performance between any two points in the system.",2010,0, 5171,A method to build a representation using a classifier and its use in a K Nearest Neighbors-based deployment,"The K Nearest Neighbors (KNN) is strongly dependent on the quality of the distance metric used. For supervised classification problems, the aim of metric learning is to learn a distance metric for the input data space from a given collection of pair of similar/dissimilar points. A crucial point is the distance metric used to measure the closeness of instances. In the industrial context of this paper the key point is that a very interesting source of knowledge is available: a classifier to be deployed. The knowledge incorporated in this classifier is used to guide the choice (or the construction) of a distance adapted to the situation. Then a KNN-based deployment is elaborated to speed up the deployment of the classifier compared to a direct deployment.",2010,0, 5172,Software defect prediction using static code metrics underestimates defect-proneness,"Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more “confident” in the predictions they made which were correct.",2010,0, 5173,Performance Evaluation Tools for Zone Segmentation and Classification (PETS),"This paper describes a set of Performance Evaluation Tools (PETS) for document image zone segmentation and classification. 
The tools allow researchers and developers to evaluate, optimize and compare their algorithms by providing a variety of quantitative performance metrics. The evaluation of segmentation quality is based on the pixel-based overlaps between two sets of zones proposed by Randriamasy and Vincent. PETS extends the approach by providing a set of metrics for overlap analysis, RLE and polygonal representation of zones and introduces type-matching to evaluate zone classification. The software is available for research use.",2010,0, 5174,Quality Models for Free/Libre Open Source Software Towards the “Silver Bullet”?,"Selecting the right software is of crucial importance for businesses. Free/Libre Open Source Software (FLOSS) quality models can ease this decision-making. This paper introduces a distinction between first and second generation quality models. The former are based on relatively few metrics, require deep insights into the assessed software, relying strongly on subjective human perception and manual labour. Second generation quality models strive to replace the human factor by relying on tools and a multitude of metrics. The key question this paper addresses is whether the emerging FLOSS quality models provide the “silver bullet” overcoming the shortcomings of first generation models. In order to answer this question, OpenBRR, a first generation quality model, and QualOSS, a second generation quality model, are used for a comparative assessment of Asterisk, a FLOSS implementation of a telephone private branch exchange. Results indicate significant progress, but apparently the “silver bullet” has not yet been found.",2010,0, 5175,Managing Quality Requirements: A Systematic Review,"It is commonly acknowledged that the management of quality requirements is an important and difficult part of the requirements engineering process, which plays a critical role in software product development. In order to identify current research about quality requirements, a systematic literature review was performed. This paper identifies available empirical studies of quality requirements. A database and manual search identified 1,560 studies, of which 18 were found to be empirical research studies of high quality, and relevant to the research questions. The review investigates what is currently known about the benefits and limitations of methods of quality requirements. In addition, the state of research is presented for five identified areas: elicitation, dependencies, quality requirements metrics, cost estimations, and prioritization.",2010,0, 5176,"An Analysis of the ""Inconclusive' Change Report Category in OSS Assisted by a Program Slicing Metric","In this paper, we investigate the Barcode open-source system (OSS) using one of Weiser's original slice-based metrics (Tightness) as a basis. In previous work, low numerical values of this slice-based metric were found to indicate fault-free (as opposed to fault-prone) functions. In the same work, we deliberately excluded from our analysis a category comprising 221 of the 775 observations representing `inconclusive' log reports extracted from the OSS change logs. These represented OSS change log descriptions where it was not entirely clear whether a fault had occurred or not in a function and, for that reason, could not reasonably be incorporated into our analysis. 
In this paper we present a methodology through which we can draw conclusions about that category of report.",2010,0, 5177,AI-Based Models for Software Effort Estimation,"Decision making under uncertainty is a critical problem in the field of software engineering. Predicting the software quality or the cost/ effort requires high level expertise. AI based predictor models, on the other hand, are useful decision making tools that learn from past projects' data. In this study, we have built an effort estimation model for a multinational bank to predict the effort prior to projects' development lifecycle. We have collected process, product and resource metrics from past projects together with the effort values distributed among software life cycle phases, i.e. analysis & test, design & development. We have used Clustering approach to form consistent project groups and Support Vector Regression (SVR) to predict the effort. Our results validate the benefits of using AI methods in real life problems. We attain Pred(25) values as high as 78% in predicting future projects.",2010,0, 5178,A survey on Software Reusability,"Software Reusability is primary attribute of software quality. In the literature, there are metrics for identifying the quality of reusable components but there is very less work on the framework that makes use of these metrics to find reusability of software components. The quality of the software if identified in the design phase or even in the coding phase can help us to reduce the rework by improving quality of reuse of the component and hence improve the productivity due to probabilistic increase in the reuse level. In this paper, different models or measures that are discussed in literature for software reusability prediction or evaluation of software components are summarized.",2010,0, 5179,Checkpointing vs. Migration for Post-Petascale Supercomputers,"An alternative to classical fault-tolerant approaches for large-scale clusters is failure avoidance, by which the occurrence of a fault is predicted and a preventive measure is taken. We develop analytical performance models for two types of preventive measures: preventive checkpointing and preventive migration. We also develop an analytical model of the performance of a standard periodic checkpoint fault-tolerant approach. We instantiate these models for platform scenarios representative of current and future technology trends. We find that preventive migration is the better approach in the short term by orders of magnitude. However, in the longer term, both approaches have comparable merit with a marginal advantage for preventive checkpointing. We also find that standard non-prediction-based fault tolerance achieves poor scaling when compared to prediction-based failure avoidance, thereby demonstrating the importance of failure prediction capabilities. Finally, our results show that achieving good utilization in truly large-scale machines (e.g., 220 nodes) for parallel workloads will require more than the failure avoidance techniques evaluated in this work.",2010,0, 5180,Use of accelerometers for material inner defect detection,"This paper deals with a possibility of non-destructive diagnostics of solid objects by software analysis of vibration spectrum by accelerometers. By a use of MATLAB platform, a processing and information evaluation from accelerometer is possible. Accelerometer is placed on the measured object. 
The analog signal needs to be digitized by a special I/O device to be processed offline with FFT (Fast Fourier Transformation). The power spectrum is then examined by developed evaluating procedures.",2010,0, 5181,Application of statistical learning theory to predict corrosion rate of injecting water pipeline,"Support Vector Machines (SVM) represents a new and very promising approach to pattern recognition based on small dataset. The approach is systematic and properly motivated by Statistical Learning Theory (SLT). Training involves separating the classes with a surface that maximizes the margin between them. An interesting property of this approach is that it is an approximate implementation of Structural Risk Minimization (SRM) induction principle, therefore, SVM is more generalized performance and accurate as compared to artificial neural network which embodies the Empirical Risk Minimization (ERM) principle. In this paper, according to corrosion rate complicated reflection relation with influence factors, we studied the theory and method of Support Vector Machines based on the statistical learning theory and proposed a pattern recognition method based Support Vector Machine to predict corrosion rate of injecting water pipeline. The outline of the method is as follows: First, we researched the injecting water quality corrosion influence factors in given experimental zones with Gray correlation method; then we used the LibSVM software based Support Vector Machine to study the relationship of those injecting water quality corrosion influence factors, and set up the mode to predict corrosion rate of injecting water pipeline. Application and analysis of the experimental results in Shengli oilfield proved that SVM could achieve greater accuracy than the BP neural network does, which also proved that application of SVM to predict corrosion rate of injecting water pipeline, even to the other theme in petroleum engineering, is reliable, adaptable, precise and easy to operate.",2010,0, 5182,Enforcing SLAs in Scientific Clouds,"Software as a Service (SaaS) providers enable the on-demand use of software, which is an intriguing concept for business and scientific applications. Typically, service level agreements (SLAs) are specified between the provider and the user, defining the required quality of service (QoS). Today SLA aware solutions only exist for business applications. We present a general SaaS architecture for scientific software that offers an easy-to-use web interface. Scientists define their problem description, the QoS requirements and can access the results through this portal. Our algorithms autonomously test the feasibility of the SLA and, if accepted, guarantee its fulfillment. This approach is independent of the underlying cloud infrastructure and successfully deals with performance fluctuations of cloud instances. Experiments are done with a scientific application in private and public clouds and we also present the implementation of a high-performance computing (HPC) cloud dedicated for scientific applications.",2010,0, 5183,AWPS - Simulation Based Automated Web Performance Analysis and Prediction,"The growing demand for quality and performance has become a discriminating factor in the field of web applications. Advances in software technology and project management have increased the amount of web applications which are developed under these aspects. 
At the same time the research in web performance simulation and prediction is focusing on complex modeling methods and techniques rather than providing easy to use tools to test and ensure the performance requirements of today's and tomorrow's web applications. As a tool for these tasks, the automatic web performance simulation and prediction system (AWPS) will be described. The AWPS is capable of (a) automatically creating an online web performance simulation and (b) conducting trend analysis of the system under test.",2010,0, 5184,Testing the Verticality of Slow-Axis of the Dual Optical Fiber Based on Image Processing,"The polarization dual fiber collimator is one of the important passive devices, which could be used to transform divergent light from the fiber into parallel beam or focus parallel beam into the fiber so as to improve the coupling efficiency of fiber device. However, the quality of double-pigtail fiber used for assembling collimator directly impacts on the performance of fiber collimator. Therefore, it is necessary to detect the quality of double-pigtail fiber before collimators has been packaged to ensure the quality of fiber collimator. The paper has pointed out that the verticality of two slow-axis of the double-pigtail fiber is the major factor to affect the quality of fiber collimator. With ""Panda""-type double-pigtail fiber as the research object, a novel method to detect the verticality of slow-axis is proposed. First, with the red light of LED as background light source, the clear images of the cross-section end surface of double-pigtail fiber has been obtained by micro-imaging systems; Then after a series of image pre-processing operations, the coordinate information of ""cat's eye"" light heart can been extracted with the image centroid algorithm; Finally, the angle between the two slow axes of the double-pigtail fiber can be calculated quickly and accurately according to pre-established mathematical model. The detection system mainly consists of CCD, microscopic imaging device, image board and image processing software developed with VC++. The experimental results prove that the system is both practical and effective and can satisfy the collimator assembly precision demands.",2010,0, 5185,Research on Subjective Trust Routing Algorithm for Mobile Ad Hoc Networks,"The routing behavior of Mobile Ad Hoc Network is a kind of cooperation model in multi mobile nodes. The traditional routing methods cannot solve the problem of malicious behavior and build the trusted transfers route between nodes. Based on the character of trust routing, the subjective confidence for the routing behavior has transform into trust evaluation with the probability model to solve the trust measurement and forecast. For compare the trust routing policy, some trust routing algorithms have proposed to depict the trust characteristic and complete different trust routing. The results of simulation show the trust routing algorithms of mobile ad hoc network is effective, which can guarantee the routing performance and quality of transmission.",2010,0, 5186,Design and Implementation of the Earthquake Precursor Network Running Monitoring Software Based on C/S Structure,"First, we introduce the design principles of the function of Tianjin earthquake precursor network running monitoring software; second, we have developed the software by using the precursor instrument communication technology, the distributed database read-write technology and data quality monitoring methods. 
The software regularly and automatically monitors network equipment status, data report situation, and abnormal data. It achieves the automatic detection and alarm function for the precursor instruments and observation data. The application of the software improves data quality and work efficiency.",2010,0, 5187,Effective Static Analysis to Find Concurrency Bugs in Java,"Multithreading and concurrency are core features of the Java language. However, writing a correct concurrent program is notoriously difficult and error prone. Therefore, developing effective techniques to find concurrency bugs is very important. Existing static analysis techniques for finding concurrency bugs either sacrifice precision for performance, leading to many false positives, or require sophisticated analysis that incur significant overhead. In this paper, we present a precise and efficient static concurrency bugs detector building upon the Eclipse JDT and the open source WALA toolkit (which provides advanced static analysis capabilities). Our detector uses different implementation strategies to consider different types of concurrency bugs. We either utilize JDT to syntactically examine source code, or leverage WALA to perform interprocedural data flow analysis. We describe a variety of novel heuristics and enhancements to existing analysis techniques which make our detector more practical, in terms of accuracy and performance. We also present an effective approach to create inter-procedural data flow analysis using WALA for complex analysis. Finally we justify our claims by presenting the results of applying our detector to a range of real-world applications and comparing our detector with other tools.",2010,0, 5188,Interval-Based Uncertainty Handling in Model-Based Prediction of System Quality,"Our earlier research indicates feasibility of applying the PREDIQT method for model-based prediction of impacts of architecture design changes on system quality. The PREDIQT method develops and makes use of the so called prediction models, a central part of which are the ""Dependency Views"" (DVs) - weighted trees representing the relationships between system design and the quality notions. The values assigned to the DV parameters originate from domain expert judgments and measurements on the system. However fine grained, the DVs contain a certain degree of uncertainty due to lack and inaccuracy of empirical input. This paper proposes an approach to the representation, propagation and analysis of uncertainties in DVs. Such an approach is essential to facilitate model fitting, identify the kinds of architecture design changes which can be handled by the prediction models, and indicate the value of added information. Based on a set of criteria, we argue analytically and empirically, that our approach is comprehensible, sound, practically useful and better than any other approach we are aware of.",2010,0, 5189,Arranging software test cases through an optimization method,"During the software testing process, the customers would be invited to review or inspect an ongoing software product. This phase is called the “in-plant” test, often known as an “alpha” test. Typically, this test phase lasts for a very short period of time in which the software test engineers or software quality engineers rush to execute a list of software test cases in the test suite with customers. Because of the time constraint, the test cases have to be arranged in terms of test case severities, estimated test time, and customers' demands. 
As important as the test case arrangement is, this process is mostly performed manually by the project managers and software test engineers together. As the software systems are getting more sophisticated and complex, a greater volume of test cases have to be generated, and the manual arrangement approach may not be the most efficient way to handle this. In this paper, we propose a framework for automating the process of test case arrangement and management through an optimization method. We believe that this framework will help software test engineers facing with the challenges of prioritizing test cases.",2010,0, 5190,The ATLAS Calorimeter Trigger Commissioning Using Cosmic Ray Data,"While waiting for the first LHC collisions, the ATLAS detector is undergoing integration exercises performing cosmic ray data taking runs. The ATLAS calorimeter trigger software uses a common framework capable of retrieving, decoding and providing data to different particle identification algorithms. This work reports on the results obtained during the first data-taking period of 2009 with the calorimeter trigger. Experts from detector areas such as calorimetry, data flow management, data quality monitoring and the trigger worked together to reach stable data taking conditions. Issues like noise readout channels were addressed and the global trigger robustness was evaluated.",2010,0,4852 5191,A combined acoustic and optical instrument for fisheries studies,The AOS system was developed through the merging of acoustic and optical technologies to further the quality of fisheries stock assessment. The system has been deployed from several ships (40 m to 75 m in length) over the past three years providing improved accuracy of acoustic target strength measures for commercial fish species and delivering biomass estimates of fish stocks. It has proven to be robust withstanding the harsh environment of pressure and temperature to depths of 1000 meters. The platform must also cope with exposure to significant impacts and vibration during deployment and recovery as the trawl net moves up and down the stern ramp of commercial trawlers. The system continues to be incrementally improved to gain better performance and to streamline post survey data analysis. These improvements come as a direct result of new measurement challenges identified through working with data sets collected by the instrument.,2010,0, 5192,Benchmarking IP blacklists for financial botnet detection,"Every day, hundreds or even thousands of computers are infected with financial malware (i.e. Zeus) that forces them to become zombies or drones, capable of joining massive financial botnets that can be hired by well-organized cyber-criminals in order to steal online banking customers' credentials. Despite the fact that detection and mitigation mechanisms for spam and DDoS-related botnets have been widely researched and developed, it is true that the passive nature (i.e. low network traffic, fewer connections) of financial botnets greatly hinder their countermeasures. Therefore, cyber-criminals are still obtaining high economical profits at relatively low risk with financial botnets. In this paper we propose the use of publicly available IP blacklists to detect both drones and Command & Control nodes that are part of financial botnets. To prove this hypothesis we have developed a formal framework capable of evaluating the quality of a blacklist by comparing it versus a baseline and taking into account different metrics. 
The contributed framework has been tested with approximately 500 million IP addresses, retrieved during a one-month period from seven different well-known blacklist providers. Our experimental results showed that these IP blacklists are able to detect both drones and C&C related with the Zeus botnet and most important, that it is possible to assign different quality scores to each blacklist based on our metrics. Finally, we introduce the basics of a high-performance IP reputation system that uses the previously obtained blacklists' quality scores, in order to reply almost in real-time whether a certain IP is a member of a financial botnet or not. Our belief is that such a system can be easily integrated into e-banking anti-fraud systems.",2010,0, 5193,Perceptual Rate-Distortion Optimization Using Structural Similarity Index as Quality Metric,"The rate-distortion optimization (RDO) framework for video coding achieves a tradeoff between bit-rate and quality. However, objective distortion metrics such as mean squared error traditionally used in this framework are poorly correlated with perceptual quality. We address this issue by proposing an approach that incorporates the structural similarity index as a quality metric into the framework. In particular, we develop a predictive Lagrange multiplier estimation method to resolve the chicken and egg dilemma of perceptual-based RDO and apply it to H.264 intra and inter mode decision. Given a perceptual quality level, the resulting video encoder achieves on the average 9% bit-rate reduction for intra-frame coding and 11% for inter-frame coding over the JM reference software. Subjective test further confirms that, at the same bit-rate, the proposed perceptual RDO indeed preserves image details and prevents block artifact better than traditional RDO.",2010,0, 5194,Physical layer monitoring in 8-branched PON-based i-FTTH,This paper reports a novel physical layer monitoring technique called Centralized Failure Detection System (CFDS) based on the use of an optical time domain reflectometry (OTDR) and integrated with inexpensive additional optical devices at the middle of the network system for monitoring the fiber plant of a wide range of passive optical network (PON) based intelligent fiber-to-the-home (i-FTTH) in the background during network operation and determining the faulty branch as well as the fault parameters. Access Control System (ACS) has been developed to handle the centralized line detection for diverting the OTDR pulse signal from central office (CO) to bypass the filter and connect to each corresponding fiber link in drop section. The design and implementation of this integrated hardware/software system enable each lines' status can be observed and monitored at static point where the OTDR injecting the signal. All OTDR measurements are transmitted into remote workstation for centralized monitoring and advanced analyzing.,2010,0, 5195,Click Fraud Prevention via multimodal evidence fusion by Dempster-Shafer theory,"We address the problem of combining information from diversified sources in a coherent fashion. A generalized evidence processing theory and an architecture for data fusion that accommodates diversified sources of information are presented. Different levels at which data fusion may take place such as the level of dynamics, the level of attributes, and the level of evidence are discussed. 
A multi-level fusion architecture based Collaborative Click Fraud Detection and Prevention (CCFDP) system for real time click fraud detection and prevention is proposed and its performance is compared with a traditional rule based click fraud detection system. Both systems are tested with real world data from an actual ad campaign. Results show that use of multi-level data fusion improves the quality of click fraud analysis.",2010,0, 5196,Network Coding of Rateless Video in Streaming Overlays,"We present a system for collaborative video streaming in wired overlay networks. We propose a scheme that builds on both rateless codes and network coding in order to improve the system throughput and the video quality at clients. Our hybrid coding algorithm permits to efficiently exploit the available source and path diversity without the need for expensive routing nor scheduling algorithms. We consider specifically an architecture where multiple streaming servers simultaneously deliver video information to a set of clients. The servers apply Raptor coding on the video packets for error resiliency, and the overlay nodes selectively combine the Raptor coded video packets in order to increase the packet diversity in the system. We analyze the performance of selective network coding and describe its application to practical video streaming systems. We further compute an effective source and channel rate allocation in our collaborative streaming system. We estimate the expected symbol diversity at clients with respect to the coding choices. Then we cast a minmax quality optimization problem that is solved by a low-cost bisection based method. The experimental evaluation demonstrates that our system typically outperforms Raptor video streaming systems that do not use network coding as well as systems that perform decoding and encoding in the network nodes. Finally, our solution has a low complexity and only requires small buffers in the network coding nodes, which are certainly two important advantages toward deployment in practical streaming systems.",2010,0, 5197,Cycle accurate simulator generator for NoGap,"Application Specific Instruction-set Processors (ASIPs) are needed to handle the future demand of flexible yet high performance computation in mobile devices. However designing an ASIP is complicated by the fact that not only the processor but, also tools such as assemblers, simulators, and compilers have to be designed. Novel Generator of Accelerators And Processors (NoGap), is a design automation tool for ASIP design that imposes very few limitations on the designer. Yet NoGap supports the designer by automating much of the tedious and error prone tasks associated with ASIP design. This paper will present the techniques used to generate a stand alone software simulator for a processor designed with NoGap. The focus will be on the core algorithms used. Two main problems had to be solved, simulation of a data path graph and simulation of leaf functional units. The concept of sequentialization is introduced and the algorithms used to perform both the leaf unit sequentialization and data path sequentialization is presented. A key component of the sequentialization process is the Micro Architecture Generation Essentials (Mage) dependency graph. The mage dependency graph and the algorithm used for its generation are also presented in this paper. 
A NoGap simulator was generated for a simple processor and the results were verified.",2010,0, 5198,Multiview Video Coding extension of the H.264/AVC standard,An overview of the new Multiview Video Coding (MVC) extension of the H.264/AVC standard is given in this paper. The benefits of multiview video coding are examined by encoding two data sets captured with multiple cameras. These sequences were encoded using MVC reference software and the results are compared with references obtained by encoding multiple views independently with H.264/AVC reference software. Experimental quality comparison of coding efficiency is given. The peak signal-to-noise ration (PSNR) and Structural Similarity (SSIM) Index are used as objective quality metrics.,2010,0, 5199,Fault Slip Through measurement in software development process,"The pressure to improve software development process is not new, but in today's competitive environment there is even greater emphasis on delivering a better service at lower cost. In market-driven development where time-to-market is of crucial importance, software development companies seek improvements that can decrease the lead-time and improve the delivery precision. One way to achieve this is by analyzing the test process since rework commonly accounts for more than half of the development time. A large reason for high rework costs is fault slippage from earlier phases where they are cheaper to find and remove. As an input to improvements, this article introduces a measure that can quantify this relationship. That is, a measure called faults-slip-through, which determines the faults that would have been more cost-effective to find in an earlier phase.",2010,0, 5200,Fault tolerant amplifier system using evolvable hardware,"This paper proposes the use evolvable hardware (EHW) for providing fault tolerance to an amplifier system in a signal-conditioning environment. The system has to maintain a given gain despite the presence of faults, without direct human intervention. The hardware setup includes a reconfigurable system on chip device and an external computer where a genetic algorithm is running. For detecting a gain fault, we propose a software-based built-in self-test strategy that establishes the actual values of gain achievable by the system. The performance evaluation of the fault tolerance strategy proposed is made by adopting two different types of fault-models. The fault simulation results show that the technique is robust and that the genetic algorithm finds the target gain with low error.",2010,0, 5201,Solution for the power quality improvement in a transportation system,The paper proposes a solution for improving the power quality in a transportation system. The experimental determinations revealed a high level of harmonics and a high consumption of reactive power. The simulation using MATLAB/Simulink software was conceived and validated. A solution using passive filters is proposed and analyzed.,2010,0, 5202,A Measurement Quality Factor for Swath Bathymetry Sounders,"The quality estimation associated with individual soundings measured and computed by swath bathymetry sonars is a paramount issue which is most often imperfectly addressed today by sonar manufacturers. In this paper, a unified definition is proposed for a quality factor usable for all swath bathymetry sonars; the depth-relative error is directly estimated from the signal characteristics used in the sounding computation. 
The basic algorithms are presented for both phase-difference (oblique incidence) and amplitude (normal incidence) detection, and can be readily implemented in any swath bathymetry system using these detection principles. This approach gives a direct access to the objective bathymetric performance of individual soundings, and hence avoids the explicit estimation of intermediate quantities such as the signal-to-noise ratio (SNR). It makes possible intercomparisons between different systems (multibeam sounders versus interferometric sonars) and processing principles (phase difference versus amplitude); it is expected to provide a useful input to bathymetry postprocessing software using quality estimation of individual soundings.",2010,0, 5203,Live memory migration with matrix bitmap algorithm,"Live migration of OS instances, a powerful instrument to facilitate system maintenance, load balancing and fault tolerance, has become a central issue of virtualization, while memory migration has always been the bottleneck. However, existing algorithms suffer from two problems. First, when facing frequently modified pages, their performance degrades profoundly. Second, though they perform better with certain algorithms, additional resources are required, which might be extraordinarily scarce under heavy load conditions. These imperfections will lead to striking performance degradation of virtual machine services. Therefore, based on the “Program Locality Principle�? this paper presents the “matrix bitmap algorithm�?that collects the dirty page information multiple times before deciding whether to transfer the page or not. It provides a more reasonable approach for making this determination. Experiments demonstrate that under heavy load conditions, if sufficient dirty page information is collected, the decrease in total migration time can reach 50% without increasing the system burden of the original domain.",2010,0, 5204,Low-cost embedded system for the IM fault detection using neural networks,"This paper presents a realization of a low-cost, portable measurement system for induction motor fault diagnosis. The system is composed of a four-channel data acquisition recorder and laptop computer with specialized software. The diagnostic software takes advantage of the motor current signature analysis method (MCSA) for detection of rotor faults, voltage unbalance and mechanical misalignment. For diagnostics purposes the neural network technique is applied.",2010,0, 5205,Assessment and optimization of induction electric motors aiming energy efficiency in industrial applications,"A methodology for assessment and optimization of induction electric motors, devised to achieve energy conservation, is described. This methodology works by means of replacing high-efficiency motors for older ones. The electric motor is an end-use equipment which is strongly present in industry, being subject to replacements that may have satisfactory results when the diagnosis is carried out in a consistent way, in accordance with coherent validation procedures. The adopted motor replacement methodology included an initial study, by means of measurements of electrical parameters. Afterwards, by using a particular simulation software, operating conditions of the electric motor and the expected savings accruing from the use of high-efficiency motors were estimated. When necessary, the motor resizing analysis provides the best rated power for the drive.
The motor substitutions carried out in this motor drives efficiency improvement program resulted in yearly savings amounting to 720 MWh.",2010,0, 5206,"Modeling, analysis and detection of rotor field winding faults in synchronous generators","The paper presents an approach to modeling of shorted turns in rotor field winding of synchronous generator using finite element method. It enables detailed analysis of magnetic field at several operating conditions under healthy and faulty states which are difficult or even impossible to carry out by available measurement methods in industrial environment. Modeling of field winding faults is performed for both typical generator designs - turbo and hydro, and analysis reveals some differences, which are significant for practical use in fault detection procedures. It is confirmed that an extensive analysis should be performed to assure accurate healthy/faulty state predictions, since the level of diagnostic signal is considerably influenced by a combination of many machine and operating parameters. The scheme of developed on-line diagnostic system with its hardware and software concept is also presented and discussed.",2010,0, 5207,Performance enhancement of software process models,"Software Process Improvement (SPI) is a systematic approach to the continuous improvement of a software-producing organization's ability to produce and deliver quality software within time and budget constraints [1]. In the course of this project, the researcher developed, verified and validated such a model. This work presents a fit-for-use software process improvement model to improve the process overall. This work concentrates on improvement of the process as well as the organization. The model is based on experience from software company projects. This work covers assessment, software process improvement, and factors that influence software process improvement. The ultimate aim was to develop a model which would be useful in practice for software development companies. This work describes the principles of software process improvement and improvement models. The proposed model is a generic model which is beneficial for small firms as well as large firms.",2010,0, 5208,Software fault detection for reliability using recurrent neural network modeling,"Software fault detection is an important factor for quantitatively characterizing software quality. One of the proposed methods for software fault detection is neural networks. Fault detection is actually a pattern recognition task. Faulty and fault free data are different patterns which must be recognized. In this paper we propose a new framework for modeling software testing and fault detection in applications. Recurrent neural network architecture is used to improve performance of the system. Based on the experiments performed on the software reliability data obtained from middle-sized application software, it is observed that the non-linear RNN can be effective and efficient for software fault detection.",2010,0, 5209,An analytical model for performance evaluation of software architectural styles,"Software architecture is an abstract model that gives syntactic and semantic information about the components of a software system and the relationship among them. The success of the software depends on whether the system can satisfy the quality attributes. One of the most critical aspects of the quality attributes of a software system is its performance.
Performance analysis can be useful for assessing whether a proposed architecture can meet the desired performance specifications and whether it can help in making key architectural decisions. An architecture style is a set of principles which an architect uses in designing software architecture. Since software architectural styles have frequently been used by architects, these styles have a specific effect on quality attributes. If this effect is measurable for each existing style, it will enable the architect to evaluate and make architectural decisions more easily and precisely. In this paper an effort has been made to introduce a model for investigating these attributes in architectural styles. So, our approach initially models the system as a Discrete Time Markov Chain or DTMC, and then extracts the parameters to predict the response time of the system.",2010,0, 5210,Exploratory failure analysis of open source software,"Reliability growth modeling in software systems plays an important role in measuring and controlling software quality during software development. One main approach to reliability growth modeling is based on the statistical correlation of observed failure intensities versus estimated ones by the use of statistical models. Although there are a number of statistical models in the literature, this research concentrates on the following seven models: Weibull, Gamma, S-curve, Exponential, Lognormal, Cubic, and Schneidewind. The failure data collected are from five popular open source software (OSS) products. The objective is to determine which of the seven models best fits the failure data of the selected OSS products as well as predicting the future failure pattern based on partial failure history. The outcome reveals that the best model fitting the failure data is not necessarily the best predictor model.",2010,0, 5211,Research on fast mode selection and frame algorithm of H.264,"The H.264 video coding standard adopts many new coding tools, such as variable block size, multiple reference frames, quarter-pixel-accuracy motion estimation, intra prediction, loop filter, etc. Using these coding tools, H.264 achieves significant performance. Especially for reference frame, block size, and prediction direction, it can adaptively do the selection. However, the encoding complexity increases tremendously. Among these tools, the macro block modes selection and the motion estimation contribute most to total encoding complexity. This paper focuses on complexity reduction in macro block modes selection. On the basis of analyzing some fast mode selection algorithms, a new algorithm is presented in order to reduce the encoding time without obviously reducing the encoding quality.",2010,0, 5212,Revisiting common bug prediction findings using effort-aware models,"Bug prediction models are often used to help allocate software quality assurance efforts (e.g. testing and code reviews). Mende and Koschke have recently proposed bug prediction models that are effort-aware. These models factor in the effort needed to review or test code when evaluating the effectiveness of prediction models, leading to more realistic performance evaluations. In this paper, we revisit two common findings in the bug prediction literature: 1) Process metrics (e.g., change history) outperform product metrics (e.g., LOC), 2) Package-level predictions outperform file-level predictions.
Through a case study on three projects from the Eclipse Foundation, we find that the first finding holds when effort is considered, while the second finding does not hold. These findings validate the practical significance of prior findings in the bug prediction literature and encourage their adoption in practice.",2010,0, 5213,Studying the impact of dependency network measures on software quality,"Dependency network measures capture various facets of the dependencies among software modules. For example, betweenness centrality measures how much information flows through a module compared to the rest of the network. Prior studies have shown that these measures are good predictors of post-release failures. However, these studies did not explore the causes for such good performance and did not provide guidance for practitioners to avoid future bugs. In this paper, we closely examine the causes for such performance by replicating prior studies using data from the Eclipse project. Our study shows that a small subset of dependency network measures have a large impact on post-release failure, while other network measures have a very limited impact. We also analyze the benefit of bug prediction in reducing testing cost. Finally, we explore the practical implications of the important network measures.",2010,0, 5214,SQUANER: A framework for monitoring the quality of software systems,"Despite the large number of quality models and publicly available quality assessment tools like PMD, Checkstyle, or FindBugs, very few studies have investigated the use of quality models by developers in their daily activities. One reason for this lack of studies is the absence of integrated environments for monitoring the evolution of software quality. We propose SQUANER (Software QUality ANalyzER), a framework for monitoring the evolution of the quality of object-oriented systems. SQUANER connects directly to the SVN of a system, extracts the source code, and performs quality evaluations and fault predictions every time a commit is made by a developer. After quality analysis, a feedback is provided to developers with instructions on how to improve their code.",2010,0, 5215,A human study of fault localization accuracy,"Localizing and repairing defects are critical software engineering activities. Not all programs and not all bugs are equally easy to debug, however. We present formal models, backed by a human study involving 65 participants (from both academia and industry) and 1830 total judgments, relating various software- and defect-related features to human accuracy at locating errors. Our study involves example code from Java textbooks, helping us to control for both readability and complexity. We find that certain types of defects are much harder for humans to locate accurately. For example, humans are over five times more accurate at locating “extra statements�?than “missing statements�?based on experimental observation. We also find that, independent of the type of defect involved, certain code contexts are harder to debug than others. For example, humans are over three times more accurate at finding defects in code that provides an array abstraction than in code that provides a tree abstraction. We identify and analyze code features that are predictive of human fault localization accuracy.
Finally, we present a formal model of debugging accuracy based on those source code features that have a statistically significant correlation with human performance.",2010,0, 5216,An approach to improving software inspections performance,Software inspections allow finding and removing defects close to their point of injection and are considered a cheap and effective way to detect and remove defects. A lot of research work has focused on understanding the sources of variability and improving software inspections performance. In this paper we studied the impact of inspection review rate in process performance. The study was carried out in an industrial context effort of bridging the gap from CMMI level 3 to level 5. We supported a decision for process change and improvement based on statistical significant information. Study results led us to conclude that review rate is an important factor affecting code inspections performance and that the applicability of statistical methods was useful in modeling and predicting process performance.,2010,0, 5217,Physical and conceptual identifier dispersion: Measures and relation to fault proneness,"Poorly-chosen identifiers have been reported in the literature as misleading and increasing the program comprehension effort. Identifiers are composed of terms, which can be dictionary words, acronyms, contractions, or simple strings. We conjecture that the use of identical terms in different contexts may increase the risk of faults. We investigate our conjecture using a measure combining term entropy and term context coverage to study whether certain terms increase the odds ratios of methods to be fault-prone. Entropy measures the physical dispersion of terms in a program: the higher the entropy, the more scattered across the program the terms. Context coverage measures the conceptual dispersion of terms: the higher their context coverage, the more unrelated the methods using them. We compute term entropy and context coverage of terms extracted from identifiers in Rhino 1.4R3 and ArgoUML 0.16. We show statistically that methods containing terms with high entropy and context coverage are more fault-prone than others.",2010,0, 5218,Influences of different excitation parameters upon PEC testing for deep-layered defect detection with rectangular sensor,"In pulsed eddy current testing, repetitive excitation signals with different parameters: duty-cycle, frequency and amplitude have different response representations. This work studies the influences of different excitation parameters on pulsed eddy current testing for deep-layered defects detection of stratified samples with rectangular sensor. The sensor had been proved to be superior in quantification and classification of defects in multi-layered structures compared with traditional circular ones. Experimental results show necessities to optimize the parameters of pulsed excitation signal, and advantages of obtaining better performances to enhance the POD of PEC testing.",2010,0, 5219,FTA-based assessment methodology for adaptability of system under electromagnetic environment,"In order to develop a methodology for assessing the adaptability of system under electromagnetic environment especially RF environment, a Fault-Tree-Analysis-based method is proposed considering the possible performance degradation, disfunction and fault. A component-level and a system-level assessment are performed. 
An approach to describing logical and hypotactic relation of nodes in Fault-Tree Analysis is designed and illustrated in detail. A software platform is developed to implement the system-level assessment.",2010,0, 5220,Intelligent and innovative monitoring of water treatment plant in large Gas Refinery,"A PLC can communicate through its inputs and output signals with the plant. The capabilities of PLCs have developed over the years, with reliability, performance, and operational. Developing monitoring in Automation Control System is a major industrial concern since those systems are more and more complex and involved in many safety critical application fields. The Automation system is being widely used in Power, Oil and Gas, and Petrochemicals for monitoring, advance process control and various other functions. Automation system is operated either in single mode and or it is interfaced with other systems such as PLC, SCADA, and Etc for enhanced functionality. Water treatment plants have to provide good water quality and at the same time low operational costs. Owing to various physical, chemical and biological interactions water treatment processes are often difficult to handle and reliable predictions for the course of processes are difficult to obtain. This paper focuses on an innovative and intelligent monitoring system for Water Treatment Plant that has been successfully designed, implemented and commissioned for large Gas Refinery after case study in international standards.",2010,0, 5221,A .NET framework for an integrated fault diagnosis and failure prognosis architecture,"This paper presents a .NET framework as the integrating software platform linking all constituent modules of the fault diagnosis and failure prognosis architecture. The inherent characteristics of the .NET framework provide the proposed system with a generic architecture for fault diagnosis and failure prognosis for a variety of applications. Functioning as data processing, feature extraction, fault diagnosis and failure prognosis, the corresponding modules in the system are built as .NET components that are developed separately and independently in any of the .NET languages. With the use of Bayesian estimation theory, a generic particle-filtering-based framework is integrated in the system for fault diagnosis and failure prognosis. The system is tested in two different applications - bearing spalling fault diagnosis and failure prognosis and brushless DC motor turn-to-turn winding fault diagnosis. The results suggest that the system is capable of meeting performance requirements specified by both the developer and the user for a variety of engineering systems.",2010,0, 5222,Simulation and measurement of back side etched inductors,Deep-silicon etching was applied to an inductor in a 0.25 μm SiGe:C BiCMOS process. Significant increase of quality factor was achieved. Different simulations of the inductor were done with a planar EM simulator. The way how to handle non-homogenous dielectrics in a planar EM simulator is shown. A good agreement between measurement and simulation can be seen. The effect of back side etching is well predicted by EM simulation.,2010,0, 5223,Preprocessing of Slovak Blog Articles for Clustering,"Web content clustering is very important part of topic detection and tracking issue. In our paper we focus on pre-processing phase of web content clustering. We focus on blog articles published in Slovak language. 
We evaluate the impact of different data pre-processing methods on the success of blog clustering. We found out that applying various text data manipulation techniques in preprocessing can improve the quality of clusters. The quality of clusters is measured by traditional clustering metrics like precision, recall and F-measure.",2010,0, 5224,An Anomaly Detection Framework for Autonomic Management of Compute Cloud Systems,"In large-scale compute cloud systems, component failures become norms instead of exceptions. Failure occurrence as well as its impact on system performance and operation costs are becoming an increasingly important concern to system designers and administrators. When a system fails to function properly, health-related data are valuable for troubleshooting. However, it is challenging to effectively detect anomalies from the voluminous amount of noisy, high-dimensional data. The traditional manual approach is time-consuming, error-prone, and not scalable. In this paper, we present an autonomic mechanism for anomaly detection in compute cloud systems. A set of techniques is presented to automatically analyze collected data: data transformation to construct a uniform data format for data analysis, feature extraction to reduce data size, and unsupervised learning to detect the nodes acting differently from others. We evaluate our prototype implementation on an institute-wide compute cloud environment. The results show that our mechanism can effectively detect faulty nodes with high accuracy and low computation overhead.",2010,0, 5225,System Level Hardening by Computing with Matrices,"Continuous advances in transistor manufacturing have enabled technology scaling along the years, sustaining Moore's law. As transistor sizes rapidly shrink, and voltage scales, the amount of charge in a node also rapidly decreases. A particle hitting the core will probably cause a transient fault to span over several clock cycles. In this scenario, embedded systems using state-of-the-art technologies will face the challenge of operating in an environment susceptible to multiple errors, but with restricted resources available to deploy fault-tolerance, as these techniques severely increase power consumption. One possible solution to this problem is the adoption of software based fault-tolerance at the system level, aiming at reduced energy levels to ensure reliability and low energy dissipation. In this paper, we claim the detection and correction of errors on generic data structures at system level by using matrices to encode any program and algorithm. With such encoding, it is possible to employ established techniques of detection and correction of errors occurring in matrices, running with negligible overhead in power and energy. We evaluated this proposal using two case studies significant for the embedded system domain. Using the proposed approach, we observed in some cases an overhead of only 5% in performance and 8% in program size.",2010,0, 5226,Simulation of High-Performance Memory Allocators,"Current general-purpose memory allocators do not provide sufficient speed or flexibility for modern high-performance applications. To optimize metrics like performance, memory usage and energy consumption, software engineers often write custom allocators from scratch, which is a difficult and error-prone process. In this paper, we present a flexible and efficient simulator to study Dynamic Memory Managers (DMMs), a composition of one or more memory allocators.
This novel approach allows programmers to simulate custom and general DMMs, which can be composed without incurring any additional runtime overhead or additional programming cost. We show that this infrastructure simplifies DMM construction, mainly because the target application does not need to be compiled every time a new DMM must be evaluated. Within a search procedure, the system designer can choose the ""best"" allocator by simulation for a particular target application. In our evaluation, we show that our scheme will deliver better performance, less memory usage and less energy consumption than single memory allocators.",2010,0, 5227,Software Fault Prediction Models for Web Applications,"Our daily life increasingly relies on Web applications. Web applications provide us with abundant services to support our everyday activities. As a result, quality assurance for Web applications is becoming important and has gained much attention from software engineering community. In recent years, in order to enhance software quality, many software fault prediction models have been constructed to predict which software modules are likely to be faulty during operations. Such models can be utilized to raise the effectiveness of software testing activities and reduce project risks. Although current fault prediction models can be applied to predict faulty modules of Web applications, one limitation of them is that they do not consider particular characteristics of Web applications. In this paper, we try to build fault prediction models aiming for Web applications after analyzing major characteristics which may impact on their quality. The experimental study shows that our approach achieves very promising results.",2010,0, 5228,The Right Tool for the Right Job: Assessing Model Transformation Quality,"Model-Driven Engineering (MDE) is a software engineering discipline in which models play a central role. One of the key concepts of MDE is model transformations. Because of the crucial role of model transformations in MDE, they have to be treated in a similar way as traditional software artifacts. They have to be used by multiple developers, they have to be maintained according to changing requirements and they should preferably be reused. It is therefore necessary to define and assess their quality. In this paper, we give two definitions for two different views on the quality of model transformations. We will also give some examples of quality assessment techniques for model transformations. The paper concludes with an argument about which type of quality assessment technique is most suitable for either of the views on model transformation quality.",2010,0, 5229,A Parallel Implementation Strategy of Adaptive Testing,"Software testing is an important part of software development and can account for more than 50% of the development cost. In order to shorten the testing time and reduce cost, parallel mechanism is introduced to handle multiple testing tasks on the same or different software under tests (SUTs) simultaneously. On the other hand, advanced testing techniques, such as Adaptive Testing (AT) have been proposed to improve the efficiency of traditional random/partition testing. Inspired by the parallel computing technique, we present a parallel implementation of the Adaptive Testing techniques, namely the AT-P strategy. 
Experiments on the Space program are conducted and data indicates that the AT-P strategy can notably improve the defect detection efficiency of the original adaptive testing technique, whereas providing stable performance enhancement over multiple testing runs.",2010,0, 5230,Influence of Software-Based NIDS on FEC-Protected Uncompressed Video Transmissions,"This study investigates the tradeoff between security and the usability or performance of multimedia services in the case of a software-based network intrusion detection system and uncompressed video transmissions. Uncompressed video transmissions are especially interesting in this context as they are not subjected to jitter and delays caused by compression mechanisms and can therefore be expected to be more robust when it comes to added delay variations and subsequent distortions stemming from security devices. The paper focuses on the special case where video is protected by Forward Error Correction mechanisms, which allow reordering of the original packet sequence after it may have been altered by network impairments. Various Forward Error Correction mechanisms are applied in the presence of a network intrusion detection system and its impact on the resulting video quality is investigated.",2010,0, 5231,Simulation and measurement of back side etched inductors,Deep-silicon etching was applied to an inductor in a 0.25 μm SiGe:C BiCMOS process. Significant increase of quality factor was achieved. Different simulations of the inductor were done with a planar EM simulator. The way how to handle non-homogenous dielectrics in a planar EM simulator is shown. A good agreement between measurement and simulation can be seen. The effect of back side etching is well predicted by EM simulation.,2010,0,5222 5232,The SQALE Analysis Model: An Analysis Model Compliant with the Representation Condition for Assessing the Quality of Software Source Code,This paper presents the analysis model of the assessment method of software source code SQALE (Software Quality Assessment Based on Lifecycle Expectations). We explain what brought us to develop consolidation rules based in remediation indices. We describe how the analysis model can be implemented in practice.,2010,0, 5233,The study of Harmonic suppression strategies based on permanent-magnet synchronous servo system,"The Harmonics will make the motor generate additional losses, reduce energy efficiency and power factor, affect the normal operation of equipments, and also make the motor generate an electrical fault such as mechanical vibrations and noise, and also interfere with the nearby devices, and reduce their communication quality and EMC, EMI performances. In order to solve the existence of harmonic problems in AC servo motor control system, this paper proposes a low-cost harmonics detection method based on the DSP TMS320F2812, which also uses the corresponding harmonics suppression strategy to achieve purified control voltage and improves the stability and power efficiency of the motor.",2010,0, 5234,An improved tone mapping algorithm for High Dynamic Range images,"Real world scenes contain a large range of light intensities. To adapt to display device, High Dynamic Range (HDR) image should be converted into Low Dynamic Range (LDR) image. A common task of tone mapping algorithms is to reproduce high dynamic range images on low dynamic range display devices. In this paper, a new tone mapping algorithm is proposed for high dynamic range images. 
Based on a probabilistic model proposed for high dynamic range image tone reproduction, the proposed method uses a logarithmic normal distribution instead of a normal distribution. Therefore, the algorithm can preserve visibility and contrast impression of high dynamic range scenes on common display devices. Experimental results show the superior performance of the approach in terms of visual quality.",2010,0, 5235,Agent-Oriented Designs for a Self Healing Smart Grid,"Electrical grids are highly complex and dynamic systems that can be unreliable, insecure, and inefficient in serving end consumers. The promise of Smart Grids lies in the architecting and developing of intelligent distributed and networked systems for automated monitoring and controlling of the grid to improve performance. We have designed an agent-oriented architecture for a simulation which can help in understanding Smart Grid issues and in identifying ways to improve the electrical grid. We focus primarily on the self-healing problem, which concerns methodologies for activating control solutions to take preventative actions or to handle problems after they occur. We present software design issues that must be considered in producing a system that is flexible, adaptable and scalable. Agent-based systems provide a paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated computer programs that can act autonomously and communicate with each other across open and distributed environments. We present design issues that are appropriate in developing a Multi-agent System (MAS) for the grid. Our MAS is implemented in the Java Agent Development Framework (JADE). Our Smart Grid Simulation uses many types of agents to acquire and monitor data, support decision making, and represent devices, controls, alternative power sources, the environment, management functions, and user interfaces.",2010,0, 5236,Comparison research of two typical UML-class-diagram metrics: Experimental software engineering,"Measuring UML class diagram complexity can help developers select the one with the lowest complexity from a variety of different designs with the same functionality; it can also provide guidance for developing high quality class diagrams. This paper compared the advantages and disadvantages of two typical class-diagram complexity metrics based on statistics and entropy-distance respectively from the view of newly experimental software engineering. 27 class diagrams related to a banking system were classified and their understandability, analyzability and maintainability were predicted by means of the C5.0 algorithm in the well-known software SPSS Clementine. Results showed that the UML class diagram complexity metric based on statistics has higher classification accuracy than that based on entropy-distance.",2010,0, 5237,Development of a real-time machine vision system for detecting defeats of cord fabrics,"Automatic detection techniques based on machine vision can be used in the fabric industry for quality control, which constantly pursues intelligent methods to replace human inspections of product. This work introduces the principal components of a real-time machine vision system for defect detection of cord fabrics, which is usually a challenging task in practice. The work aims at solving some difficulties usually encountered in such kinds of tasks. The design and implementation of the algorithm, software and hardware are introduced.
Based on the Gabor wavelet techniques, the system can automatically detect regular texture defects. Our experiments show the proposed algorithm is favorably suited for detecting several types of cord fabric defects. The system testing has been carried in both on-line and off-line situations. The corresponding results show the system has good performance with high detection accuracy, quick response and strong robustness.",2010,0, 5238,LMI-based aircraft engine sensor fault diagnosis using a bank of robust Hâˆ?/inf> filters,"This paper proposes a sensor diagnostic approach based on a bank of Hâˆ?/sub> filters for aircraft gas turbine engines, taking into account real-time and anti-disturbance requirements to run on an aircraft on-board computer. First of all, by defining an Hâˆ?/sub> performance index to describe the disturbance attenuation performance of a dynamic system, the robust Hâˆ?/sub> filter design problem has been formulated in the linear matrix inequality (LMI) framework and been efficiently solved using off-the-shelf software. Then we make use of a bank of Hâˆ?/sub> filters for the sensor fault detection and isolation based on the logic status of a group of residuals. Finally, an illustrative simulation has been given to verify the effectiveness of the proposed design method. The benefit of this paper is to bridge the gap between the Hâˆ?/sub> filter theory and engineering applications for reliable diagnostics of aircraft engines.",2010,0, 5239,The study of virtual testing design and analysis method on warhead,"Virtual experiment is an approach to utilize numerical computation to study the complex physics processes, and is used widely in the areas of weapon system development and performance evaluation. The virtual experiment for warhead aims at studying various conditions of damage on target by warhead, analyzing the factors of warhead power, and provide the basis for the warhead design, with high quality cluster systems and high precise FEA software. This paper is based on virtual experiment over the shaped-charge warhead, and study the virtual experiment design and analysis methods.",2010,0, 5240,Practical Aspects in Analyzing and Sharing the Results of Experimental Evaluation,"Dependability evaluation techniques such as the ones based on testing, or on the analysis of field data on computer faults, are a fundamental process in assessing complex and critical systems. Recently a new approach has been proposed consisting in collecting the row data produced in the experimental evaluation and store it in a multidimensional data structure. This paper reports the work in progress activities of the entire process of collecting, storing and analyzing the experimental data in order to perform a sound experimental evaluation. This is done through describing the various steps on a running example.",2010,0, 5241,Quantifying Resiliency of IaaS Cloud,"Cloud based services may experience changes - internal, external, large, small - at any time. Predicting and quantifying the effects on the quality-of-service during and after a change are important in the resiliency assessment of a cloud based service. In this paper, we quantify the resiliency of infrastructure-as-a-service (IaaS) cloud when subject to changes in demand and available capacity. Using a stochastic reward net based model for provisioning and servicing requests in a IaaS cloud, we quantify the resiliency of IaaS cloud w.r.t. 
two key performance measures - job rejection rate and provisioning response delay.",2010,0, 5242,A Multi-step Simulation Approach toward Secure Fault Tolerant System Evaluation,"As new techniques of fault tolerance and security emerge, so does the need for suitable tools to evaluate them. Generally, the security of a system can be estimated and verified via logical test cases, but the performance overhead of security algorithms on a system needs to be numerically analyzed. The diversity in security methods and design of fault tolerant systems make it impossible for researchers to come up with a standard, affordable and openly available simulation tool, evaluation framework or an experimental test-bed. Therefore, researchers choose from a wide range of available modeling-based, implementation-based or simulation-based approaches in order to evaluate their designs. All of these approaches have certain merits and several drawbacks. For instance, development of a system prototype provides a more accurate system analysis but unlike simulation, it is not highly scalable. This paper presents a multi-step, simulation-based performance evaluation methodology for secure fault tolerant systems. We use a divide-and-conquer approach to model the entire secure system in a way that allows the use of different analytical tools at different levels of granularity. This evaluation procedure tries to strike a balance between the efficiency, effort, cost and accuracy of a system's performance analysis. We demonstrate this approach in a step-by-step manner by analyzing the performance of a secure and fault tolerant system using a JAVA implementation in conjunction with the ARENA simulation.",2010,0, 5243,CCDA: Correcting control-flow and data errors automatically,"This paper presents an efficient software technique to detect and correct control-flow errors through addition of redundant codes in a given program. The key innovation performed in the proposed technique is detection and correction of the control-flow errors using both control-flow graph and data-flow graph. Using this technique, most of control-flow errors in the program are detected first, and next corrected, automatically; so, both errors in the control-flow and program data which is caused by control-flow errors can be corrected. In order to evaluate the proposed technique, a post compiler is used, so that the technique can be applied to every 80×86 binaries, transparently. Three benchmarks quick sort, matrix multiplication and linked list are used, and a total of 5000 transient faults are injected on several executable points in each program. The experimental results demonstrate that at least 93% of the control-flow errors can be detected and corrected by the proposed technique automatically without any data error generation. Moreover, the performance and memory overheads of the technique are noticeably less than traditional techniques.",2010,0, 5244,Measuring web service interfaces,"The following short paper describes a tool supported method for measuring web service interfaces. The goal is to assess the complexity and quality of these interfaces as well as to determine their size for estimating evolution and testing effort. Besides the metrics for quantity, quality and complexity, rules are defined for ensuring maintainability. In the end a tool - WSDAudit - is described which the author has developed for the static analysis of web service definitions. 
The WSDL schemas are automatically audited and measured for quality assurance and cost estimation. Work is underway to verify them against the BPEL procedures from which they are invoked.",2010,0, 5245,Routing metric estimation in multi-hop wireless networks with fading channel,"In this paper we describe a statistical analysis of a typical routing metric, Expected Transmission Time (ETT), used for path evaluation in multi-hop wireless networks. The need of statistics description is linked with the presence of network parameters uncertainty due to both a fading channel and ambiguity of nodes position. This analysis led us to define a statistical estimator, which allows controlling all these effects, in order to improve the quality of route choice during routing decisions.",2010,0, 5246,Quality of signaling (QoSg) metrics for evaluating SIP transaction performance,"Service quality and user-perceived experience has been a networking research topic for many years. Whereas most related work concentrates on the quality of media, only very few papers, however, discuss the performance of the signaling for those media. This paper addresses the performance of the Session Initiation Protocol (SIP) which is very popular with Next Generation Networks (NGN). A collection of metrics, the so-called Quality of Signalling (QoSg), is introduced for measuring SIP signaling delays. In contrast to traditional end-to-end performance metrics, QoSg aims at evaluating individual SIP transactions. Therefore, this approach can be applied anywhere in the SIP network and allows for additional insight, for instance regarding congestion detection or SLA monitoring.",2010,0, 5247,Application of a new fast-responding estimation approach in DVR to mitigate voltage sags in harmonic distortion conditions,"The main duty of a Dynamic Voltage Restorer (DVR) is protection of sensitive loads against voltage disturbances, especially voltage sags and swells. DVR should possess necessary response time in restoring load bus voltage to avoid load interruptions. One of the most important factors that affects the performance of DVR is the estimation approach which is used in control unit of this compensator. This paper presents a new fast-responding algorithm for the on-line estimation of symmetrical components in harmonic distortion conditions, which provides adequate response time and accuracy for dynamic voltage restorer. Furthermore, performance of this approach would be compared with some well-known estimation techniques. Application of the proposed estimation approach is realized in DVR by representing a new control unit for compensation of different voltage sags and swells. Finally, the simulation results, which obtained using PSCAD/EMTDC software package confirm efficiency of the suggested control unit.",2010,0, 5248,"Monitoring and unbalance mitigation of harmonic current symmetrical components, using double Complex-ADALINE algorithm","Harmonics tracking as well as symmetrical components identification are of the great importance in Power Quality (PQ) evaluation and improvement. In this paper, a new approach for fast and accurate estimation of fundamental and harmonic symmetrical components is presented. Firstly, the three-phase input signals are simplified into a real-imaginary system using Park vector. Then, the resulted signal is fed to a Complex-ADALINE (C-ADALINE) structure for the estimation of symmetrical components of both fundamental and harmonics of the input. 
Also, zero sequences of the harmonics are extracted by another C-ADALINE unit. For simplicity, both of the C-ADALINE structures are organized in one computational system, which is called Double C-ADALINE. Finally, two case studies are conducted on MATLAB software package to investigate the performance of the suggested technique and compare it with conventional ADALINE. The simulation results indicate that the proposed system shows an excellent performance in monitoring the harmonics and symmetrical components in condition of dynamic changes of artificial three-phase signals, as well as control strategy in a D-STATCOM to harmonic cancellation, unbalance mitigation and reactive power compensation in a test distribution network.",2010,0, 5249,"New phasor estimator in the presence of harmonics, dc offset, and interharmonics","This paper proposes the use of Artificial Neural Networks (ANN) to estimate the magnitude and phase of fundamental component of sinusoidal signals in the presence of harmonics, sub-harmonics and DC offset. The proposed methodology uses a preprocessing that is able to generate a signal that represents the influence of sub-harmonics and DC offset in the fundamental component. This signal is used as input of ANN, which estimate the influence of that signal in quadrature components. Using this information, corrections can be made in quadrature components then the real value of phasor of the fundamental component is estimated. The performance of the proposed algorithm was compared with classical methods such as DFT, using one and two cycles, and LES. The results showed that the proposed method is accurate and fast. The methodology can be used as a phasor estimator of system with poor Power Quality indices for monitoring, control and protection applications.",2010,0, 5250,Motor function assessment using wearable inertial sensors,"We present an approach to wearable sensor-based assessment of motor function in individuals post stroke. We make use of one on-body inertial measurement unit (IMU) to automate the functional ability (FA) scoring of the Wolf Motor Function Test (WMFT). WMFT is an assessment instrument used to determine the functional motor capabilities of individuals post stroke. It is comprised of 17 tasks, 15 of which are rated according to performance time and quality of motion. We present signal processing and machine learning tools to estimate the WMFT FA scores of the 15 tasks using IMU data. We treat this as a classification problem in multidimensional feature space and use a supervised learning approach.",2010,0, 5251,Automatic detection of schwalbe's line in the anterior chamber angle of the eye using HD-OCT images,"Angle-closure glaucoma is a major cause of blindness in Asia and could be detected by measuring the anterior chamber angle (ACA) using gonioscopy, ultrasound biomicroscopy or anterior segment (AS) optical coherence tomography (OCT). The current software in the VisanteTM OCT system by Zeiss is based on manual labeling of the scleral spur, cornea and iris and is a tedious process for ophthalmologists. Furthermore, the scleral spur can not be identified in about 20% to 30% of OCT images and thus measurements of the ACA are not reliable. However, high definition (HD) OCT has identified a more consistent landmark: Schwalbe's line. This paper presents a novel algorithm which automatically detects Schwalbe's line in HD-OCT scans. 
The average deviation between the values detected using our algorithm and those labeled by the ophthalmologist is less than 0.5% and 0.35% in the horizontal and vertical image dimension, respectively. Furthermore, we propose a new measurement to quantify ACA which is defined as Schwalbe's line bounded area (SLBA).",2010,0, 5252,Quality factors of business value and service level measurement for SOA,"This paper presents the quality factors of web services with definition, classification, and sub-factors for business and service level measurement. Web services, main enabler for SOA, usually have characteristics distinguished from general stand-alone software. They are provided in different ownership domain by network. They can also adopt various binding mechanism and be loosely-coupled, platform independent, and use standard protocols. As a result, a web service system requires its own quality factors unlike installation-based software. For instance, as the quality of web services can be altered in real-time according to the change of the service provider, considering real-time property of web services is very meaningful in describing the web services quality. This paper focuses especially to the two quality factors: business value and service level measurement, affecting real world business activities and a service performance.",2010,0,5369 5253,Automatic code generation for solvers of cardiac cellular membrane dynamics in GPUs,"The modeling of the electrical activity of the heart is of great medical and scientific interest, as it provides a way to get a better understanding of the related biophysical phenomena, allows the development of new techniques for diagnoses and serves as a platform for drug tests. However, due to the multi-scale nature of the underlying processes, the simulations of the cardiac bioelectric activity are still a computational challenge. In addition to that, the implementation of these computer models is a time consuming and error prone process. In this work we present a tool for prototyping ordinary differential equations (ODEs) in the area of cardiac modeling that aim to provide the automatic generation of high performance solvers tailored to the new hardware architecture of the graphic processing units (GPUs). The performance of these automatic solvers was evaluated using four different cardiac myocyte models. The GPU version of the solvers were between 75 and 290 times faster than the CPU versions.",2010,0, 5254,SeyeS - support system for preventing the development of ocular disabilities in leprosy,"Leprosy is an infectious disease caused by Mycobacterium Leprae, and generally compromises neural fibers, leading to the development of disabilities. These limit daily activities or social life. In leprosy, the study of disability considered functional (physical) and activity limitations; and social participation. These are measured respectively by EHF and SALSA scales; by and PARTICIPATION SCALE. The objective of this work was to propose a support system, SeyeS, to eyes disabilities development and progression identification, applying Bayesians network - BN's. It is expected that the proposed system be applied in monitoring the patient during treatment and after therapeutic cure of leprosy. SeyeS presented specificity 1 and sensitivity 0.6 in the identification of ocular disabilities development. With Seyes was discovered that the presence of trichiasis and lagophthalmos, tend to increase the probability of developing more disabilities. 
Otherwise, characteristics such as cataracts tend to decrease the development of other disabilities, considering that medical interventions could reduce them. The main importance of this system is to indicate what should be monitored, and which elements need interventions to avoid increasing the patient's ocular disabilities.",2010,0, 5255,An Adaptive Failure Detection Method with Controllable QoS for Distributed Storage System,"Considering that different applications have different requirements for storage node performance, an adaptive failure detection method with controllable QoS for distributed storage systems is proposed. The delay characteristic of heartbeat messages is summarized. The relational expressions of neighbor heartbeat messages' arrival times in different network states are established. The QoS parameter formulas for different network states are proposed. The controllability of QoS parameters is analyzed. The algorithm of the method is designed. The experimental results show that the method realizes the balance control between detection speed and detection accuracy by setting safe remaining time with a certain value, and has adaptability, achieving a high detection speed with fewer mistakes.",2010,0, 5256,Hybrid Probabilistic Relational Models for System Quality Analysis,"The formalism Probabilistic Relational Models (PRM) couples discrete Bayesian Networks with a modeling formalism similar to UML class diagrams and has been used for architecture analysis. PRMs are well-suited to perform architecture analysis with respect to system qualities since they support both modeling and analysis within the same formalism. A particular strength of PRMs is the ability to perform meaningful analysis of domains where there is a high level of uncertainty, as is often the case when performing system quality analysis. However, the use of discrete Bayesian networks in PRMs complicates the analysis of continuous phenomena. The main contribution of this paper is the Hybrid Probabilistic Relational Models (HPRM) formalism which extends PRMs to enable continuous analysis thus extending the applicability for architecture analysis and especially for trade-off analysis of system qualities. HPRMs use hybrid Bayesian networks which allow combinations of discrete and continuous variables. In addition to presenting the HPRM formalism, the paper contains an example which details the use of HPRMs for architecture trade-off analysis.",2010,0, 5257,Novel Quality Measures for Image Fusion Based on Structural Similarity and Visual Attention Mechanism,"A novel objective quality measure for image fusion based on structural similarity and the visual attention mechanism (VAM) is presented. By giving higher weight to the salient areas in the input images, the quality measure can estimate how much visually meaningful information is preserved in the fused image. The correlation analysis between the objective measures and subjective evaluation showed that our measures are more consistent with human subjective evaluation.",2010,0, 5258,Predicting Faults in High Assurance Software,"Reducing the number of latent software defects is a development goal that is particularly applicable to high assurance software systems. For such systems, the software measurement and defect data is highly skewed toward the not-fault-prone program modules, i.e., the number of fault-prone modules is relatively very small. The skewed data problem, also known as class imbalance, poses a unique challenge when training a software quality estimation model.
However, practitioners and researchers often build defect prediction models without regard to the skewed data problem. In high assurance systems, the class imbalance problem must be addressed when building defect predictors. This study investigates the roughly balanced bagging (RBBag) algorithm for building software quality models with data sets that suffer from class imbalance. The algorithm combines bagging and data sampling into one technique. A case study of 15 software measurement data sets from different real-world high assurance systems is used in our investigation of the RBBag algorithm. Two commonly used classification algorithms in the software engineering domain, Naive Bayes and C4.5 decision tree, are combined with RBBag for building the software quality models. The results demonstrate that defect prediction models based on the RBBag algorithm significantly outperform models built without any bagging or data sampling. The RBBag algorithm provides the analyst with a tool for effectively addressing class imbalance when training defect predictors during high assurance software development.",2010,0, 5259,A Framework for Qualitative and Quantitative Formal Model-Based Safety Analysis,"In model-based safety analysis, both qualitative aspects (i.e., what must go wrong for a system failure) and quantitative aspects (i.e., how probable is a system failure) are very important. For both aspects, methods and tools are available. However, until now, for each aspect new and independent models had to be built for analysis. This paper proposes the SAML framework as a formal foundation for both qualitative and quantitative formal model-based safety analysis. The main advantage of SAML is the combination of qualitative and quantitative formal semantics which allows different analyses on the same model. This increases the confidence in the analysis results, simplifies modeling and is less error-prone. The SAML framework is tool-independent. As a proof of concept, we present a sound transformation of the formalism into two state-of-the-art model-checking notations. Prototypical tool support for the sound transformation of SAML into PRISM and MRMC for probabilistic analysis as well as different variants of the SMV model checker for qualitative analysis is currently being developed.",2010,0, 5260,Request Path Driven Model for Performance Fault Diagnoses,"Locating and diagnosing performance faults in distributed systems is crucial but challenging. Distributed systems are increasingly complex, full of various correlations and dependencies, and exhibit dramatic dynamics. All these make traditional approaches prone to high false alarms. In this paper, we propose a novel system modeling technique, which encodes components' dynamic dependencies and behavior characteristics into the system's meta-model and takes it as a unifying framework to deploy components' sub-models. We propose an automatic analysis approach to distill, from request travel paths, request path signatures, the essential information about components' dynamic behaviors, and use it to induce the meta-model with a Bayesian network, and then use the model for fault location and diagnosis. We take up fault-injection experiments with RUBiS, a TPC-W-like benchmark, simulating eBay.com. The results indicate that our modeling approach provides effective problem diagnosis, i.e., the Bayesian network technique is effective for fault detecting and pinpointing, in terms of request tracing context.
Moreover, meta-model induced with request paths, provides an effective guidance for learning statistical correlations among metrics across the system, which effectively avoid 'false alarms' in fault pinpointing. As a case study, we construct a proactive recovery framework, which integrate our system modeling technique with software rejuvenation technique to guarantee system's quality of services.",2010,0, 5261,Fault Diagnosis System of Locomotive Axle's Track Based on Virtual Instrument,"Railway vehicle is usually working in hazardous environment. The operational status of axle, which is the railway vehicle's main components to run, affects the safe operation of the railway vehicle directly. Over the years, the railway of our country has remained low equipment-rate, high using-rate and high intensity-transport. In addition, because of the need of the rapid development of economy, our country has raised the speed of the railway vehicle, which leads to the railway vehicle's axle wearing and tearing seriously, the life shortened and accidents increasing significantly. So, it is very necessary and urgent to develop the detection system because axle is related to the safety of the railway vehicle. Compared with traditional instrument, the virtual instrument has characteristics of openness, easy using and high cost performance. It has already been widely used in detection systems at present. Based on LabVIEW which is a developing platform software of NI Company, the paper has designed a kind of fault diagnosis system on locomotive axle's track.",2010,0, 5262,Empirical Evaluation of Factors Affecting Distinction between Failing and Passing Executions,"Information captured in software execution profiles can benefit verification activities by supporting more cost-effective fault localization and execution classification. This paper proposes an experimental design which utilizes execution information to quantify the effect of factors such as different programs and fault inclusions on the distinction between passed and failed execution profiles. For this controlled experiment we use well-known, benchmark-like programs. In addition to experimentation, our empirical evaluation includes case studies of open source programs having more complex fault models. The results show that metrics reflecting distinction between failing and passing executions are affected more by program than by faults included.",2010,0, 5263,The Impact of Coupling on the Fault-Proneness of Aspect-Oriented Programs: An Empirical Study,"Coupling in software applications is often used as an indicator of external quality attributes such as fault-proneness. In fact, the correlation of coupling metrics and faults in object oriented programs has been widely studied. However, there is very limited knowledge about which coupling properties in aspect-oriented programming (AOP) are effective indicators of faults in modules. Existing coupling metrics do not take into account the specificities of AOP mechanisms. As a result, these metrics are unlikely to provide optimal predictions of pivotal quality attributes such as fault-proneness. This impacts further by restraining the assessments of AOP empirical studies. To address these issues, this paper presents an empirical study to evaluate the impact of coupling sourced from AOP-specific mechanisms. We utilise a novel set of coupling metrics to predict fault occurrences in aspect-oriented programs. We also compare these new metrics against previously proposed metrics for AOP. 
More specifically, we analyse faults from several releases of three AspectJ applications and perform statistical analyses to reveal the effectiveness of these metrics when predicting faults. Our study shows that a particular set of fine-grained directed coupling metrics have the potential to help create better fault prediction models for AO programs.",2010,0, 5264,On the Round Trip Path Testing Strategy,"A number of techniques have been proposed for state-based testing. One well-known technique (criterion) is to traverse the graph representing the state machine and generate a so-called transition tree, in an attempt to exercise round trip paths, i.e., paths that start and end in the same state without any other repeating state. Several hypotheses are made when one uses this criterion: exercising paths in the tree, which do not always trigger complete round trip paths, is equivalent to covering round-trip paths; different traversal algorithms are equivalent. In this paper we investigate whether these assumptions hold in practice, and if they do not, we investigate their consequences. Results also lead us to propose a new transition tree construction algorithm that results in higher efficiency and lower cost.",2010,0, 5265,Improving the Precision of Dependence-Based Defect Mining by Supervised Learning of Rule and Violation Graphs,"Previous work has shown that application of graph mining techniques to system dependence graphs improves the precision of automatic defect discovery by revealing subgraphs corresponding to implicit programming rules and to rule violations. However, developers must still confirm, edit, or discard reported rules and violations, which is both costly and error-prone. In order to reduce developer effort and further improve precision, we investigate the use of supervised learning models for classifying and ranking rule and violation subgraphs. In particular, we present and evaluate logistic regression models for rules and violations, respectively, which are based on general dependence-graph features. Our empirical results indicate that (i) use of these models can significantly improve the precision and recall of defect discovery, and (ii) our approach is superior to existing heuristic approaches to rule and violation ranking and to an existing static-warning classifier, and (iii) accurate models can be learned using only a few labeled examples.",2010,0, 5266,A Multi-factor Software Reliability Model Based on Logistic Regression,"This paper proposes a multi-factor software reliability model based on logistic regression and its effective statistical parameter estimation method. The proposed parameter estimation algorithm is composed of the algorithm used in the logistic regression and the EM (expectation-maximization) algorithm for discrete-time software reliability models. The multi-factor model deals with the metrics observed in testing phase (testing environmental factors), such as test coverage and the number of test workers, to predict the number of residual faults and other reliability measures. In general, the multi-factor model outperforms the traditional software reliability growth model like discrete-time non-homogeneous models in terms of data-fitting and prediction abilities. However, since it has a number of parameters, there is the problem in estimating model parameters. Our modeling framework and its estimation method are quite simpler than the existing methods, and are promising for expanding the applicability of multi-factor software reliability model. 
In numerical experiments, we examine the data-fitting ability of the proposed model by comparing it with the existing multi-factor models. The proposed method provides fitting ability similar to the existing multi-factor models, although the computational effort of parameter estimation is low.",2010,0, 5267,"Flexible, Any-Time Fault Tree Analysis with Component Logic Models","This article presents a novel approach to facilitating fault tree analysis during the development of software-controlled systems. Based on a component-oriented system model, it combines second-order probabilistic analysis and automatically generated default failure models with a level-of-detail concept to ensure early and continuous analysability of system failure behaviour with optimal effort, even in the presence of incomplete information and dissimilar levels of detail in different parts of an evolving system model. The viability and validity of the method are demonstrated by means of an experiment.",2010,0, 5268,Towards a Bayesian Approach in Modeling the Disclosure of Unique Security Faults in Open Source Projects,"Software security has both an objective and a subjective component. A lot of the information available about that today is focused on the security vulnerabilities and their disclosure. It is less frequent that security breaches and failure rates are reported, even in open source projects. Disclosure of security problems can take several forms. A disclosure can be accompanied by a release of the fix for the problem, or not. The latter category can be further divided into “voluntary” and “involuntary” security issues. In widely used software there is also considerable variability in the operational profile under which the software is used. This profile is further modified by attacks on the software that may be triggered by security disclosures. Therefore, a comprehensive model of software security qualities of a product needs to incorporate both objective measures, such as security problem disclosure, repair, and failure rates, as well as less objective metrics such as implied variability in the operational profile, influence of attacks, and subjective impressions of exposure and severity of the problems, etc. We show how a classical Bayesian model can be adapted for use in the security context. The model is discussed and assessed using data from three open source software projects. Our results show that the model is suitable for use with a certain subset of disclosed security faults, but that additional work will be needed to identify appropriate shape and scaling functions that would accurately reflect end-user perceptions associated with security problems.",2010,0, 5269,A Consistency Check Algorithm for Component-Based Refinements of Fault Trees,"The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becoming more and more complex, which makes it difficult to guarantee safety. It is undisputed that safety must be considered before the start of development, continue until decommissioning, and is particularly important during the design of the system and software architecture. An architecture must be able to avoid, detect, or mitigate all dangerous failures to a sufficient degree. For this purpose, the architectural design must be guided and verified by safety analyses.
However, state-of-the-art component-oriented or model-based architectural design approaches use different levels of abstraction to handle complexity. So, safety analyses must also be applied on different levels of abstraction, and it must be checked and guaranteed that they are consistent with each other, which is not supported by standard safety analyses. In this paper, we present a consistency check for CFTs that automatically detects commonalities and inconsistencies between fault trees of different levels of abstraction. This facilitates the application of safety analyses in top-down architectural designs and reduces effort.",2010,0, 5270,Using Search Methods for Selecting and Combining Software Sensors to Improve Fault Detection in Autonomic Systems,"Fault-detection approaches in autonomic systems typically rely on runtime software sensors to compute metrics for CPU utilization, memory usage, network throughput, and so on. One detection approach uses data collected by the runtime sensors to construct a convex-hull geometric object whose interior represents the normal execution of the monitored application. The approach detects faults by classifying the current application state as being either inside or outside of the convex hull. However, due to the computational complexity of creating a convex hull in multi-dimensional space, the convex-hull approach is limited to a few metrics. Therefore, not all sensors can be used to detect faults and so some must be dropped or combined with others. This paper compares the effectiveness of genetic-programming, genetic-algorithm, and random-search approaches in solving the problem of selecting sensors and combining them into metrics. These techniques are used to find 8 metrics that are derived from a set of 21 available sensors. The metrics are used to detect faults during the execution of a Java-based HTTP web server. The results of the search techniques are compared to two hand-crafted solutions specified by experts.",2010,0, 5271,A Quantitative Approach to Software Maintainability Prediction,"Software maintainability is one important aspect in the evaluation of software evolution of a software product. Due to the complexity of tracking maintenance behaviors, it is difficult to accurately predict the cost and risk of maintenance after delivery of software products. In an attempt to address this issue quantitatively, software maintainability is viewed as an inevitable evolution process driven by maintenance behaviors, given a health index at the time when a software product are delivered. A Hidden Markov Model (HMM) is used to simulate the maintenance behaviors shown as their possible occurrence probabilities. And software metrics is the measurement of the quality of a software product and its measurement results of a product being delivered are combined to form the health index of the product. The health index works as a weight on the process of maintenance behavior over time. When the occurrence probabilities of maintenance behaviors reach certain number which is reckoned as the indication of the deterioration status of a software product, the product can be regarded as being obsolete. Longer the time, better the maintainability would be.",2010,0, 5272,Search-based Prediction of Fault-slip-through in Large Software Projects,"A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. 
Therefore, determination of which software testing phases to focus improvements work on, has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better while GP performed better at integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases with GP showing more consistent performance across two of the four test phases.",2010,0, 5273,A Self-Adaptive and Fast Motion Estimation Search Method for H.264/AVC,"H.264/AVC is the outstanding and significant video compression standard developed by ITU-T/ISO/IEC Joint Video Team. Motion estimation (ME) plays a key role in H.264/AVC, it concerns greatly on computational complexity especially when using the full search (FS) algorithm. Although many fast ME algorithms have been proposed to reduce the huge calculation complexity instead of FS, the ME still can not satisfy the critical real-time application's needs. In this paper, a fast integer pixel variable block ME algorithm based on JVT which accepted UMHexagonS algorithm is presented for H.264/AVC encoder. With special considerations on the motion activity of the current macro-block, several adaptive search strategies have been utilized to significantly improve the video coding performance. The simulation results shows that the proposed algorithm maintains an unnoticeable quality loss in terms of PSNR on average compared with FS and reduces nearly 20% ME time compared with UMHexagonS while maintaining coding efficiency.",2010,0, 5274,A Three-Phases Byzantine Fault Tolerance Mechanism for HLA-Based Simulation,"A large scale HLA-based simulation (federation) is composed of a large number of simulation components (federates), which may be developed by different participants and executed at different locations. Byzantine failures, caused by malicious attacks and software/hardware bugs, might happen to federates and propagate in the federation execution. In this paper, a three-phases (i.e., failure detection, failure location, and failure recovery) Byzantine Fault Tolerance (BFT) mechanism is proposed based on the decoupled federate architecture. By combining the replication, check pointing and message logging techniques, some redundant executions of federate replicas are avoided. The BFT mechanism is implemented using both Barrier and No-Barrier federate replication structures. Protocols are also developed to remove the epidemic effect caused by Byzantine failures. 
As the experiment results show, the BFT mechanism using No-Barrier replication outperforms that using Barrier replication significantly in the case that federate replicas have different runtime performance.",2010,0, 5275,Rapid design optimisation of microwave structures through automated tuning space mapping,"Tuning space mapping (TSM) is one of the latest developments in space mapping technology. TSM algorithms offer a remarkably fast design optimisation with satisfactory results obtained after one or two iterations, which amounts to just a few electromagnetic simulations of the optimised microwave structure. The TSM algorithms (as exemplified by `Type-1` tuning) could be simply implemented manually. The approach may require interaction between various electromagnetic-based and circuit models, as well as handling different sets of design variables and control parameters. As a result, certain TSM algorithms (especially so-called `Type-0` tuning) may be tedious, thus, error-prone to implement. Here, we present a fully automated tuning space mapping implementation that exploits the functionality of our user-friendly space mapping software, the SMF system. The operation and performance of our new implementation is illustrated through the design of a box-section Chebyshev bandpass filter and a capacitively coupled dual-behaviour resonator filter.",2010,0, 5276,A clustering algorithm for software fault prediction,"Software metrics are used for predicting whether modules of software project are faulty or fault free. Timely prediction of faults especially accuracy or computation faults improve software quality and hence its reliability. As we can apply various distance measures on traditional K-means clustering algorithm to predict faulty or fault free modules. Here, in this paper we have proposed K-Sorensen-means clustering that uses Sorensen distance for calculating cluster distance to predict faults in software projects. Proposed algorithm is then trained and tested using three datasets namely, JM1, PCI and CM1 collected from NASA MDP. From these three datasets requirement metrics, static code metrics and alliance metrics (combining both requirement metrics and static code metrics) have been built and then K-Sorensen-means applied on all datasets to predict results. Alliance metric model is found to be the best prediction model among three models. Results of K-Sorensen-means clustering shown and corresponding ROC curve has been drawn. Results of K-Sorensen-means are then compared with K-Canberra-means clustering that uses other distance measure for evaluating cluster distance.",2010,0, 5277,A stochastic model for performance evaluation and bottleneck discovering on SOA-based systems,"The Service-Oriented Architecture (SOA) has become a unifying technical architecture that may be embodied through Web Service technologies. Predicting the variable behavior of SOA systems can mean a way to improve the quality of the business transactions. This paper proposes a simulation modeling approach based on stochastic Petri nets to estimate the performance of SOA-applications. Using the proposed model is possible to predict resource consumption and service levels degradation in scenarios with different compositions and workloads, even before developing the application. 
A case study was conducted in order to validate our approach, comparing its accuracy with the results from an analytical model existing in the literature.",2010,0, 5278,Designing building automation systems using evolutionary algorithms with semi-directed variations,"In the building automation domain, many prefabricated devices from different manufacturers available in the market realize building automation functions by preprogrammed software components. For given design requirements, the existence of a high number of devices that realize the required functions leads to a combinatorial explosion of design alternatives at different price and quality levels. Finding optimal design alternatives is a hard problem, which we approach with a multi-objective evolutionary algorithm. By integrating problem-specific knowledge into variation operations, a promisingly high optimization performance can be achieved. To realize this, diverse variation operations related to goals are defined upon a classification for the exploration and convergence behavior, and applied in different strategies.",2010,0, 5279,Dynamic parameter control of interactive local search in UML software design,"User-centered Interactive Evolutionary Computation (IEC) has been applied to a wide variety of areas, including UML software design. The performance of evolutionary search is important as user interaction fatigue remains an on-going challenge in IEC. However, to obtain optimal search performance, it is usually necessary to “tune” evolutionary control parameters manually, although tuning control parameters can be time-consuming and error-prone. To address this issue in other fields of evolutionary computation, dynamic parameter control including deterministic, adaptive and self-adaptive mechanisms has been applied extensively to real-valued representations. This paper postulates that dynamic parameter control may be highly beneficial to IEC in general, and UML software design in particular, wherein a novel object-based solution representation is used. Three software design problems from differing design domains and of differing scale have been investigated with mutation probabilities modified by simulated annealing, the Rechenberg “1/5 success rule” and self-adaptation within local search. Results indicate that self-adaptation appears to be the most robust and scalable mutation probability modification mechanism. The use of self-adaptation with an object-based representation is novel, and results indicate that dynamic parameter control offers great potential within IEC.",2010,0, 5280,Reverse Engineering Utility Functions Using Genetic Programming to Detect Anomalous Behavior in Software,"Recent studies have shown the promise of using utility functions to detect anomalous behavior in software systems at runtime. However, it remains a challenge for software engineers to hand-craft a utility function that achieves both a high precision (i.e., few false alarms) and a high recall (i.e., few undetected faults). This paper describes a technique that uses genetic programming to automatically evolve a utility function for a specific system, set of resource usage metrics, and precision/recall preference. These metrics are computed using sensor values that monitor a variety of system resources (e.g., memory usage, processor usage, thread count). The technique allows users to specify the relative importance of precision and recall, and builds a utility function to meet those requirements.
We evaluated the technique on the open source Jigsaw web server using ten resource usage metrics and five anomalous behaviors in the form of injected faults in the Jigsaw code and a security attack. To assess the effectiveness of the technique, the precision and recall of the evolved utility function were compared to those of a hand-crafted utility function that uses a simple thresholding scheme. The results show that the evolved function outperformed the hand-crafted function by 10 percent.",2010,0, 5281,"Design, Modeling, and Evaluation of a Scalable Multi-level Checkpointing System","High-performance computing (HPC) systems are growing more powerful by utilizing more hardware components. As the system mean-time-before-failure correspondingly drops, applications must checkpoint more frequently to make progress. However, as the system memory sizes grow faster than the bandwidth to the parallel file system, the cost of checkpointing begins to dominate application run times. Multi-level checkpointing potentially solves this problem through multiple types of checkpoints with different costs and different levels of resiliency in a single run. This solution employs lightweight checkpoints to handle the most common failure modes and relies on more expensive checkpoints for less common, but more severe failures. This theoretically promising approach has not been fully evaluated in a large-scale, production system context. We have designed the Scalable Checkpoint/Restart (SCR) library, a multi-level checkpoint system that writes checkpoints to RAM, Flash, or disk on the compute nodes in addition to the parallel file system. We present the performance and reliability properties of SCR as well as a probabilistic Markov model that predicts its performance on current and future systems. We show that multi-level checkpointing improves efficiency on existing large-scale systems and that this benefit increases as the system size grows. In particular, we developed low-cost checkpoint schemes that are 100x-1000x faster than the parallel file system and effective against 85% of our system failures. This leads to a gain in machine efficiency of up to 35%, and it reduces the load on the parallel file system by a factor of two on current and future systems.",2010,0, 5282,Linguistic Driven Refactoring of Source Code Identifiers,"Identifiers are an important source of information during program understanding and maintenance. Programmers often use identifiers to build their mental models of the software artifacts. We have performed a preliminary study to examine the relation between the terms in identifiers, their spread in entities, and fault proneness. We introduced term entropy and context-coverage to measure how scattered terms are across program entities and how unrelated are the methods and attributes containing these terms. Our results showed that methods and attributes containing terms with high entropy and context-coverage are more fault-prone. We plan to build on this study by extracting linguistic information from methods and classes. Using this information, we plan to establish traceability links from domain concepts to source code, and to propose linguistic-based refactoring.",2010,0, 5283,Improving System Testability and Testing with Microarchitectures,"Testing is essential to ensure software reliability and dependability. However, testing activities are very expensive and complex.
Microarchitectures, such as design patterns and anti-patterns, widely exist in object oriented systems and are recognized as influencing many software quality attributes. Our goal is to identify their impact on system testability and analyse how they can be leveraged to reduce the testing effort while increasing the system reliability and dependability. The proposed research aims at contributing to reducing complexity and increasing testing efficiency by using microarchitectures. We will base our work on the rich existing tools for microarchitecture detection and code reverse-engineering.",2010,0, 5284,Discrete wavelet and neural network for transmission line fault classification,This paper presents an efficient wavelet and neural network (WNN) based algorithm for fault classification in a single circuit transmission line. The first level discrete wavelet transform is applied to decompose the post fault current signals of the transmission line into a series of coefficient components (approximation and detail). The values of the approximation coefficients obtained can accurately discriminate between all types of fault in the transmission line and reduce the amount of data fed to the ANN. These coefficients are further used to train an Artificial Neural Network (ANN) fitting function. The trained FFNN clearly distinguishes and classifies the fault type very accurately and very quickly. A typical generation system connected by a single circuit transmission line to many nodes of loads at the receiving end was simulated using MATLAB simulation software using only one node to do this work. The generated data were used by the MATLAB software to test the performance of the proposed technique. The simulation results obtained show that the new algorithm is more reliable and accurate.,2010,0, 5285,Notice of Violation of IEEE Publication Principles
Network performance management by using stable algorithms,"Notice of Violation of IEEE Publication Principles

""Network Performance Management By Using Stable Algorithms""
by J. Ramkumar and V.B. Kirubanand
in the 2010 2nd International Conference on Computer Technology and Development (ICCTD), 2010, pp. 692-696

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied with insufficient attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

""A Unified Framework for System-level Design: Modeling and Performance Optimization of Scalable Networking Systems""
by Hwisung Jung and Massoud Pedram
in the 8th International Symposium on Quality Electronic Design, 2007 (ISQED 2007), 2007, pp. 198-203

Network Management Systems have played a very important role in information systems. Management is very important and essential in any field. There are many management areas, such as configuration management, fault management, performance management, security management, accounting management, etc. Among them, performance management is more important than the others, because it is very essential and useful in any field. Performance management is to monitor and maintain the whole system. This paper focuses on how the Petri net models can be exploited by the Markov algorithm with the Adaboost algorithm for performance evaluation of a wireless local area network (WLAN). The main theme of this paper is to increase the performance of various wireless local area networks by using the Markov algorithm with the Adaboost algorithm. The EQPN model and the Markov algorithm with the Adaboost algorithm are employed to represent the performance behaviors and to minimize energy consumption of the system under performance constraints through mathematical programming formulations. EQPNs facilitate the integration of both hardware and software aspects of the system behavior in the improved model. In addition to any wireless technology in the systems, using EQPNs one can easily model simultaneous resource possession, synchronization, blocking and contentions for software resources. EQPNs are very powerful as a performance analysis and prediction tool. When comparing the service rates with those from a wired local area network, it has been found that the service rate from the proposed system is very efficient for implementation; we hope to motivate further research in this area.",2010,0, 5286,Dimentional inspection and feature recognition through machine vision system,"Automated computerized machine vision inspection techniques can be used as a tooling for automated shape inspection in an advanced CIM environment so that real time, reliable, non-contact and non-destructive, and 100% quality measure activities can be achieved fruitfully. This paper develops a generic hardware system, software architecture and a technology-mix for product feature identification and measurement analysis of various machined parts/products. The feature analysis seeks to identify inherent characteristics of features of an object found within the image frame. These characteristics are used to describe the object or the attributes of the object features, prior to the subsequent task of classification of the same. The special features of such an inspection methodology include a system which is (i) micro-computer based; (ii) flexible and reprogrammable; (iii) likely to remain effective on the floor; (iv) able to ensure the highest possible quality goes to the customer and (v) very efficient.",2010,0, 5287,Lightning overvoltages on an overhead transmission line during backflashover and shielding failure,Analysis of induced voltages on transmission tower and conductors has been performed when a high voltage line is subjected to propagation of a lightning transient. The PSCAD/EMTDC software program is used to carry out the modelling and simulation works. Lightning strikes on the tower top or conductors result in large overvoltages appearing between the tower and the conductors. Two cases considered for these effects are: (i) direct strike to a shield wire or tower top; (ii) shielding failure. The probability of a lightning strike terminating on a shield wire or tower top is higher than that of a phase conductor.
Voltages produced during shielding failure on conductors are more significant than during back flashovers. The severity of the induced voltages from single stroke and multiple stroke lightning is illustrated using the simulation results. The results demonstrate high magnitude of induced voltages by the multiple stroke lightning compared to those by single strokes. Analytical studies were performed to verify the results obtained from the simulation. Analysis of the performance of the line using IEEE Flash version 1.81 computer programme was also carried out.,2010,0, 5288,A novel method of fingerprint minutiae extraction based on Gabor phase,"In this paper, a novel method of fingerprint minutiae extraction on grey-scale images is proposed based on the Gabor phase field. The novelty of our approach is that the proposed algorithm is performed on the transform domain, i.e. the Gabor phase field of the fingerprint image. This is different from most existing minutiae extraction methods, in which the minutiae are usually extracted from the binarized and thinned fingerprint image. Experimental results on benchmark data sets demonstrate that the proposed algorithm has promising performances.",2010,0, 5289,Notice of Violation of IEEE Publication Principles
Finding the location of switched capacitor banks in distribution systems based on wavelet transform,"Notice of Violation of IEEE Publication Principles

""Finding the Location of Switched Capacitor Banks in Distribution Systems Based on Wavelet Transform""
by B. Noshad, M. Keramatzadeh, M. Saniei
in the Proceedings of the 45th International Universities Power Engineering Conference (UPEC), August 2010

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied with insufficient attribution and without permission.

""On Tracking and Finding the Location of Switched Capacitor Banks in Distribution Systems""
by H. Khani, M. Moallem, S. Sadri
in the Proceedings of the Transmission & Distribution Conference & Exposition: Asia and Pacific, October 2009

""A Novel Algorithm for Determining the Exact Location of Switched Capacitor Banks in Distribution Systems ""
by H. Khani, M. Moallem, S. Sadri
in the Proceedings of the Transmission & Distribution Conference & Exposition: Asia and Pacific, October 2009

In this paper, a method is proposed for determining the exact location of switched capacitor banks in distribution systems. The method consists of two steps. In the first step, the technique is based on measurement of voltage and current transients created by capacitor switching. The hypothesis states that the slope of voltage and current transient waveforms created by capacitor switching immediately before and after the switching instant will have opposite polarity for monitoring the location at the line and branches feeding the capacitor and identical polarity for monitoring the location at other parts of the power network. Wavelet transform techniques are used to determine the exact switching instant effectively. Simulation is done by EMTP. In the second step, using the existing capacitor switching transient data, a method based on linear circuit theory is used to estimate the exact distance of the switched capacitor bank from the monitoring location, and it is simulated with MATLAB software. The method is valid for both wye and delta configured capacitor banks. This proposed method is applied to the IEEE 37-bus distribution system.",2010,0, 5290,Implementation of finite mutual impedances and its influence on earth potential rise estimation along transmission lines,"As the proximity of high fault current power lines to residential areas increases, the need for accurate prediction of earth potential rise (EPR) is of crucial importance for both safety and equipment protection. To date, the most accurate methods for predicting EPR are power system modelling software tools, such as EMTP, or recursive methods that use a span by span approach to model a transmission line. These techniques are generally used in conjunction with impedances and admittances that are derived from the assumption of infinite line length. In this paper a span by span model was created to predict the EPR along a dual circuit transmission line in EMTP-RV, where the mutual impedances were considered to be between finite length conductors. A series of current injection tests were also performed on the system under study in order to establish the accuracy of both the finite and infinite methods.",2010,0, 5291,Global motion temporal filtering for in-loop deblocking,"One of the most severe problems in hybrid video coding is its block-based approach, which leads to distortions called blocking artifacts. These artifacts affect not only the subjective perception at the receiver but also the motion compensated prediction (MCP) that generates a prediction signal from previously decoded pictures. It is therefore directly connected to the amount of data that has to be transmitted. In this paper, we propose a technique called global motion temporal filtering for blocking artifact reduction. Unlike common deblocking techniques, this approach does not reduce the blocking artifacts spatially. Filtering is performed temporally using a set of neighboring pictures from the picture buffer. This approach is incorporated into the H.264/AVC reference software. Experimental evaluation shows that the proposed technique significantly improves the quality in terms of rate-distortion performance.",2010,0, 5292,Submerged aquatic vegetation habitat product development: On-screen digitizing and spatial analysis of Core Sound,"A hydrophyte of high relevance, submerged aquatic vegetation (SAV) is of great importance to estuarine environments.
SAV helps improve water quality, provides food and shelter for waterfowl, fish, and shellfish, as well as protects shorelines from erosion. In coastal bays most SAV was eliminated by disease in the 1930's. In the late 1960's and 1970's a dramatic decline of all SAV species was correlated with increasing nutrient and sediment inputs from development of surrounding watersheds (MDNP et al. 2004). Currently, state programs work to protect and restore existing wetlands; however, increasing development and population pressure continue to degrade and destroy both tidal and non-tidal wetlands and hinder overall development of SAV growth. The focus of this research was to utilize spatial referencing software in the mapping of healthy submerged aquatic vegetation (SAV) habitats. In cooperation with the United States Fish and Wildlife Service (USFWS), and the National Oceanic and Atmospheric Administration (NOAA), students from Elizabeth City State University (ECSU) developed and applied Geographic Information Systems (GIS) skills to evaluate the distribution and abundance of SAV in North Carolina's estuarine environments. Utilizing ESRI ArcView, which includes ArcMap, ArcCatalog and ArcToolbox, and the applications of on-screen digitizing, an assessment of vegetation cover was made through the delineation of observable SAV beds in Core Sound, North Carolina. Aerial photography of the identified coastal water bodies was taken at 12,000 feet above mean terrain (AMT), scale 1:24,000. The georeferenced aerial photographs were assessed for obscurities and the SAV beds were digitized. Through the adoption of NOAA guidelines and criteria for benthic habitat mapping using aerial photography for image acquisition and analysis, students delineated SAV beds and developed a GIS spatial database relevant to desired results. This newly created database yielded products in the form of usable shapefiles of SAV polygons, as well as attribute information with location information, area in hectares, and percent coverage of SAV.",2010,0, 5293,Multi-view prediction structure for free viewpoint video,"In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between each camera pose, we can make prediction pairs which maximize the spatial correlation of each view. To analyze the relationship of each camera pose, we defined the mathematical mean view and view distance in 3D space. We proposed an algorithm for establishing the geometric prediction structure based on view center and view distance. Using this prediction structure, inter-view prediction is performed on the camera pair of maximum spatial correlation. In our prediction structure, we also considered the view scalability. Experiments are done using the JMVC software on MPEG-FTV test sequences. Overall performance is measured in terms of PSNR and a subjective image quality measure such as PSPNR.",2010,0, 5294,Application-Aware diagnosis of runtime hardware faults,"Extreme technology scaling in silicon devices drastically affects reliability, particularly because of runtime failures induced by transistor wearout. Current online testing mechanisms focus on testing all components in a microprocessor, including hardware that has not been exercised, and thus have high performance penalties. We propose a hybrid hardware/software online testing solution where components that are heavily utilized by the software application are tested more thoroughly and frequently.
Thus, our online testing approach focuses on the processor units that affect application correctness the most, and it achieves high coverage while incurring minimal performance overhead. We also introduce a new metric, Application-Aware Fault Coverage, measuring a test's capability to detect faults that might have corrupted the state or the output of an application. Test coverage is further improved through the insertion of observation points that augment the coverage of the testing system. By evaluating our technique on a Sun OpenSPARC T1, we show that our solution maintains high Application-Aware Fault Coverage while reducing the performance overhead of online testing by more than a factor of 2 when compared to solutions oblivious to the application's behavior. Specifically, we found that our solution can achieve 95% fault coverage while maintaining a minimal performance overhead (1.3%) and area impact (0.4%).",2010,0, 5295,Definition and Validation of Metrics for ITSM Process Models,"Process metrics can be used to establish baselines, to predict the effort required to go from an “as-is” to a “to-be” scenario or to pinpoint problematic ITSM process models. Several metrics proposed in the literature for business process models can be used for ITSM process models as well. This paper formalizes some of those metrics and proposes some new ones, using the Metamodel-Driven Measurement (M2DM) approach that provides precision, objectiveness and automatic collection. According to that approach, metrics were specified with the Object Constraint Language (OCL), upon a lightweight BPMN metamodel that is briefly described. That metamodel was instantiated with a case study consisting of two ITSM processes with two scenarios (“as-is” and “to-be”) each. Values collected automatically by executing the OCL metrics definitions, upon the instantiated metamodel, are presented. Using a larger sample with several thousand meta-instances, we analyzed the collinearity of the formalized metrics and were able to identify a smaller set, which will be used to perform further research work on the complexity of ITSM processes.",2010,0, 5296,Model-driven development of ARINC 653 configuration tables,"Model-driven development (MDD) has become a key technique in systems and software engineering, including the aeronautic domain. It facilitates the systematic use of models from a very early phase of the design process and through various model transformation steps (semi-)automatically generates source code and documentation. However, on one hand, the use of model-driven approaches for the development of configuration data is not as widespread as for source code synthesis. On the other hand, we believe that particular systems that make heavy use of configuration tables, like the ARINC 653 standard, can benefit from model-driven design by (i) automating error-prone configuration file editing and (ii) using model based validation for early error detection. In this paper, we will present the results of the European project DIANA that investigated the use of MDD in the context of Integrated Modular Avionics (IMA) and the ARINC 653 standard. In the scope of the project, a tool chain was implemented that generates ARINC 653 configuration tables from high-level architecture models.
The tool chain was integrated with different target systems (VxWorks 653, SIMA) and evaluated during case studies with real-world and real-sized avionics applications.",2010,0,5297 5297,Model-driven development of ARINC 653 configuration tables,"Model-driven development (MDD) has become a key technique in systems and software engineering, including the aeronautic domain. It facilitates the systematic use of models from a very early phase of the design process and through various model transformation steps (semi-)automatically generates source code and documentation. However, on one hand, the use of model-driven approaches for the development of configuration data is not as widespread as for source code synthesis. On the other hand, we believe that particular systems that make heavy use of configuration tables, like the ARINC 653 standard, can benefit from model-driven design by (i) automating error-prone configuration file editing and (ii) using model based validation for early error detection. In this paper, we will present the results of the European project DIANA that investigated the use of MDD in the context of Integrated Modular Avionics (IMA) and the ARINC 653 standard. In the scope of the project, a tool chain was implemented that generates ARINC 653 configuration tables from high-level architecture models. The tool chain was integrated with different target systems (VxWorks 653, SIMA) and evaluated during case studies with real-world and real-sized avionics applications.",2010,0, 5298,A Tool for Automatic Defect Detection in Models Used in Model-Driven Engineering,"In the Model-Driven Engineering (MDE) field, the quality assurance of the involved models is fundamental for performing correct model transformations and generating final software applications. To evaluate the quality of models, defect detection is usually performed by means of reading techniques that are manually applied. Thus, new approaches to automate the defect detection in models are needed. To fulfill this need, this paper presents a tool that implements a novel approach for automatic defect detection, which is based on a model-based functional size measurement procedure. This tool detects defects related to the correctness and the consistency of the models. Thus, our contribution lies in the new approach presented and its automation for the detection of defects in MDE environments.",2010,0, 5299,Analyzing the Similarity among Software Projects to Improve Software Project Monitoring Processes,"Software project monitoring and control is crucial to detect deviations from the project plan and to take appropriate actions when needed. However, to determine which action should be taken is not an easy task, since project managers have to analyze the context of the deviation event and search for actions that were successfully taken in previous similar contexts, trying to repeat the effectiveness of these actions. To do so, usually managers use data from previous projects or their own experience, and frequently there are no formally established measures or similarity criteria. Thus, in this paper we present the results of a survey that aimed to identify characteristics that can determine the similarity among software projects and also a measure to indicate the level of similarity among them. A recommendation system to support the execution of corrective actions based on previous projects is also described.
We believe these results can support the improvement of software project monitoring process, providing important knowledge to project managers in order to improve monitoring and control activities on the projects.",2010,0, 5300,A Method for Continuous Code Quality Management Using Static Analysis,The quality of source code is a key factor for any software product and its continuous monitoring is an indispensable task for a software development project. We have developed a method for systematic assessing and improving the code quality of ongoing projects by using the results of various static code analysis tools. With different approaches for monitoring the quality (a trend-based one and a benchmarking-based one) and an according tool support we are able to manage the large amount of data that is generated by these static analyses. First experiences when applying the method with software projects in practice have shown the feasibility of our method.,2010,0, 5301,Towards Automated Quality Models for Software Development Communities: The QualOSS and FLOSSMetrics Case,"Quality models for software products and processes help both to developers and users to better understand their characteristics. In the specific case of libre (free, open source) software, the availability of a mature and reliable development community is an important factor to be considered, since in most cases both the evolvability and future fitness of the product depends on it. Up to now, most of the quality models for communities have been based on the manual examination by experts, which is time-consuming, generally inconsistent and often error-prone. In this paper, we propose a methodology, and some examples of how it works in practice, of how a quality model for development communities can be automated. The quality model used is a part of the QualOSS quality model, while the metrics are those collected by the FLOSS Metrics project.",2010,0, 5302,Studying Supply and Demand of Software Maintenance and Evolution Services,"Software maintenance and evolution constitutes an important part of the total cost of the life cycle of software. Some even argue this is the most important fraction of the cost. The added value of software maintenance and evolution is often not fully understood by the customer leading to a perception that software maintenance organizations are costly and inefficient. A common view of maintenance and evolution is that it merely fixes bugs. However, studies over the years have indicated that in many organizations the majority of the maintenance effort is devoted to value-added activities. To improve customer perceptions it is important to provide them with better insight into the activities performed by the maintenance organization and to document such performance with objective measures of software maintenance activities. In this paper, software maintenance trend analysis is used as a basis for improvement. First, the differences between software maintenance activities and the Information System (IS) department development projects are described. Then a trend model is developed as a mean to manage the expectations of customers. 
To conclude, some remarks are made regarding the application of trend analysis by maintenance work categories for software maintenance managers.",2010,0, 5303,Reducing Subjectivity in Code Smells Detection: Experimenting with the Long Method,"Guidelines for refactoring are meant to improve software systems' internal quality and are widely acknowledged as among software's best practices. However, such guidelines remain mostly qualitative in nature. As a result, judgments on how to conduct refactoring processes remain mostly subjective and therefore non-automatable, prone to errors and unrepeatable. The detection of the Long Method code smell is an example. To address this problem, this paper proposes a technique to detect Long Method objectively and automatically, using a Binary Logistic Regression model calibrated with experts' knowledge. The results of an experiment illustrating the use of this technique are reported.",2010,0, 5304,IDS: An Immune-Inspired Approach for the Detection of Software Design Smells,"We propose a parallel between object-oriented system designs and living creatures. We suggest that, like any living creature, system designs are subject to diseases, which are design smells (code smells and anti-patterns). Design smells are conjectured in the literature to impact the quality and life of systems and, therefore, their detection has drawn the attention of both researchers and practitioners with various approaches. With our parallel, we propose a novel approach built on models of the immune system responses to pathogenic material. We show that our approach can detect more than one smell at a time. We build and test our approach on Gantt Project v1.10.2 and Xerces v2.7.0, for which manually-validated and publicly available smells exist. The results show a significant improvement in detection time, precision, and recall, in comparison to the state-of-the-art approaches.",2010,0, 5305,Embedded system environment: A framework for TLM-based design and prototyping,"This paper presents Embedded System Environment (ESE), which is a comprehensive set of tools for supporting a model-based design methodology for multi-processor embedded systems. It consists of two parts: ESE Front-End and ESE Back-End. ESE Front-End provides automatic generation of SystemC transaction level models (TLMs) from graphical capture of the system platform and application C/C++ code. ESE generated TLMs can be used either as virtual platforms for SW development or for fast and early timing estimation of system performance. ESE Back-End provides automatic synthesis from TLM to Cycle Accurate Model (CAM) consisting of RTL interfaces, system SW and prototype-ready FPGA project files. ESE generated RTL can be synthesized using standard logic synthesis tools and system SW can be compiled along with application code for target processors. ESE automatically creates Xilinx EDK projects for board prototyping. Our experimental results demonstrate that ESE can drastically reduce embedded system design and prototyping time, while maintaining design quality similar to manual design.",2010,0, 5306,A Metrics-Based Approach to Technical Documentation Quality,"Technical documentation is now fully taking the step from stale printed booklets (or electronic versions of these) to interactive and online versions. This provides opportunities to reconsider how we define and assess the quality of technical documentation.
This paper suggests an approach based on the Goal-Question-Metric paradigm: predefined quality goals are continuously assessed and visualized by the use of metrics. To test this approach, we perform two experiments. We adopt well known software analysis techniques, e.g., clone detection and test coverage analysis, and assess the quality of two real-world documentation sets, that of a mobile phone and that of (parts of) a warship. The experiments show that quality issues can be identified and that the approach is promising.",2010,0, 5307,Algorithm for QOS-aware web service composition based on flow path tree with probabilities,"This paper first presents a novel model for QoS-aware web service composition based on workflow patterns and an execution path tree with probabilities. Then a discrete PSO algorithm is proposed to fit our model, which also requires some other preliminary algorithms, such as the generation of all reachable paths and of the execution path tree. The experiments show the performance of that algorithm and its advantages compared with a genetic algorithm. Finally, we discuss some of its drawbacks and future work.",2010,0, 5308,Adaptive initial quantization parameter selection for H.264/SVC rate control,"Initial quantization parameter (QP) selection is critical for the overall performance of video rate control. In this paper, we propose an adaptive initial QP selection algorithm for H.264/AVC Scalable Extension (SVC) rate control. The proposed algorithm introduces a number of efficient methods, including a coding complexity measure for intra-frames, and a rate-complexity-quantization (R-C-Q) model to accurately determine initial QPs for H.264/SVC rate control. Our experimental results demonstrate that the proposed initial QP selection algorithm outperforms that of the JVT-W043 rate control scheme, adopted in the H.264/SVC reference software, by providing more accurate initial QP prediction, reducing buffer overflow/underflow, depressing quality fluctuations, and finally improving the overall coding quality by up to 0.48 dB.",2010,0, 5309,Software Defect Prediction Using Dissimilarity Measures,"In order to improve the accuracy of software defect prediction, a novel method based on dissimilarity measures is proposed. Different from traditional prediction methods based on feature space, we solve the problem in dissimilarity space. First, the new unit features in dissimilarity space are obtained by measuring the dissimilarity between the initial units and prototypes. Then a proper classifier is chosen to complete the prediction. By prototype selection, we can reduce the dimension of the units' features and the computational complexity of prediction. The empirical results on the NASA data sets KC2 and CM1 show that the prediction accuracies of the KNN, Bayes, and SVM classifiers in dissimilarity space are higher than those in feature space by 1.86% to 9.39%. The computational complexity is also reduced by 18% to 67%.",2010,0, 5310,An Autonomic Performance-Aware Workflow Job Management for Service-Oriented Computing,"A workflow job is composed of several ordered subtasks which invoke different computing services. These computing services are deployed on geographically distributed servers. For Workflow Job Management, how to schedule workflow jobs to achieve high server resource utilization and how to ensure Quality of Service (QoS) pose several challenges. In this paper, an autonomic Performance-Aware Workflow Job Management is proposed.
It first decomposes workflow jobs into subtasks, and then adopts a Virtual Allocation Strategy, which utilizes the concept of a virtual queue, to allocate them to the servers. We also apply a Detection Adjustment Approach for Virtual Allocation to dynamically adjust the workload of each computing server according to real-time system workload changes. Additionally, it utilizes Occupy Allocation to ensure QoS. These capabilities make our Workflow Job Management adaptable and autonomic. Finally, we run simulations to demonstrate system performance and QoS.",2010,0, 5311,Energy-Efficient Clustering Rumor Routing Protocol for Wireless Sensor Networks,"Developing an energy-efficient routing protocol is one of the most important and difficult key tasks for wireless sensor networks. Traditional Rumor Routing is effective in random path building, but winding or even looped paths are prone to form, leading to enormous energy waste. The Clustering Rumor Routing (CRR) proposed in this paper has advantages in energy saving by making full use of three features: a clustering structure, a selective next-hop scheme, and double variable energy thresholds for rotation of cluster-heads. Thus CRR can effectively avoid the problems that occur in RR, and a more energy-efficient routing path from event agents to queries can be built. The performance of CRR and traditional RR is evaluated by simulations. The results indicate that, compared with traditional RR, CRR consumes less energy, provides better path quality, and improves the delivery rate as well.",2010,0, 5312,The impact of Shift Work Implementation on sleeping quality and job performance: A case study of semi-conductor manufacturing company,"This research is mainly focused on worker performance in the semi-conductor industry in Taiwan. Our main concern is how the sleeping quality of workers on different shifts affects job performance. Investigations are undertaken on workers from Silicon Integrated Systems Corporation, SiS. The Pittsburgh Sleep Quality Index, PSQI, and the Attitude Towards Shiftwork Scale, ATSS, are used to evaluate workers' sleeping quality and job performance respectively. A questionnaire is designed for collecting the relevant data. Four hundred well-answered questionnaires are successfully collected. The statistical software package SPSS 11.5 is used to perform t-tests, single factor ANOVA analysis and multiple regressions. The evaluation of the correlation between Shift Work Implementation and sleep quality revealed that different work shifts have a significant influence on sleeping quality and job performance. This research also revealed that paying more attention to workers' sleeping quality would be a good way to reduce job-related accidents and to enhance job performance. For the so-called `high-risk group', the company should strengthen workers' health knowledge by implementing necessary courses or practices.",2010,0, 5313,Research of Congestion Control in Wireless Video Transmission,"With the development of wireless communications technology and multimedia technology, real-time multimedia transport over the wireless Internet has become an important application field. A problem we meet is that the cause of packet loss cannot be distinguished reliably, which affects QoS when we transport multimedia data over wireless networks. To address this problem, the article proposes a new algorithm named LDA (Loss Detection Algorithm).
The algorithm utilizes the delay from the transmitting end to the receiving end to distinguish the cause of packet loss, and the simulation software NS2 is used to build a video simulation platform. Through it, we can confirm that the improved TFRC achieves better performance.",2010,0, 5314,Attribute Selection and Imbalanced Data: Problems in Software Defect Prediction,"The data mining and machine learning community is often faced with two key problems: working with imbalanced data and selecting the best features for machine learning. This paper presents a process involving a feature selection technique for selecting the important attributes and a data sampling technique for addressing class imbalance. The application domain of this study is software engineering, more specifically, software quality prediction using classification models. When using feature selection and data sampling together, different scenarios should be considered. The four possible scenarios are: (1) feature selection based on original data, and modeling (defect prediction) based on original data; (2) feature selection based on original data, and modeling based on sampled data; (3) feature selection based on sampled data, and modeling based on original data; and (4) feature selection based on sampled data, and modeling based on sampled data. The research objective is to compare the software defect prediction performances of models based on the four scenarios. The case study consists of nine software measurement data sets obtained from the PROMISE software project repository. Empirical results suggest that feature selection based on sampled data performs significantly better than feature selection based on original data, and that defect prediction models perform similarly regardless of whether the training data was formed using sampled or original data.",2010,0, 5315,Requirement based test case prioritization,"Test case prioritization involves scheduling test cases in an order that increases the effectiveness in achieving some performance goals. One of the most important performance goals is the rate of fault detection. Test cases should run in an order that increases the possibility of fault detection and also detects faults as early as possible in the testing life cycle. In this paper, an algorithm is proposed for system level test case prioritization (TCP) from the software requirements specification to improve user satisfaction with quality software and also to improve the rate of severe fault detection. The proposed model prioritizes the system test cases based on three factors: customer priority, changes in requirements, and implementation complexity. The proposed prioritization technique is validated with two different sets of industrial projects, and the results show that it improves the rate of severe fault detection.",2010,0, 5316,Selecting High Quality Frames for Super Resolution Reconstruction Using Perceptual Quality Metrics,"Super-resolution involves the use of signal processing techniques to estimate the high-resolution (HR) version of a scene from multiple low-resolution (LR) observations. It follows that the quality of the reconstructed HR image would depend on the quality of the LR observations. The latter depends on multiple factors like the image acquisition process, encoding or compression and transmission. However, not all images are equally affected by a given type of impairment.
A proper choice of the LR observations for reconstruction should yield a better estimate of the HR image over a naive method using all images. We propose a simple, model-free approach to improve the performance of super-resolution systems based on the use of perceptual quality metrics. Our approach does not require, or even assume, a specific realization of the SR system. Instead, we select the image subset with high perceptual quality from the available set of LR images. Finally, we present the logical extension of our approach to select the perceptually significant regions in a given LR image, for use in SR reconstruction.",2010,0, 5317,Error analysis in indoors localization using ZigBee wireless networks,"This paper describes indoor radio frequency (RF) localization using the radio signal strength indication (RSSI) which is available in wireless communications networks. The main advantage of the described methodology is that it uses the communications sub-system as the localization hardware and develops dedicated software in order to obtain the location of mobile network nodes using a trilateration algorithm. Accuracy is strongly dependent on the quality of the measured RSSI. The need for post-processing of the raw RSSI values in order to obtain good results in terms of mobile node location estimation is shown. In order to apply RSSI values from the wireless communications hardware to the localization sub-system, filtering is thus an approach to overcome limitations due to RSSI fluctuations.",2010,0, 5318,Identification and compensation of torque ripples of a PMSM in a haptic context,"A haptic system is an articulated mechanical structure with motors, sensors, embedded electronics as well as computer software allowing force feedback. It enables the user to interact with virtual reality through the senses of touch and sight. It has two main operating modes. In the blocked mode the haptic system should be able to reproduce virtual shocks. In the transparent mode, the haptic system should be as transparent as possible. Transparency is defined as the quality of the free movement while moving outside the blocked regions. A Permanent Magnet Synchronous Motor (PMSM) is used to drive the mechanical structure. Nowadays PMSM designs use ultra-powerful magnets in order to reduce the motor volume with respect to the delivered electromagnetic torque ratio. Furthermore, with the use of those magnets, torque ripple phenomena (cogging torque) can be observed, which can deteriorate the haptic interface performance. In this paper a torque ripple identification and compensation method is proposed and evaluated experimentally. The developed least squares identification procedure uses the measured currents, while the PMSM is driven at constant speed, in order to reconstruct the cogging signal. The main advantage of the proposed method is that it doesn't use the motor mechanical structure characteristics in order to calculate its parameters.",2010,0, 5319,Visual Indicator Component Software to Show Component Design Quality and Characteristic,"Good design is one of the prerequisites of a high quality product. To measure the quality of software design, software metrics are used. Unfortunately, in software development practice there are a lot of software developers who are not concerned with component quality and characteristics. Software metrics do not interest them, because understanding the measurements requires a deep understanding of the degree, dimension, and capacity of some attribute of the software product.
This leads them to build software whose quality is below standard. What is more dangerous is that these developers are not aware of quality and do not care about their work product. Of course these occurrences are concerning and a solution needs to be found. Through this paper the researcher tries to formulate an indicator for component software that shows component design quality and characteristics visually. This indicator can help software developers to make design and refactoring decisions, detect design problems more quickly, decide in which area to apply refactoring, and enable early or final detection of design defects.",2010,0, 5320,Intelligent monitoring and fault tolerance in large-scale distributed systems,"Summary form only. Electronic devices are starting to become widely available for monitoring and controlling large-scale distributed systems. These devices may include sensing capabilities for online measurement, actuators for controlling certain variables, microprocessors for processing information and making real-time decisions based on designed algorithms, and telecommunication units for exchanging information with other electronic devices or possibly with human operators. A collection of such devices may be referred to as a networked intelligent agent system. Such systems have the capability to generate a huge volume of spatial-temporal data that can be used for monitoring and control applications of large-scale distributed systems. One of the most important research challenges in the years ahead is the development of information processing methodologies that can be used to extract meaning and knowledge out of the ever-increasing electronic information that will become available. Even more important is the capability to utilize the information that is being produced to design software and devices that operate seamlessly, autonomously and reliably in some intelligent manner. The ultimate objective is to design networked intelligent agent systems that can make appropriate real-time decisions in the management of large-scale distributed systems, while also providing useful high-level information to human operators. One of the most important classes of large-scale distributed systems deals with the reliable operation and intelligent management of critical infrastructures, such as electric power systems, telecommunication networks, water systems, and transportation systems. The design, control and fault monitoring of critical infrastructure systems is becoming increasingly more challenging as their size, complexity and interactions are steadily growing. Moreover, these critical infrastructures are susceptible to natural disasters, frequent failures, as well as malicious attacks. There is a need to develop a common system-theoretic fault diagnostic framework for critical infrastructure systems and to design architectures and algorithms for intelligent monitoring, control and security of such systems. The goal of this presentation is to motivate the need for health monitoring, fault diagnosis and security of critical infrastructure systems and to provide a fault diagnosis methodology for detecting, isolating and accommodating both abrupt and incipient faults in a class of complex nonlinear dynamic systems. A detection and approximation estimator based on computational intelligence techniques is used for online health monitoring.
Various adaptive approximation techniques and learning algorithms will be presented and illustrated, and directions for future research will be discussed.",2010,0, 5321,Reliable online water quality monitoring as basis for fault tolerant control,"Clean data are essential for any kind of alarm or control system. To achieve the required level of data quality in online water quality monitoring, a system for fault tolerant control was developed. A modular approach was used, in which a sensor and station management module is combined with a data validation and an event detection module. The station management module assures that all relevant data, including operational data, is available and the state of the monitoring devices is fully documented. The data validation module assures that unreliable data is detected, marked as such, and that the need for sensor maintenance is timely indicated. Finally, the event detection module marks unusual system states and triggers measures and notifications. All these modules were combined into a new software package to be used on water quality monitoring stations.",2010,0, 5322,An Empirical Comparison of Fault-Prone Module Detection Approaches: Complexity Metrics and Text Feature Metrics,"In order to assure the quality of a software product, early detection of fault-prone modules is necessary. Fault-prone module detection is one of the major and traditional areas of software engineering. However, comparative studies using a fair environment have rarely been conducted so far because little data is publicly available. This paper conducts a comparative study of fault-prone module detection approaches.",2010,0, 5323,Design and Analysis of Cost-Cognizant Test Case Prioritization Using Genetic Algorithm with Test History,"During software development, regression testing is usually used to assure the quality of modified software. The techniques of test case prioritization schedule the test cases for regression testing in an order that attempts to increase the effectiveness in accordance with some performance goal. The most general goal is the rate of fault detection. It assumes all test case costs and fault severities are uniform. However, those factors usually vary. In order to produce a more satisfactory order, the cost-cognizant metric that incorporates varying test case costs and fault severities is proposed. In this paper, we propose a cost-cognizant test case prioritization technique based on the use of historical records and a genetic algorithm. We run a controlled experiment to evaluate the proposed technique's effectiveness. Experimental results indicate that our proposed technique frequently yields a higher Average Percentage of Faults Detected per Cost (APFDc). The results also show that our proposed technique is useful in terms of APFDc when all test case costs and fault severities are uniform.",2010,0, 5324,Using Load Tests to Automatically Compare the Subsystems of a Large Enterprise System,"Enterprise systems are load tested for every added feature, software update and periodic maintenance to ensure that the performance demands on system quality, availability and responsiveness are met. In current practice, performance analysts manually analyze load test data to identify the components that are responsible for performance deviations.
This process is time consuming and error prone due to the large volume of performance counter data collected during monitoring, the limited operational knowledge of analysts about all the subsystems involved and their complex interactions, and the unavailability of up-to-date documentation in the rapidly evolving enterprise. In this paper, we present an automated approach based on a robust statistical technique, Principal Component Analysis (PCA), to identify subsystems that show performance deviations in load tests. A case study on load test data of a large enterprise application shows that our approach does not require any instrumentation or domain knowledge to operate, scales well to large industrial systems, generates few false positives (89% average precision) and detects performance deviations among subsystems in limited time.",2010,0, 5325,A Benchmarking Framework for Domain Specific Software,"With the development of software engineering, finding the ""best-in-class"" practice for domain specific software and locating its position is an urgent task. This paper proposes a systematic, practical and simplified benchmarking framework for domain specific software. The methodology is useful to assess characteristics, sub-characteristics and attributes that influence the domain specific software product quality qualitatively and quantitatively. It is helpful for business managers and software designers to obtain competitiveness analysis results for their software. Using the domain specific benchmarking framework, the software designers and the business managers can easily recognize the positive and negative gaps of their product and locate its position in the specific domain.",2010,0, 5326,A Novel Evaluation Method for Defect Prediction in Software Systems,"In this paper, we propose a novel evaluation method for defect prediction in object-oriented software systems. For each metric to evaluate, we start by applying it to the dependency graph extracted from the target software system, and obtain a list of classes ordered by their predicted degree of defect under that metric. By utilizing the actual defect data mined from the Subversion database, we evaluate the quality of each metric by means of a weighted reciprocal ranking mechanism. Our method can tell not only the overall quality of each evaluated metric, but also the quality of the prediction result for each class, especially the costly ones. Evaluation results and analysis show the efficiency and rationality of our method.",2010,0, 5327,System Identification and Application Based on Parameters Self-Adaptive SMO,"This paper studies the identification algorithm of a parameters self-adaptive SMO based on a linear kernel function, and analyses its performance and advantages. For the ARX model and the long-term prediction model, the method is used to identify models of the main steam pressure of a thermal system and of a dual-lane aero gas turbine engine. The simulation results show that the algorithm can effectively identify model parameters and has higher accuracy, reducing the requirements on the quantity and quality of training data, so that its engineering application and implementation are easier.",2010,0, 5328,On Engineering Challenges of Applying Relevance Feedback to Fingerprint Identification Systems,"Effective fingerprint identification is critical in crime detection and many security related operations. This article investigates the use of Relevance Feedback with automatic fingerprint identification systems.
It also summarises how several unique engineering challenges faced when applying relevance feedback are being addressed. Relevance feedback is a process for acquiring and using knowledge from a human user to improve the quality of results from an information retrieval system. It has been applied extensively to both text-based and image-based information retrieval systems, but not to fingerprint identification systems. Compared to automatic processes, relevance feedback has the potential to assist in faster convergence towards correct fingerprint identification and removes the reliance on a black-box fingerprint matching algorithm for the system's performance. Experimental results collected from a prototype software implementation confirm that relevance feedback can improve the quality of fingerprint identification queries significantly.",2010,0, 5329,Naive Bayes Software Defect Prediction Model,"Although the value of using static code attributes to learn defect predictors has been widely debated, there is no doubt that software defect prediction can effectively improve software quality and testing efficiency. Many data mining methods have already been introduced into defect prediction. We note that there are several versions of defect predictors based on Naive Bayes theory, and analyze their differences in estimation method and algorithm complexity. We found the best one, Multi-variants Gauss Naive Bayes (MvGNB), by performing a prediction performance evaluation, and compared this model with the decision tree learner J48. Experimental results on the MDP benchmark data sets made us believe that MvGNB would be useful for defect predictions.",2010,0, 5330,Artificial Neural Network Model Application on Long Term Water Level Predictions of South Florida Coastal Waters,"Increasing the number and quality of water level data is very important to hurricane and surge research. Long-term water level data of local stations in estuaries and inland waterways can be used to validate the performance of traditional storm surge models in complex coastal environments. However, field data collection is expensive and often limited by the available research budget. Only a few water level stations operated by NOAA provide long-term observations close to 100 years. In this study, a feed-forward backpropagation ANN model has been applied to establish a quantitative relationship between short-term water level measurements at the Naples station and long-term water level measurements at the Cedar Key station. Using water levels at NOAA stations, the neural network model can be used to derive reliable long-term historical water level data at other stations along the south Florida coast by model training and verification. Long-term water level data derived from the ANN model can be used to analyze historic hurricane surge hydrographs along the Florida coast after removing tidal signals. The data can also be used as boundaries for modeling hurricane storm surges in bays, estuaries, and coastal waterways.",2010,0, 5331,"Crystals and Snowflakes: Building Computation from Stochastically-Assembled, Defect- and Variation-prone Nanowire Crossbars","Continued Moore's Law capacity scaling is threatened by the capabilities and rising costs of optical lithography. Bottom-up synthesis and assembly of nanoscale structures such as nanowires provide an alternative to top-down lithography. However, bottom-up synthesis demands high regularity (crystals), creating challenges for differentiation, and comes with high rates of defects and variation (snowflakes).
With suitable architectures---crossbar-based PLAs, memories, and interconnect---and paradigm shifts in our assembly and usage models---stochastic assembly and post-fabrication, component-specific mapping---we can accommodate these requirements. This allows us to exploit the compactness and energy benefits of single-nanometer dimension devices and to extend these structures into the third dimension without depending on top-down lithography to define the smallest feature sizes in the system.",2010,0, 5332,Seamless high speed simulation of VHDL components in the context of comprehensive computing systems using the virtual machine faumachine,"Testing the interaction between hard- and software is only possible once prototype implementations of the hardware exist. HDL simulations of hardware models can help to find defects in the hardware design. To predict the behavior of entire software stacks in the environment of a complete system, virtual machines can be used. Combining a virtual machine with HDL simulation makes it possible to predict the interaction between hardware and software implementations, even if no prototype has been created yet. Hence it allows software development to begin at an earlier stage of the manufacturing process and helps to decrease the time to market. In this paper we present the virtual machine FAUmachine that offers high speed emulation. It can co-simulate VHDL components in a transparent manner while still offering good overall performance. As an example application, a PCI sound card was simulated using the presented environment.",2010,0, 5333,An efficient negotiation based algorithm for resources advanced reservation using hill climbing in grid computing system,"Ensuring quality of service and reducing the blocking probability are important issues in grid computing environments. In this paper, we present a deadline-aware algorithm to ensure end-to-end QoS and improve the efficiency of grid resources. Investigating requests as a group at the start of a time slot and trying to accept the highest number of them can increase the probability of request acceptance and also increase the efficiency of grid resources. The deadline-aware algorithm achieves this goal. Simulations show that the deadline-aware algorithm improves the efficiency of advance reservation of resources in both cases: the probability of request acceptance and the optimization of resources, with short time slots and also with high request arrival rates.",2010,0, 5334,“Beautiful picture of an ugly place”. Exploring photo collections using opinion and sentiment analysis of user comments,"User generated content in the form of customer reviews, feedback and comments plays an important role in all types of Internet services and activities like news, shopping, forums and blogs. Therefore, the analysis of user opinions is potentially beneficial for the understanding of user attitudes or the improvement of various Internet services. In this paper, we propose a practical unsupervised approach to improve user experience when exploring photo collections by using opinions and sentiments expressed in user comments on the uploaded photos. While most existing techniques concentrate on binary (negative or positive) opinion orientation, we use a real-valued scale for modeling opinion and sentiment strengths. We extract two types of sentiments: opinions that relate to the photo quality and general sentiments targeted towards objects depicted in the photo.
Our approach combines linguistic features for part-of-speech tagging, traditional statistical methods for modeling word importance in the photo comment corpora (on a real-valued scale), and a predefined sentiment lexicon for detecting negative and positive opinion orientation. In addition, a semi-automatic photo feature detection method is applied and a set of syntactic patterns is introduced to resolve opinion references. We implemented a prototype system that incorporates the proposed approach and evaluated it on several regions of the world using real data extracted from Flickr.",2010,0, 5335,An Upper Bound on the Probability of Instability of a DVB-T/H Repeater with a Digital Echo Canceller,"The architecture of a digital On-Channel Repeater (OCR) for DVB-T/H signals is described in this paper. The presence of a coupling channel between the transmitting and the receiving antennas gives rise to one or more echoes, having detrimental effects on the quality of the repeated signal and critically affecting the overall system stability. A low-complexity echo canceller unit is then proposed, performing a coupling channel estimation based on the local transmission of low-power training signals. In particular, in this paper we focus on the stability issues which arise due to imperfect echo cancellation. An upper bound on the probability of instability of the system is analytically found, providing useful guidelines for conservative OCR design, and some performance figures concerning different propagation scenarios are provided.",2010,0, 5336,Reconstructing control flow graph for control flow checking,"In the space radiation environment, a large number of cosmic rays often lead to transient faults in the on-board computer. These transient faults result in data flow errors or control flow errors during program execution. Current software-implemented hardware fault tolerance techniques mainly use signature analysis to realize control flow checking, namely by assigning a signature to each basic block and inserting checking instructions into every basic block. Because the sizes of different basic blocks in one program usually differ significantly, applying a uniform checking method to these basic blocks reduces the protection efficiency. To solve this problem, this paper proposes a control flow checking optimization method named RCFG based on reconstructing the control flow graph. RCFG first merges basic blocks into larger logic blocks, then cuts the logic blocks into basic logic blocks of similar size. Finally, a control flow checking algorithm can be applied based on the control flow graph composed of the basic logic blocks. RCFG can effectively improve the protection efficiency of the algorithm, and users can regulate the balance between performance and reliability by configuring the size of the basic logic blocks. A fault injection experiment was conducted for a typical signature analysis algorithm named CFCSS.
According to the experimental results, compared with the original CFCSS algorithm, the average performance overhead of the CFCSS algorithm implemented on top of RCFG increased by 16.6% and the average memory overhead increased by 13.5%, while the number of faults causing the program to output wrong results was reduced by 47.67%.",2010,0, 5337,A comprehensive evaluation methodology for domain specific software benchmarking,"With the wide use of information systems, software users are demanding higher quality software and software developers are pursuing the ""best-in-class"" practice in a specific domain. But until now it has been difficult to obtain an evaluation result due to the lack of a benchmarking method. This paper proposes a systematic, practical and simplified evaluation methodology for domain specific software benchmarking. In this methodology, an evaluation model with software test characteristics (elementary set) and domain characteristics (extended set) is described; in order to reduce subjective and objective uncertainty, rough set theory is applied to obtain the attribute weights, and the grey analysis method is introduced to remedy incomplete and inadequate information. The method is useful to assess characteristics, sub-characteristics and attributes that influence the domain specific software product quality qualitatively and quantitatively. The evaluation and benchmarking results prove that the model is practical and effective.",2010,0, 5338,Software documents quality measurement- a fuzzy approach,"The quality of software documents is important to software developers and those who test and evaluate software. Good documents reduce the risk around schedule and cost. Effective document quality measurement is a challenging activity in software development that the software industry faces. The goal of this research work is to put forward a method to measure document quality using a fuzzy approach. A quality measurement model and a synthetic comparison model are presented to estimate the quality of software documents. By using this method, we can compare software documents objectively, identify their shortcomings, and improve document quality accordingly.",2010,0, 5339,Dependency-aware fault diagnosis with metric-correlation models in enterprise software systems,"The normal operation of enterprise software systems can be modeled by stable correlations between various system metrics; errors are detected when some of these correlations fail to hold. The typical approach to diagnosis (i.e., pinpointing the faulty component) based on the correlation models is to use the Jaccard coefficient or some variant thereof, without reference to system structure, dependency data, or prior fault data. In this paper we demonstrate the intrinsic limitations of this approach, and propose a solution that mitigates these limitations. We assume knowledge of dependencies between components in the system, and take this information into account when analyzing the correlation models. We also propose the use of the Tanimoto coefficient instead of the Jaccard coefficient to assign anomaly scores to components. We evaluate our new algorithm with a Trade6-based test-bed.
We show that we can find the faulty components within the top-3 components with the highest anomaly scores in four out of nine cases, while the prior method can do so in only one.",2010,0, 5340,Similarity metric for risk assessment in IT change plans,"The proper management of IT infrastructures is essential for organizations that aim to deliver high quality services. Given the dynamics of these infrastructures, changes are inevitable. In some cases, these changes might cause failures, disrupting the provided services and consequently affecting business continuity. Therefore, it is strongly recommended to evaluate the risks associated with changes before their actual execution. Learning from information about previously deployed changes, it is possible to estimate the risks of newly planned ones. Therefore, in this paper, we propose a solution that weights the information available from previously executed plans by their calculated similarity to the analyzed change plan. A prototype system has been developed in order to evaluate the effectiveness of the solution over an emulated IT infrastructure. The results obtained show that the solution is capable of capturing similarity among activities in change plans, improving the accuracy of risk assessment for IT change planning.",2010,0, 5341,A Quasi-best Random Testing,"Random testing, having been employed in both hardware and software for a long time, is well known for its simplicity and straightforwardness, in which each test is selected randomly regardless of the tests previously generated. However, it has traditionally been considered inefficient because of its random selection of test patterns. Therefore, a new concept of quasi-best distance random testing is proposed in this paper to make it more effective in testing. The new idea is based on the fact that the distance between two adjacent selected test vectors in a test sequence greatly influences the efficiency of fault testing. Procedures for constructing such a testing sequence are presented and discussed in detail. The new approach shows a remarkable advantage in that it fits most circuits. Experimental results and a mathematical analysis of efficiency are also given to assess the performance of the proposed approach.",2010,0, 5342,Quantitative Analysis of Requirements Evolution across Multiple Versions of an Industrial Software Product,"Requirements evolution is one of the critical problems influencing software engineering activities. Although there is much research on requirements evolution, a quantitative understanding of it is still lacking. In this paper, we quantitatively analyze requirements evolution across multiple versions of an industrial software product. Based on data of requirements evolution and defects, we analyze the relationship between requirements evolution and requirements as well as between defects and requirements evolution. We also analyze the evolution characteristics of requirements modification. Our findings include that estimating the number of defects using evolved requirements may increase the accuracy of defect estimation and that business rules are the most volatile part of requirements.
These findings deepen our understanding of requirements evolution and can help software organizations manage requirements evolution.",2010,0, 5343,Quality Attributes Assessment for Feature-Based Product Configuration in Software Product Line,"Product configuration based on a feature model in software product lines is the process of selecting the desired features based on customers' requirements. In most cases, application engineers focus on the functionalities of the target product during the product configuration process, whereas the quality attributes are not handled until the final product is produced. However, it is costly to fix the problem if the quality attributes have not been considered in the product configuration stage. The key issue of assessing a quality attribute of a product configuration is to measure the impact on a quality attribute made by the set of functional variable features selected in a configuration. Existing approaches have several limitations, such as providing no quantitative measurements or requiring existing valid products and heavy human effort for the assessment. To overcome these limitations, we propose an Analytic Hierarchical Process (AHP) based approach to estimate the relative importance of each functional variable feature on a quality attribute. Based on the relative importance value of each functional variable feature on a quality attribute, the level of quality attributes of a product configuration in software product lines can be assessed. An illustrative example based on the Computer Aided Dispatch (CAD) software product line is presented to demonstrate how the proposed approach works.",2010,0, 5344,An Interaction-Pattern-Based Approach to Prevent Performance Degradation of Fault Detection in Service Robot Software,"In component-based robot software, it is crucial to monitor software faults and deal with them in time before they lead to critical failures. The main causes of software failures include limited resources, component-interoperation mismatches, and internal errors of components. Message-sniffing is one of the popular methods to monitor black-box components and handle these types of faults during runtime. However, this method normally causes performance problems in the target software system because the fault monitoring and detection process consumes a significant amount of resources of the target system. There are three types of overheads that cause the performance degradation problems: frequent monitoring, transmission of a large amount of monitoring data, and the processing time for fault analysis. In this paper, we propose an interaction-pattern-based approach to reduce the performance degradation caused by fault monitoring and detection in component-based service robot software. The core idea of this approach is to minimize the number of messages to monitor and analyze in detecting faults. Message exchanges are formalized as interaction patterns which are commonly observed in robot software. In addition, important messages that need to be monitored are identified in each of the interaction patterns. An automatic interaction pattern-identification method is also developed. To prove the effectiveness of our approach, we have conducted a performance simulation.
We are also currently applying our approach to silver-care robot systems.",2010,0, 5345,Quantitative Analysis for Non-linear System Performance Data Using Case-Based Reasoning,"Effective software architecture evaluation methods are essential in today's system development for mission critical systems. We have previously developed MEMS and a set of test statistics for evaluating middleware architectures, which have proven effective in assessing important quality attributes and their characterizations. We have observed that system performance response data are often non-linear, so linear modeling is not feasible for system performance prediction in these scenarios. To provide an alternative quantitative assessment of system performance using actual runtime datasets, we developed a non-linear analysis procedure based on Case-based Reasoning (CBR), a machine learning method widely used in other disciplines of Software Engineering. Experiments were carried out based on actual runtime performance datasets. Results confirm that our non-linear analysis method CBR4MEMS produced accurate performance predictions and outperformed linear approaches. Our approach utilizes CBR to enable performance assessments on non-linear datasets, a major step forward in supporting software architecture evaluation.",2010,0, 5346,Using Faults-Slip-Through Metric as a Predictor of Fault-Proneness,"Background: The majority of software faults are present in a small number of modules; therefore, accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. Aims: This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. Method: We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, the faults-slip-through metric showed impressive results with the majority of the techniques for predicting fault-prone modules at both integration and system test levels. There were, however, no statistically significant differences between the performance of different techniques based on AUC, even though certain techniques were more consistent in the classification performance at the two test levels. Conclusions: We can conclude that the faults-slip-through metric is a potentially strong predictor of fault-proneness at integration and system test levels.
The faults-slip-through measurements interact in ways that are conveniently accounted for by the majority of the data mining techniques.",2010,0, 5347,"Distributed, Scalable Clustering for Detecting Halos in Terascale Astronomy Datasets","Terascale astronomical datasets have the potential to provide unprecedented insights into the origins of our universe. However, automated techniques for determining regions of interest are a must if domain experts are to cope with the intractable amounts of simulation data. This paper addresses the important problem of locating and tracking high density regions in space that generally correspond to halos and sub-halos and host galaxies. A density-based, mode-following clustering method called Automated Hierarchical Density Shaving (Auto-HDS) is adapted for this application. Auto-HDS can detect clusters of different densities while discarding the vast majority of background data. Two alternative parallel implementations of the algorithm, based respectively on the dataflow computational model and on Hadoop/MapReduce functional programming constructs, are realized and compared. Based on runtime performance and scalability across compute cores and across increasing data volumes, we demonstrate the benefits of fine-grained parallelism. The proposed distributed and multithreaded Auto-HDS clustering algorithm is shown to produce high quality clusters, to be computationally efficient, and to scale from 1 through 1024 compute-cores.",2010,0, 5348,Clustering Performance on Evolving Data Streams: Assessing Algorithms and Evaluation Measures within MOA,"In today's applications, evolving data streams are ubiquitous. Stream clustering algorithms were introduced to gain useful knowledge from these streams in real-time. The quality of the obtained clusterings, i.e., how well they reflect the data, can be assessed by evaluation measures. A multitude of stream clustering algorithms and evaluation measures for clusterings have been introduced in the literature; however, until now there has been no general tool for a direct comparison of the different algorithms or the evaluation measures. In our demo, we present a novel experimental framework for both tasks. It offers the means for extensive evaluation and visualization and is an extension of the Massive Online Analysis (MOA) software environment released under the GNU GPL License.",2010,0, 5349,Multi process real-time network applications in Distribution Management System,"The heart of every Power System Control Center (PSCC) - Distribution Management System (DMS) software is the sophisticated real-time calculation core of Network Applications. Crucial applications of the control center such as State Estimator - Power Flow, Volt VAr Control, and Fault isolation and service restoration must operate in real-time, providing high performance calculation, communication and data processing. This paper presents the implementation of a real-time multi-process Distribution State Estimator and Volt VAr Control and the important performance considerations specific to scheduling the processes of these applications, leveraging multi-core processor technology. The real-time Scheduler, one of the crucial performance-wise components, is analyzed. The scheduling process and algorithm are analyzed in detail with their dependence on the nature of Power Networks. Tests are done and results for multi-process Distribution State Estimator scheduling executions on multi-core processors are presented.",2010,0, 5350,"How dynamic is the Grid?
Towards a quality metric for Grid information systems","Grid information systems play a core role in today's production Grid Infrastructures. They provide a coherent view of the Grid services in the infrastructure while addressing the performance, robustness and scalability issues that occur in dynamic, large-scale, distributed systems. Quality metrics for Grid information systems are required in order to compare different implementations and to evaluate suggested improvements. This paper proposes the adoption of a quality metric, first used in the domain of Web search, to measure the quality of Grid information systems with respect to their information content. The application of this metric requires an understanding of the dynamic nature of Grid information. An empirical study based on information from the EGEE Grid infrastructure is carried out to estimate the frequency of change for different types of Grid information. Using this data, the proposed metric is assessed with regard to its applicability to measuring the quality of Grid information systems.",2010,0, 5351,Analysis and modeling of time-correlated failures in large-scale distributed systems,"The analysis and modeling of the failures bound to occur in today's large-scale production systems is invaluable in providing the understanding needed to make these systems fault-tolerant yet efficient. Many previous studies have modeled failures without taking into account the time-varying behavior of failures, under the assumption that failures are identically, but independently distributed. However, the presence of time correlations between failures (such as peak periods with increased failure rate) refutes this assumption and can have a significant impact on the effectiveness of fault-tolerance mechanisms. For example, a proactive fault-tolerance mechanism is more effective if the failures are periodic or predictable; similarly, the performance of checkpointing, redundancy, and scheduling solutions depends on the frequency of failures. In this study we analyze and model the time-varying behavior of failures in large-scale distributed systems. Our study is based on nineteen failure traces obtained from (mostly) production large-scale distributed systems, including grids, P2P systems, DNS servers, web servers, and desktop grids. We first investigate the time correlation of failures, and find that many of the studied traces exhibit strong daily patterns and high autocorrelation. Then, we derive a model that focuses on the peak failure periods occurring in real large-scale distributed systems. Our model characterizes the duration of peaks, the peak inter-arrival time, the inter-arrival time of failures during the peaks, and the duration of failures during peaks; we determine for each the best-fitting probability distribution from a set of several candidate distributions, and present the parameters of the (best) fit. Last, we validate our model against the nineteen real failure traces, and find that the failures it characterizes are responsible on average for over 50% and up to 95% of the downtime of these systems.",2010,0, 5352,Adaptively detecting changes in Autonomic Grid Computing,"Detecting changes is a common issue in many application fields due to the non-stationary distribution of the application data, e.g., sensor network signals, web logs and grid-running logs.
Toward Autonomic Grid Computing, adaptively detecting the changes in a grid system can help to alarm the anomalies, clean the noises, and report the new patterns. In this paper, we proposed an approach of self-adaptive change detection based on the Page-Hinkley statistic test. It handles the non-stationary distribution without the assumption of data distribution and the empirical setting of parameters. We validate the approach on the EGEE streaming jobs, and report its better performance on achieving higher accuracy comparing to the other change detection methods. Meanwhile this change detection process could help to discover the device fault which was not claimed in the system logs.",2010,0, 5353,Validation of CK Metrics for Object Oriented Design Measurement,"Since object oriented system is becoming more pervasive, it is necessary that software engineers have quantitative measurements for accessing the quality of designs at both the architectural and components level. These measures allow the designer to access the software early in the process, making changes that will reduce complexity and improve the continuing capability of the product. Object oriented design metrics is an essential part of software engineering. This paper presents a case study of applying design measures to assess software quality. Six Java based open source software systems are analyzed using CK metrics suite to find out quality of the system and possible design faults that will reversely affect different quality parameters such as reusability, understandability, testability, maintainability. This paper also presents general guidelines for interpretation of reusability, understandability, testability, maintainability in the context of selected projects.",2010,0, 5354,Sensors integration in embedded systems,"Sensors/actuators are critical components of several Complex Embedded Systems (CES). In these CES a fault in the sensors or actuators can trigger the incidence of catastrophic events. Sensor/actuator faults detection is difficult and impacts critically the system performance. Subsystem or Embedded Processing Component (EPC), integration testing with sensor or actuator is required to assure that the `Subsystem under test (SUT)' meets the specifications. Several researchers have addressed software integration issues only; however sensors/actuators integration issues were not addressed adequately. This paper focuses on the integration testing of sensors/actuators in ES. The sensor integration with EPC is viewed as identification of faulty communication channel within SUT. The Sensor and EPC models are developed based on and are modified for incorporating faults in the communication channel between them. A faulty communication channel is modelled by a “faulty process Ψ” that changes the operations or possibly modifies events between the fault-free processes, while the components are assumed to be fault free. The functional tests are important in verifying system and their reuse in integration testing correctly identifies the known and tested system requirements. Comparing the observed output in the communication channel with expected response identifies the fault(s) in the channel and provides integration test verdict: pass or fail. 
An example illustrates the application of functional testing for integration of sensor.",2010,0, 5355,Many Task Computing for modeling the fate of oil discharged from the Deep Water Horizon well blowout,"The Deep Water Horizon well blowout on April 20th 2010 discharged between 40,000-1.2 million tons of crude oil into the Gulf of Mexico. In order to understand the fate and impact of the discharged oil, particularly on the environmentally sensitive Florida Keys region, we have implemented a multi-component application which consists of many individual tasks that utilize a distributed set of computational and data management resources. The application consists of two 3D ocean circulation models of the Gulf and South Florida and a 3D oil spill model. The ocean models used here resolve the Gulf at 2 km and the South Florida region at 900 m. This high resolution information on the ocean state is then integrated with the oil model to track the fate of approximately 10 million oil particles. These individual components execute as MPI based parallel applications on a 576 core IBM Power 5 cluster and a 5040 core Linux cluster, both operated by the Center for Computational Science, University of Miami. The data and workflow between is handled by means of a custom distributed software framework built around the Open Project for Networked Data Access Protocol (OPeNDAP). In this paper, we present this application as an example of Many Task Computing, report on the execution characteristics of this application, and discuss the challenges presented by the many task distributed workflow involving heterogeneous components. The application is a typical example from the ocean modeling and forecasting field and imposes soft timeliness and output quality constraints on top of the traditional performance requirements.",2010,0, 5356,The Equipment Development of Detecting the Beer Bottles' Thickness in Real Time,"In recent years, the requirement of beer bottle quality becomes ever-strict, so the annual detection can't satisfy the requirement of a modern industry, in this paper, we design a equipment which can detect the beer bottles' thickness in real time. The measurement system takes the chip computer W77E58 as the core, and takes the theory of ultrasonic measurement as the basis, we can observe directly the thickness of the detected beer bottle and judge the bottle is qualified or not by the visualization software. Through the analysis of experimental data, ultrasonic testing is relatively stable, it can meet the requirement of modern industry.",2010,0, 5357,A Bayesian Based Method for Agile Software Development Release Planning and Project Health Monitoring,"Agile software development (ASD) techniques are iteration based powerful methodologies to deliver high quality software. To ensure on time high quality software, the impact of factors affecting the development cycle should be evaluated constantly. Quick and precise factor evaluation results in better risk assessment, on time delivery and optimal use of resources. Such an assessment is easy to carry out for a small number of factors. However, with the increase of factors, it becomes extremely difficult to assess in short time periods. We have designed and developed a project health measurement model to evaluate the factors affecting software development of the project. We used Bayesian networks (BNs) as an approach that gives such an estimation. 
We present a quantitative model for project health evaluation that helps decision makers make the right decision early to amend any discrepancy that may hinder on time and high quality software delivery.",2010,0, 5358,Two Efficient Software Techniques to Detect and Correct Control-Flow Errors,"This paper proposes two efficient software techniques, Control-flow and Data Errors Correction using Data-flow Graph Consideration (CDCC) and Miniaturized Check-Pointing (MCP), to detect and correct control-flow errors. These techniques have been implemented based on addition of redundant codes in a given program. The creativity applied in the methods for online detection and correction of the control-flow errors is using data-flow graph alongside of using control-flow graph. These techniques can detect most of the control-flow errors in the program firstly, and next can correct them, automatically. Therefore, both errors in the control-flow and program data which is caused by control-flow errors can be corrected, efficiently. In order to evaluate the proposed techniques, a post compiler is used, so that the techniques can be applied to every 80×86 binaries, transparently. Three benchmarks quick sort, matrix multiplication and linked list are used, and a total of 5000 transient faults are injected on several executable points in each program. The experimental results demonstrate that at least 93% and 89% of the control-flow errors can be detected and corrected without any data error generation by the CDCC and MCP, respectively. Moreover, the strength of these techniques is significant reduction in the performance and memory overheads in compare to traditional methods, for as much as remarkable correction abilities.",2010,0, 5359,Phase Characterization and Classification for Micro-architecture Soft Error,"Transient faults have become a key challenge to modern processor design. Processor designers take Architectural Vulnerability Factor (AVF) as an estimation method of micro-architectures soft error rate. Dynamic, phase-based system reliability management, which tunes system hardware and software parameters at runtime for different phases, has become a focus in the field of processor design. Phase characterization technique (PCT) and phase classification algorithm (PCA) determine the accuracy of phase identification, which is the foundation of dynamic, phase-based system management. To our knowledge, this paper is the first to give a comprehensive evaluation and comparison of PCTs and PCAs for micro-architecture soft error. We first compare the efficiency of basic block vectors (BBV) and performance metric counters (PMC) based PCTs in reliability-oriented phase characterization on three micro-architectural structures (i.e. instruction queue, function unit and reorder buffer). Experimental results show that PMC based PCT performs better than BBV based PCT for most programs studied. Also, we compare the accuracy of three clustering algorithms (i.e. hierarchical clustering, k-means clustering and regression tree) in reliability-oriented phase classification. Regression tree method is demonstrated to improve the accuracy of classification by 30% compared with other two PCAs on average. Furthermore, based on the comparisons of PCTs and PCAs, we propose the optimal combination of PCT and PCA for soft error reliability-oriented phase identification - the combination of PMC and regression tree. In addition, we quantify the upper bound of predictability of AVF using BBV/PMC. 
Overall, an average of 82% AVF can be explained by PMC, while BBV can explain 78% AVF averagely.",2010,0, 5360,PBTrust: A Priority-Based Trust Model for Service Selection in General Service-Oriented Environments,"How to choose the best service provider (agent), which a service consumer can trust in terms of the quality and success rate of the service in an open and dynamic environment, is a challenging problem in many service-oriented applications such as Internet-based grid systems, e-trading systems, as well as service-oriented computing systems. This paper presents a Priority-Based Trust (PBTrust) model for service selection in general service-oriented environments. The PBTrust is robust and novel from several perspectives. (1) The reputation of a service provider is derived from referees who are third parties and had interactions with the provider in a rich context format, including attributes of the service, the priority distribution on attributes and a rating value for each attribute from a third party, (2) The concept of 'Similarity' is introduced to measure the difference in terms of distributions of priorities on attributes between requested service and a refereed service in order to precisely predict the performance of a potential provider on the requested service, (3) The concept of general performance of a service provider on a service in history is also introduced to improve the success rate on the requested service. The experimental results can prove that PBtrust has a better performance than that of the CR model in a service-oriented environment.",2010,0, 5361,A Novel Method to Measure Comprehensive Complexity of Software Based on the Metrics Statistical Model,"Calculating software complexity is one of the most challenging problems in the Software Engineering due to using them in estimating errors, having a landscape of software reliability, approximating costs of software implementation and maintenance, and delivering software with better quality. Most of the recent researches on calculating the software's complexity focus on special directions and goals. This paper presents a novel method for measuring comprehensive complexity of software based on Statistical model evaluation of the existing complexity metrics through modules. To reach this purpose, the amount of comprehensive complexity is achieved for every module by identifying statistical distribution of complexity metric quantities, normalization and their combination. Afterward, the comprehensive complexity of the software is calculated by composition of the module's complexity amounts. This method is applied on some samples of the ""NASA Software Engineering laboratory"" and some of its positive results are presented.",2010,0, 5362,Context-aware autonomic systems in real-time applications: Performance evaluation,"Nowadays one can observe rapid development of various technologies supporting autonomic features and context-awareness of pervasive systems. And this is perfectly understandable as these features effectively increase the systems autonomicity. However, in the case of safety-critical realtime systems it becomes crucial to determine how the entire system performance is affected by the growth in complexity of its autonomic / dynamic decision making engine which is processing the context information for control purposes. In this paper we consider policy-based computing as a candidate technology to support selected autonomic features and context-awareness in safety-critical systems. 
We conduct performance analysis in a generic policy-supervised system in order to determine the relation between policy complexity (reflecting decision making system complexity) and the decision evaluation time (affecting the entire system performance). We present a coherent characteristic polynomial method as a means of pre-estimation of policy evaluation time depending on its complexity. The results we present can be potentially used for pre-evaluation of whether a policy of a certain complexity level is of any applicability in a safety critical system that has to meet fixed real-time deadlines.",2010,0, 5363,Research on load balance of Service Capability Interaction Management,"With the evolution of the IP Multimedia Subsystem (IMS), service interaction and management has received an increasing amount of attention. Service Capability Interaction Management (SCIM) is introduced as a component of Service Delivery Platform (SDP) for providing service interaction of IMS. Although there is much work on the standardization of SCIM, architectures based on industry standards like IMS may still have problems, especially the load balance issue, which has been seldom mentioned and discussed. Limited application servers become the bottleneck that affects the quality of service (QoS) due to the rising traffic levels. SCIM should provide load balance function among homogenous application servers to improve the performance of the whole system. We propose a module called Load Balancer that handles service messages from Call Control Network and sends each message to the optimal application server (AS). Four components of this module Load estimator, Load analyzer, Load Policies Center and Dispatcher cooperate together to implement the load balance function. This paper introduces load balance problem of SCIM from the network architecture viewpoint, then describes a load balance model and the mechanism of load balance. It discusses the concept and content of Load Policies which provide the necessary information for deciding the optimal AS to implement the load balance. Then it provides a load balance algorithm and the effectiveness is validated through simulation.",2010,0, 5364,Registry support for core component evolution,"The United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) provides a conceptual approach, named the Core Components Technical Specification (CCTS), for creating business document models. These business document models are essential for defining service interfaces of service-oriented systems and are typically stored in a registry for enabling easy storage, search, and retrieval of these artifacts. However, in such a highly dynamic environment with ever-changing market demands, business partners are confronted with the need to constantly adapt their systems. This implies revising conceptual business document models as well as adapting the corresponding service interface definitions resulting in a tedious and often error-prone task when done manually. In this paper, we present a framework for dealing with these types of evolution in service-oriented systems. Having such a framework at hand, supports business partners in coping with the evolution of business document models, as well as in adapting the corresponding service interface definitions automatically. 
Furthermore, we present a prototypical implementation and an evaluation of the framework proposed.",2010,0, 5365,A Comparative Study of Ensemble Feature Selection Techniques for Software Defect Prediction,"Feature selection has become the essential step in many data mining applications. Using a single feature subset selection method may generate local optima. Ensembles of feature selection methods attempt to combine multiple feature selection methods instead of using a single one. We present a comprehensive empirical study examining 17 different ensembles of feature ranking techniques (rankers) including six commonly-used feature ranking techniques, the signal-to-noise filter technique, and 11 threshold-based feature ranking techniques. This study utilized 16 real-world software measurement data sets of different sizes and built 13,600 classification models. Experimental results indicate that ensembles of very few rankers are very effective and even better than ensembles of many or all rankers.",2010,0, 5366,Efficient packet classification on FPGAs also targeting at manageable memory consumption,"Packet classification involving multiple fields is used in the area of network intrusion detection, as well as to provide quality of service and value-added network services. With the ever-increasing growth of the Internet and packet transfer rates, the number of rules needed to be handled simultaneously in support of these services has also increased. Field-Programmable Gate Arrays (FPGAs) provide good platforms for hardware-software co-designs that can yield high processing efficiency for highly complex applications. However, since FPGAs contain rather limited user-programmable resources, it becomes necessary for any FPGA-based packet classification algorithm to compress the rules as much as possible in order to achieve a widely acceptable on-chip solution in terms of performance and resource consumption; otherwise, a much slower external memory will become compulsory. We present a novel FPGA-oriented method for packet classification that can deal with rules involving multiple fields. This method first groups the rules based on their number of important fields, then attempts to match two fields at a time and finally combines the constituent results to identify longer matches. Our design with a single FPGA yields a larger than 9.68 Gbps throughput and has a small memory consumption of around 256Kbytes for more than 10,000 rules. This memory consumption is the lowest among all previously proposed FPGA-based designs for packet classification. Our scalable design can easily be extended to achieve a higher than 10Gbps throughput by employing multiple FPGAs running in parallel.",2010,0, 5367,Rule Engine based on improvement Rete algorithm,"Rete algorithm is the mainstream of the algorithm in the rules engine; it provides efficient local entities data and rules pattern matching method. But Rete algorithm applied in rules engine has defects in performance and demand aspect, this paper applies three methods to improve the Rete algorithm in the rule engine: rule decomposition, Alpha-Node-Hashing and Beta-Node-Indexing. Rules engine is enterprise applications, business logic framework for extracting rules from software application, and make it more flexible. This paper describes the principle of Rule Engine at first, secondly show the Rete algorithm, and finally we do some improvement to Rete algorithm. 
These improvements enable the Rule Engine to meet application demands more broadly and greatly improve its efficiency.",2010,0, 5368,"QAM: QoS-assured management for interoperability and manageability for SOA","This paper presents the quality factors of web services with definition, classification, and sub-factors for interoperability and manageability. In SOA systems, service consumers and service providers are usually placed in different ownership domains. They are also developed independently on various platforms and loosely-coupled by network. Services are consumed generally without direct management by service consumers. As a result, interoperability and manageability for SOA systems are core issues for enabling SOA services. This paper addresses the new concept of sub-quality factors for interoperability and manageability in the view point of management. This is a result of SOA project designed for implementing and evaluating Korea e-government system.",2010,0, 5369,Quality factors of business value and service level measurement for SOA,"This paper presents the quality factors of web services with definition, classification, and sub-factors for business and service level measurement. Web services, main enabler for SOA, usually have characteristics distinguished from general stand-alone software. They are provided in different ownership domain by network. They can also adopt various binding mechanism and be loosely-coupled, platform independent, and use standard protocols. As a result, a web service system requires its own quality factors unlike installation-based software. For instance, as the quality of web services can be altered in real-time according to the change of the service provider, considering real-time property of web services is very meaningful in describing the web services quality. This paper focuses especially to the two quality factors: business value and service level measurement, affecting real world business activities and a service performance.",2010,0, 5370,Wide area measurement based out-of-step detection technique,"The electrical power systems function as a huge interconnected network dispersed over a large area. A balance exist between generated and consumed power, any disturbance to this balance in the system caused due to change in load as well as faults and their clearance often results in electromechanical oscillations. As a result there is variation in power flow between two areas. This phenomenon is referred as Power Swing. This paper uses PMU data to measure the currents and voltages of the three phases of two buses connected to a 400 kV line. The measured data is then used for differentiating between a swing or fault condition, and if a swing is detected, to predict whether the swing is a stable or an unstable one. The performance of the method has been tested on a simulated system using PSCAD and MATLAB software.",2010,0, 5371,The prediction of software aging trend based on user intention,"Owing to the limitation of traditional software aging trend prediction method that based on time and based on measurement in dealing with sudden large scale concurrent questions, this paper proposes a new software aging trend prediction method which is based on user intention. 
This method predicts the trend of software aging according to the quantity of user requests for each component during the moment of system operation, and the software aging damage with each component is requested once. The experiment indicates, compared with the measurement method, this method has higher accuracy in dealing with sudden large scale concurrent questions.",2010,0, 5372,Diagnosing the root-causes of failures from cluster log files,"System event logs are often the primary source of information for diagnosing (and predicting) the causes of failures for cluster systems. Due to interactions among the system hardware and software components, the system event logs for large cluster systems are comprised of streams of interleaved events, and only a small fraction of the events over a small time span are relevant to the diagnosis of a given failure. Furthermore, the process of troubleshooting the causes of failures is largely manual and ad-hoc. In this paper, we present a systematic methodology for reconstructing event order and establishing correlations among events which indicate the root-causes of a given failure from very large syslogs. We developed a diagnostics tool, FDiag, to extract the log entries as structured message templates and uses statistical correlation analysis to establish probable cause and effect relationships for the fault being analyzed. We applied FDiag to analyze failures due to breakdowns in interactions between the Lustre file system and its clients on the Ranger supercomputer at the Texas Advanced Computing Center (TACC). The results are positive. FDiag is able to identify the dates and the time periods that contain the significant events which eventually led to the occurrence of compute node soft lockups.",2010,0, 5373,Fault-tolerant communication runtime support for data-centric programming models,"The largest supercomputers in the world today consist of hundreds of thousands of processing cores and many more other hardware components. At such scales, hardware faults are a commonplace, necessitating fault-resilient software systems. While different fault-resilient models are available, most focus on allowing the computational processes to survive faults. On the other hand, we have recently started investigating fault resilience techniques for data-centric programming models such as the partitioned global address space (PGAS) models. The primary difference in data-centric models is the decoupling of computation and data locality. That is, data placement is decoupled from the executing processes, allowing us to view process failure (a physical node hosting a process is dead) separately from data failure (a physical node hosting data is dead). In this paper, we take a first step toward data-centric fault resilience by designing and implementing a fault-resilient, one-sided communication runtime framework using Global Arrays and its communication system, ARMCI. The framework consists of a fault-resilient process manager; low-overhead and network-assisted remote-node fault detection module; non-data-moving collective communication primitives; and failure semantics and error codes for one-sided communication runtime systems. 
Our performance evaluation indicates that the framework incurs little overhead compared to state-of-the-art designs and provides a fundamental framework of fault resiliency for PGAS models.",2010,0, 5374,QoS metrics for service level measurement for SOA environment,"This paper presents the quality factors of web services with definition, classification, and sub-factors for business and service level measurement. Web services, main enabler for SOA, usually have characteristics distinguished from general stand-alone software. They are provided in different ownership domain by network. They can also adopt various binding mechanism and be loosely-coupled, platform independent, and use standard protocols. As a result, a web service system requires its own quality factors unlike installation-based software. For instance, as the quality of web services can be altered in real-time according to the change of the service provider, considering real-time property of web services is very meaningful in describing the web services quality. This paper establishes the QoS metrics for service level measurement and focuses especially to the two quality factors: business value and service level measurement, affecting real world business activities and a service performance.",2010,0, 5375,Automating Coverage Metrics for Dynamic Web Applications,"Building comprehensive test suites for web applications poses new challenges in software testing. Coverage criteria used for traditional systems to assess the quality of test cases are simply not sufficient for complex dynamic applications. As a result, faults in web applications can often be traced to insufficient testing coverage of the complex interactions between the components. This paper presents a new set of coverage criteria for web applications, based on page access, use of server variables, and interactions with the database. Following an instrumentation transformation to insert dynamic tracking of these aspects, a static analysis is used to automatically create a coverage database by extracting and executing only the instrumentation statements of the program. The database is then updated dynamically during execution by the instrumentation calls themselves. We demonstrate the usefulness of our coverage criteria and the precision of our approach on the analysis of the popular internet bulletin board system PhpBB 2.0.",2010,0, 5376,Effort-Aware Defect Prediction Models,"Defect Prediction Models aim at identifying error-prone modules of a software system to guide quality assurance activities such as tests or code reviews. Such models have been actively researched for more than a decade, with more than 100 published research papers. However, most of the models proposed so far have assumed that the cost of applying quality assurance activities is the same for each module. In a recent paper, we have shown that this fact can be exploited by a trivial classifier ordering files just by their size: such a classifier performs surprisingly good, at least when effort is ignored during the evaluation. When effort is considered, many classifiers perform not significantly better than a random selection of modules. In this paper, we compare two different strategies to include treatment effort into the prediction process, and evaluate the predictive power of such models. 
Both models perform significantly better when the evaluation measure takes the effort into account.",2010,0, 5377,Using Architecturally Significant Requirements for Guiding System Evolution,"Rapidly changing technology is one of the key triggers of system evolution. Some examples are: physically relocating a data center, replacement of infrastructure such as migrating from an in-house broker to CORBA, moving to a new architectural approach such as migrating from client-server to a service-oriented architecture. At a high level, the goals of such an evolution are easy to describe. While the end goals of the evolution are typically captured and known, the key architecturally significant requirements that guide the actual evolution tasks are often unexplored. At best, they are tucked under maintainability and/or modifiability concerns. In this paper, we argue that eliciting and using architecturally significant requirements of an evolution has a potential to significantly improve the quality of the evolution effort. We focus on elicitation and representation techniques of architecturally significant evolution requirements, and demonstrate their use in analysis for evolution planning.",2010,0, 5378,Reverse Engineering Component Models for Quality Predictions,"Legacy applications are still widely spread. If a need to change deployment or update its functionality arises, it becomes difficult to estimate the performance impact of such modifications due to absence of corresponding models. In this paper, we present an extendable integrated environment based on Eclipse developed in the scope of the Q-Impress project for reverse engineering of legacy applications (in C/C++/Java). The Q-Impress project aims at modeling quality attributes (performance, reliability, maintainability) at an architectural level and allows for choosing the most suitable variant for implementation of a desired modification. The main contributions of the project include i) a high integration of all steps of the entire process into a single tool, a beta version of which has been already successfully tested on a case study, ii) integration of multiple research approaches to performance modeling, and iii) an extendable underlying meta-model for different quality dimensions.",2010,0, 5379,InCode: Continuous Quality Assessment and Improvement,"While significant progress has been made over the last ten years in the research field of quality assessment, developers still can't take full advantage of the benefits of these new tools and techniques. We believe that there are at least two main causes for this lack of adoption: (i) the lack of integration in mainstream IDEs and (ii) the lack of support for a continuous (daily) usage of QA tools. In this context we created INCODE as an Eclipse plug-in that would transform quality assessment and code inspections from a standalone activity, into a continuous, agile process, fully integrated in the development life-cycle. But INCODE not only assesses continuously the quality of Java systems, it also assists developers in taking restructuring decisions, and even supports them in triggering refactorings.",2010,0, 5380,Multi resolution analysis for bearing fault diagnosis,A wavelet-based vibration analysis was done for Defense Applications. Vibration measurements were carried out in MSL Engine test bed in order to verify that it meets the specifications at different load conditions. 
Such measurements are often carried out in connection with troubleshooting in order to determine whether vibration levels are within acceptable limits at given engine speeds and maneuvers. A State-of-the-art portable Vibration Data Recorder is used for data acquisition of real-world signals. This paper is intended to take the reader through the various stages in a signal processing of the vibration data using modern digital technology. Vibration signals are post-analyzed using Wavelet Transform for data mining of the vibration signal observed from accelerometers. Wavelet Transform (WT) techniques are applied to decipher vibration characteristics due to Engine excitation to diagnose the faulty bearing. The Time-Scale analysis by WT achieves a comparable accuracy than Fast Fourier Transform (FFT) while having a lower computational cost with fast predictive capability. The result from wavelet analysis is validated using the LabVIEW software.,2010,0, 5381,An Online Performance Anomaly Detector in Cluster File Systems,"Performance problems, which can stem from different system components, such as network, memory, and storage devices, are difficult to diagnose and isolate in a cluster file system. In this paper, we present an online performance anomaly detector which is able to efficiently detect performance anomaly and accurately identify the faulty sources in a system node of a cluster file system. Our method exploits the stable relationship between workloads and system resource statistics to detect the performance anomaly and identify faulty sources which cause the performance anomaly in the system. Our preliminary experimental results demonstrate the efficiency and accuracy of the proposed performance anomaly detector.",2010,0, 5382,A Software-Implemented Configurable Control Flow Checking Method,"In space radiation environment, a large number of cosmic ray often results in transient faults on on-board computer. These transient faults lead to data flow errors or control flow errors during program running. For the control flow errors caused by transient faults, this paper proposes a control flow checking method based on classifying basic blocks CFCCB. CFCCB firstly classifies the basic blocks based on the control flow graph that has been inserted the abstract blocks. CFCCB then designs formatted signatures for the basic blocks and inserts the instructions for comparing and update signatures into every basic block for the purpose of checking the control flow errors that are inter-blocks, intra-blocks or inter-procedures. Compared to existing algorithms, CFCCB not only has high label express ability, but also can be configured flexibly. The fault injection experiment results of CFCCB and other similar algorithms have shown that, the average fail rate of programs with CFCCB has decreased to 19.9% at the cost of increasing the executing time by 34% and increasing the memory overhead by 41.5% in average. CFCCB has lower performance and memory overhead, and has highest reliability among the similar algorithms.",2010,0, 5383,The Q-ImPrESS Method -- An Overview,"The difficulty in evolving service-oriented architectures with extra-functional requirements seriously hinders the spread of this paradigm in critical application domains. 
The Q-ImPrESS method offsets this disadvantage by introducing a quality impact prediction, which allows software engineers to predict the consequences of alternative design decisions on the quality of software services and select the optimal architecture without having to resort to costly prototyping. The method takes a wider perspective on software quality by explicitly considering multiple quality attributes (performance, reliability and maintainability), and the typical trade-offs between these attributes. The benefit of using this approach is that it enables the creation of service-oriented systems with predictable end-to-end quality.",2010,0, 5384,Estimating design quality of digital systems via machine learning,"Although the term design quality of digital systems can be assessed from many aspects, the distribution and density of bugs are two decisive factors. This paper presents the application of machine learning techniques to model the relationship between specified metrics of high-level design and its associated bug information. By employing the project repository (i.e., high level design and bug repository), the resultant models can be used to estimate the quality of associated designs, which is very beneficial for design, verification and even maintenance processes of digital systems. A real industrial microprocessor is employed to validate our approach. We hope that our work can shed some light on the application of software techniques to help improve the reliability of various digital designs.",2010,0, 5385,Novel fault diagnostic technique for permanent Magnet Synchronous Machines using electromagnetic signature analysis,"This paper proposes a novel alternative scheme for permanent magnet synchronous machine (PMSM) health monitoring and multi-faults detection using direct flux measurement with search coils. Phase current spectrum is not used for analysis and therefore it is not influenced by power supply introduced harmonics any more. In addition, it is also not necessary to specify load condition for accurate diagnosis. In this study, numerical models of a healthy machine and of machines with various faults are developed and examined. Simulation by means of a two-dimensional finite-element analysis (FEA) software package is presented to verify the application of the proposed method over different motor operation conditions.",2010,0, 5386,Using pattern detection techniques and refactoring to improve the performance of ASMOV,"One of the most important challenges in semantic Web is ontology matching. Ontology matching is a technology that enables semantic interoperability between structurally and semantically heterogeneous resources on the Web. Despite serious research efforts on ontology matching, matchers still suffers from severe problems with respect to the quality of matching results. Furthermore, Most of them take a lot of time for finding the correspondences. The aim of this paper is improving ontology matching results by adding the preprocessing phase for analyzing the input ontologies. This phase is added in order to solve problems caused by ontology diversity. We select one of the best matchers of Ontology Alignment Evaluation Initiative (OAEI) which is Automated Semantic Matching of Ontologies with Verification, called ASMOV. In preprocessing phase, some new patterns of ontologies are detected and then refactoring operations are used for reaching assimilated ontologies. 
Afterward, we applied ASMOV for testing our approach on both the original ontologies and their refactored counterparts. Experimental results show that these refactored ontologies are more efficient than the original unrepaired ones with respect to the standard evaluation measures i.e. Precision, Recall, and F-Measure.",2010,0, 5387,Measuring testability of aspect oriented programs,"Testability design is an effective way to realize the fault detection and isolation. It becomes crucial in the case of Aspect Oriented designs where control flows are generally not hierarchical, but are diffuse and distributed over the whole architecture. In this paper, we concentrate on detecting, pinpointing and suppressing potential testability weaknesses of a UML Aspect-class diagram. The attribute significant from design testability is called “class interaction”: it appears when potentially concurrent client/supplier relationships between classes exist in the system. These interactions point out parts of the design that need to be improved, driving structural modifications or constraints specifications, to reduce the final testing effort. This paper does an extensive review on testability of aspect oriented software, and put forth some relevant information about class-level testability.",2010,0, 5388,The research and development of comprehensive evaluation and management system for harmonic and negative-sequence,"Power quality interference sources generate a large number of harmonics and negative-sequence current into power grid in the process of using electricity. Harmonics and negative-sequence current not only affect power grid's safety and economical operation also cause interference to other normal users, which brings large threat and danger to the power grid and users. As an effective method to prevent and control power quality interference source, it is important to evaluate the harmonic and negative-sequence current at the installation part of power quality interference source, guide the user taking measures to control power quality in the design of current electricity-consummation. This paper describes the research and development of comprehensive evaluation and management system for harmonic and negative-sequence current. This software has the following functions: harmonic computation, comprehensive assessment of user harmonic and negative-sequence, filter design and checking, SVC measurement, harmonic source database management. This software models the system components, load and typical harmonic source under the CIGRE standard with graphical interface. It models the power supply network by the means of one-line diagram, completes the harmonic calculation by Monte-Carlo or Laguerre polynomial, and simulates the distribution of power system harmonics by statistical moment with harmonics power flow techniques. The evaluation of user's harmonic and negative-sequence based mainly on the GB, combined with the access point short-circuit capacity, power capacity and protocol capacity to calculate the limit value of harmonic and negative-sequence current that assigned to user. According to the typical user's harmonic emission level or measured value, we can calculate the value of harmonic current generated by user and execute assessment by comparing them. One way to design filter and carry on SVC evaluation is calculating by custom method. 
Another way is to cut the existing data into calculation and checking, and then the bus voltage waveform before and after inputting filter or SVC by graphical virtual operation can be obtained, and a rich and intuitive results reports and graphics can be generated at the same time. This software is easy to operate, and its calculation results are accurate. It is an effective tool to assess and manage power quality.",2010,0, 5389,An evolutionary algorithm for regression test suite reduction,"As the software is modified and new test cases are added to the test-suite, the size of the test-suite grows and the cost of regression testing increases. In order to decrease the cost of regression testing, researchers have focused on the use of test-suite reduction techniques, which identify a subset of test cases that provides the same coverage of the software, according to some criterion, as the original test-suite. This paper investigates the use of an evolutionary approach, called genetic algorithms, for test-suite reduction. The proposed model builds the initial population based on test history, calculates the fitness value using coverage and run time of test case, and then selectively breeds the successive generations using genetic operations and allows only the fit tests to the reduced suite. This generational process is repeated until an optimized test-suite is found.",2010,0, 5390,The development of monitoring and analyzing system of voltage sag,"It is necessary to monitor and analyze the power quality firstly in order to solve the power quality problems in power system, and then the possible reforming project can be put forward for improving power supply quality. According to the actual situation in the region, a system approach to voltage sag monitoring and analyzing has been designed and developed. The system and its composition, software design, network architecture and application effects, etc, are detail introduced, analyzed and discussed in this paper. System software, including front-desk program, the background-desk program, analysis software, adopt homemade advanced unary multi-point voltage sag criterion used for giving more voltage sag characteristics, improves the monitoring accuracy; second-order Butterworth low-pass filters designed to reduce the impact of the voltage harmonics on the voltage sag detection algorithm, so reduces misjudgment rate during the voltage sag detection. The system has taken a series of measures, including signal acquisition, signal isolation module, anti-jamming, that is why it has a good reliability and stability. In the May 1 2007, a load rejection and voltage sag incident occurred in a 110kV substation, the device accurate records provided significant assistance to the accident analysis. The system meets the design requirements, that is, can be online accurately, high-density monitors the measurement system's three-phase bus voltage. Measurement system can accurately record the system operation state, the voltage sag occurs in the system will be accurate and timely recorded, and be able to regularly analyze the overall operation of the system voltage, it is beneficial to make detailed, scientific judgments and analysis for the system voltage sag and incidents.",2010,0, 5391,Research and Design for Infrared Diagnostic System of Aviation Electronic Components,"The civil airports now mostly use the imported equipment, the maintenance of electronic circuit board becomes the important segment of aviation electronic equipment maintenance. 
In the rising and development of component level maintenance, as a new nondestructive testing technology, infrared fault diagnosis gets more research and application. The paper utilizes the Ti50 thermal infrared imager to design a set of electronic circuit of infrared diagnostic system. Considering the real-time and complexity of electronic circuit board, by applying different signal sources, we first group the electronic components of electronic circuit board, then measure, which can rapidly narrow the scope of failure and prevent undetected. Through testing laboratories and the actual use, the speed of the test system is high, diagnostic efficiency is good, with a strong practical value.",2010,0, 5392,The alice data quality monitoring system,"ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Due to the complexity of ALICE in terms of number of detectors and performance requirements, data quality monitoring (DQM) plays an essential role in providing an online feedback on the data being recorded. It intends to provide operators with precise and complete information to quickly identify and overcome problems, and, as a consequence, to ensure acquisition of high quality data. DQM typically involves the online gathering of data samples, their analysis by user-defined algorithms and the visualization of the monitoring results. In this paper, we illustrate the final design of the DQM software framework of ALICE, AMORE (Automatic Monitoring Environment), and its latest features and developments. We describe how this system is used to monitor the event data coming from the ALICE detectors allowing operators and experts to access a view of monitoring elements and to detect potential problems. Important features include the integration with the offline analysis and reconstruction framework and the interface with the electronic logbook that makes the monitoring results available everywhere through a web browser. Furthermore, we show the advantage of using multi-core processors through a parallel images/results production and the flexibility of the graphic user interface that gives to the user the possibility to apply filters and customize the visualization. We finally review the wide range of usage people make of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction. We also describe the various ways of accessing the monitoring results. We conclude with our experience, after the LHC restart, when monitoring the data quality in a realworld and challenging environment.",2010,0, 5393,The ATLAS trigger monitoring and operation in proton-proton collisions,"With the first proton-proton collisions at √s = 900 GeV in December 2009, the full chain of the ATLAS Trigger and Data Acquisition system could be tested under real conditions for the first time. Monitoring the performance and operation of these systems required a smooth and parallel running of many complex software tools depending on each other. They are the basis of rate measurements, data quality determination of selected objects and supervision of the system during the data taking. 
Based on the successful data taking experience with first collisions, the ATLAS trigger monitoring and operations performance are described.",2010,0, 5394,Hardware/software optimization of error detection implementation for real-time embedded systems,"This paper presents an approach to system-level optimization of error detection implementation in the context of fault-tolerant real-time distributed embedded systems used for safety-critical applications. An application is modeled as a set of processes communicating by messages. Processes are mapped on computation nodes connected to the communication infrastructure. To provide resiliency against transient faults, efficient error detection and recovery techniques have to be employed. Our main focus in this paper is on the efficient implementation of the error detection mechanisms. We have developed techniques to optimize the hardware/software implementation of error detection, in order to minimize the global worst-case schedule length, while meeting the imposed hardware cost constraints and tolerating multiple transient faults. We present two design optimization algorithms which are able to find feasible solutions given a limited amount of resources: the first one assumes that, when implemented in hardware, error detection is deployed on static reconfigurable FPGAs, while the second one considers partial dynamic reconfiguration capabilities of the FPGAs.",2010,0, 5395,Structural Aware Quantitative Interprocedural Dataflow Analysis,"A key challenge in system design whether for high performance computing or in embedded systems is to automatically partition software on the thread level for target architectures like multi-core, heterogeneous, or even hardware/software co-design systems. Similar to the techniques employed on the instruction level, the inter-procedural data flow in a software system is an essential property for identifying global data dependencies and hence extracting tasks or exploiting coarse grained parallelism. Also, detailed insight into the global data flow in quantity and its dynamics is vital to precisely estimate interconnect resources demands (e.g. bandwidth, latency) in distributed or embedded, heterogeneous target environments. This paper presents a novel approach to quantitative, global dataflow analysis based on a virtual instruction set architecture, which allows precise investigation of interprocedural dataflow over time of complex software system from the atomic to the transactional level. The introduced analysis system LLILA is part of the Synphony HW/SW co-design framework and can be used to evaluate the quality of system partitioning.",2010,0, 5396,Step Up Transformer online monitoring experience in Tucuruí Power Plant,"The Step Up Transformers at the Tucuruí Hydroelectric Power Plant are very important for the National Interconnected System (SIN). Due to that, and due to the severe work conditions, Eletrobrás Eletronorte has always kept a rigorous preventive maintenance program for these equipments. However, transformer failure history in the first powerhouse (older ones) led to the implantation of the online monitoring system, in order to detect the defects when they start, and mitigate the risks even more. System installation started in 2006, with sensors and software. Four transformers which were already operating began to be monitored and implantation for three more was in progress, taking advantage of the modularity and expandability features of the decentralized architecture used. 
The Architecture and the solutions applied in system implantation, as well as the results obtained, will be described in this paper. Some of the goals successfully attained were easier insurance negotiation for some equipment and more safety for the personnel, the equipment and the facility.",2010,0, 5397,Vision based cross sectional area estimator for industrial rubber profile extrusion process controlling,"This paper presents a vision based cross sectional area measurement system suitable for in-process quality controlling of rubber profile extrusions via cross section measurements. The system is basically developed for solid trapezoidal shaped rubber profiles commonly used in the rubber tack manufacturing industry. Since the extrusion continuously comes out the extruder, it is impossible to view the cross section directly. So in the proposed system, a laser projected at an angle on to the profile is used which is finally observed by a camera. Laser projections seen in each camera image is processed in a general purpose note book computer by the use of a vision software to estimate the cross section area of the actual profile. The paper contains the development approach of the method, experimental results and further improvement tips. Several test results are also presented in order to give an idea of the system accuracy and repeatability. The technology with further developments can be used as closed loop feedbacks for efficient process controlling of rubber profile extruders.",2010,0,5398 5398,Vision based cross sectional area estimator for industrial rubber profile extrusion process controlling,"This paper presents a vision based cross sectional area measurement system suitable for in-process quality controlling of rubber profile extrusions via cross section measurements. The system is basically developed for solid trapezoidal shaped rubber profiles commonly used in the rubber tack manufacturing industry. Since the extrusion continuously comes out the extruder, it is impossible to view the cross section directly. So in the proposed system, a laser projected at an angle on to the profile is used which is finally observed by a camera. Laser projections seen in each camera image is processed in a general purpose note book computer by the use of a vision software to estimate the cross section area of the actual profile. The paper contains the development approach of the method, experimental results and further improvement tips. Several test results are also presented in order to give an idea of the system accuracy and repeatability. The technology with further developments can be used as closed loop feedbacks for efficient process controlling of rubber profile extruders.",2010,0, 5399,PINCETTE - Validating changes and upgrades in networked software,"Summary form only given. PINCETTE is a STREP project under the European Community's 7th Framework Programme [FP7/2007-2013]. The project focuses on detecting failures resulting from software changes, thus improving the reliability of networked software systems. The goal of the project is to produce technology for efficient and scalable verification of complex evolving networked software systems, based on integration of static and dynamic analysis and verification algorithms, and the accompanying methodology. The resulting technology will also provide quality metrics to measure the thoroughness of verification. 
The PINCETTE consortium is composed of the following partners: IBM Israel, University of Oxford, Universita della Svizzera Italiana (USI), Universita degli Studi di Milano-Bicocca (UniMiB), Technical Research Center of Finland (VTT), ABB, and Israeli Aerospace Industries (IAI).",2010,0, 5400,Monitoring and fault diagnosis of photovoltaic panels,"Solar irradiance and temperature affect the performance of systems using photovoltaic generator. In the same way, it is essential to insure good performances of the installation so that its profitability won't be reduced. The objective of this work consists in diagnosing the panels faults and in certain cases in locating the faults using a model, the temperatures, the luminous flow, the speed of the wind, as well as the currents and the tensions. The development of software fault detection on a real installation is performed under the Matlab/Simulink environment.",2010,0, 5401,Analyzing personality types to predict team performance,"This paper presents an approach in analyzing personality types, temperament and team diversity to determine software engineering (SE) teams performance. The benefits of understanding personality types and its relationships amongst team members are crucial for project success. Rough set analysis was used to analyze Myers-Briggs Type Indicator (MBTI) personality types, Keirsey temperament, team diversity, and team performance. The result shows positive relationships between these attributes.",2010,0, 5402,Test effort optimization by prediction and ranking of fault-prone software modules,"Identification of fault-prone or not fault-prone modules is very essential to improve the reliability and quality of a software system. Once modules are categorized as fault-prone or not fault-prone, test effort are allocated accordingly. Testing effort and efficiency are primary concern and can be optimized by prediction and ranking of fault-prone modules. This paper discusses a new model for prediction and ranking of fault-prone software modules for test effort optimization. Model utilizes the classification capability of data mining techniques and knowledge stored in software metrics to classify the software module as fault-prone or not fault-prone. A decision tree is constructed using ID3 algorithm for the existing project data. Rules are derived form the decision tree and integrated with fuzzy inference system to classify the modules as either fault-prone or not fault-prone for the target data. The model is also able to rank the fault-prone module on the basis of its degree of fault-proneness. The model accuracy are validated and compared with some other models by using the NASA projects data set of PROMOSE repository.",2010,0, 5403,Reliability comparison of computer based core temperature monitoring system with two and three thermocouples per sub-assembly for Fast Breeder Reactors,"Prototype Fast Breeder Reactor (PFBR) is a mixed oxide fuelled, sodium cooled, 500 MWe, pool type fast breeder reactor under construction at Kalpakkam, India. The reactor core consists of fuel pins assembled in a number of hexagonal shaped, vertically stacked SubAssemblies (SA). Sodium flows from the bottom of the SAs, takes heat from the fission reaction, comes out through the top. Reactor protection systems are provided to trip the reactor in case of design basis events which may cause the safety parameters (like clad, fuel and coolant temperature) to cross their limits. 
Computer based Core temperature monitoring system (CTMS) is one of the protection systems. CTMS for PFBR has two thermocouples (TC) at the outlet of each SA(other than central SA) to measure coolant outlet temperature, three TC at central SA outlet and six thermocouples to measure coolant inlet temperature. Each thermocouple at SA outlet is electronically triplicated and fed to three computer systems for further processing and generate reactor trip signal whenever necessary. Since the system has two sensors per SA and three processing units the redundancy provided is not independent. A study is done to analyze the reliability implications of providing three thermocouples at the outlet of each SA and thereby feed independent thermocouple signals to three computer systems. Failure data derived from fast reactor experiences and from reliability prediction methods provided by handbooks are used. Fault trees are built for the existing CTMS system with two TC per SA and for the proposed system with three TC per SA. Failure probability upon demand and spurious trip rates are estimated as reliability indicators. Since the computer systems have software intelligence to sense invalid field inputs, not all sensor failures would directly affect the system probability to fail upon a demand. For instance, the coolant outlet temperature cannot be lower than the coolant inlet temperature. This intelligence is taken into account by assuming different “fault coverage percentage” and comparing the results. A 100% fault coverage means the software algorithm could detect all of the possible thermocouple faults. It was found that the system probability to fail upon demand is reduced in the new independent system but the spurious trip rate is slightly worse. The diagnostic capability is marginally affected due to complete independence. The paper highlights how an intelligent computer based safety system poses difficulties in modeling and the checks and balances between an interlinked and independent redundancy.",2010,0, 5404,Regulatory review of computer based systems: Indian perspectives,"The use of state of art digital instrumentation and control (I&C) in safety and safety related systems in nuclear power plants has become prevalent due to the performance in terms of accuracy, computational capabilities and data archiving capability for future diagnosis. Added advantages in computer based systems are fault tolerance, self-testing, signal validation capability and process system diagnostics. But, uncertainty exists about the quality, reliability and performance of such software based nuclear instrumentation which poses new challenges for the industry and regulators in using them for safety and safety related systems. To obtain adequate confidence in licensing them for use in NPPs, CBS were deployed gradually from monitoring system to control system (i.e, non-safety, safety related & lastly safety systems). Based upon the experience over a decade, AERB safety guide AERB/SGID-25 was prepared to prescribe the criteria and requirements to assess the qualitative reliability of such software based nuclear instrumentation. This paper describes the regulatory review and audit process as required by the above guide. Further, Software Configuration Management (SCM) is an important item during life cycle of CBS, whether it is design phase or operating phase. Configuration control becomes necessary due to operation feedback, introduction of additional features and due to obsolescence.
Therefore configuration control during operating phase for CBS becomes all the more important. This paper elaborates on the regulatory approach adopted by AERB for regulatory review and control of design modifications in operating phase of NPPs. This paper also covers a case study of AERB audit on verification & validation activities for software based safety and safety related systems used in an Indian plant.",2010,0, 5405,Optimal testing resource allocation for modular software considering imperfect debugging and change point using genetic algorithm,"Optimal decision making is of utmost importance for planning, controlling, and working of any industry. One of the fields where mathematical modeling and optimization have been vastly applied is software reliability, which is the most significant quality metric for commercial software. Such softwares are generally modular in structure. Testing provides a mathematical measure of software reliability and is a process of raising the confidence that the software is free of flaws. However, due to the complexity of software, the testing team may not be able to remove/correct the fault perfectly on observation/detection of a failure and the original fault may remain resulting in a phenomenon known as imperfect debugging, or get replaced by another fault causing error generation. Moreover, fault detection/correction rate may not be same throughout testing; it may change at any time moment called as change point. Also, testing cannot be done indefinitely due to the availability of limited testing resources. In this paper we formulate an optimization problem to allocate the resources among different modules such that the total fault removal is maximized while incorporating the effect of both types of imperfect debugging and change point. The relative importance of each module is obtained using Analytical Hierarchy Process. We have also considered the problem of determining minimum requirement of the testing resources so that a desired proportion of faults are removed from each module. The non linear optimization problems are solved using genetic algorithm. Numerical example has been discussed to illustrate the solution of the formulated optimal resource allocation problems.",2010,0, 5406,"Clone detection: Why, what and how?","Excessive code duplication is a bane of modern software development. Several experimental studies show that on average 15 percent of a software system can contain source code clones - repeatedly reused fragments of similar code. While code duplication may increase the speed of initial software development, it undoubtedly leads to problems during software maintenance and support. That is why many developers agree that software clones should be detected and dealt with at every stage of software development life cycle. This paper is a brief survey of current state-of-the-art in clone detection. First, we highlight main sources of code cloning such as copy-and-paste programming, mental code patterns and performance optimizations. We discuss reasons behind the use of these techniques from the developer's point of view and possible alternatives to them. Second, we outline major negative effects that clones have on software development. The most serious drawback duplicated code have on software maintenance is increasing the cost of modifications - any modification that changes cloned code must be propagated to every clone instance in the program. 
Software clones may also create new software bugs when a programmer makes some mistakes during code copying and modification. Increase of source code size due to duplication leads to additional difficulty of code comprehension. Third, we review existing clone detection techniques. Classification based on used source code representation model is given in this work. We also describe and analyze some concrete examples of clone detection techniques highlighting main distinctive features and problems that are present in practical clone detection. Finally, we point out some open problems in the area of clone detection. Currently questions like ""What is a code clone?"", ""Can we predict the impact clones have on software quality"" and ""How can we increase both clone detection precision and recall at the same time? "" stay open to further research. We list the most important questions in modern clone detection and explain why they continue to remain unanswered despite all the progress in clone detection research.",2010,0, 5407,Source code modification technology based on parameterized code patterns,"Source code modification is one of the most frequent operations which developers perform in software life cycle. Such operation can be performed in order to add new functionality, fix bugs or bad code style, optimize performance, increase readability, etc. During the modification of existing source code developer needs to find parts of code, which meet to some conditions, and change it according to some rules. Usually developers perform such operations repeatedly by hand using primitive search/replace mechanisms and ""copy and paste programming"", and that is why manual modification of large-scale software systems is a very error-prone and time-consuming process. Automating source code modifications is one of the possible ways of coping with this problem because it can considerably decrease both the amount of errors and the time needed in the modification process. Automated source code modification technique based on parameterized source code patterns is considered in this article. Intuitive modification description that does not require any knowledge of complex transformation description languages is the main advantage of our technique. We achieve this feature by using a special source code pattern description language which is closely tied with the programming language we're modifying. This allows developers to express the modification at hand as simple ""before ""/""after"" source code patterns very similar to source code. Regexp-like elements are added to the language to increase its expressional power. The source code modification is carried out using difference abstract syntax trees. We build a set of transformation operations based on ""before""/""after"" patterns (using algorithm for change detection in hierarchically structured information) and apply them to those parts of source code that match with the search ""before "" pattern. After abstract syntax tree transformation is completed we pretty-print them back to source code. A prototype of source code modification system based on this technique has been implemented for the Java programming language. Experimental results show that this technique in some cases can increase the speed of source code modifications by several orders of magnitude, at the same time completely avoiding ""copy-and paste "" errors.
In future we are planning to integrate prototype with existing development environments such as Eclipse and NetBeans.",2010,0, 5408,Simulation strategies for the LHC ATLAS experiment,"The ATLAS experiment, operational at the new LHC collider, is fully simulated using a Geant 4-based simulation program within the ATLAS Athena framework. This simulation software is used for large-scale production of events on the LHC Computing Grid as well as for smaller-scale studies. Simulation of ATLAS requires many components, from the generators that simulate particle collisions to packages simulating the response of the various detectors and triggers. The latest developments in the saga of ATLAS simulation have been focussed on better representation of the real detector, based on first LHC data, and on performance improvements to ensure that event simulation is not a limiting factor in ATLAS physics analyses. The full process is constantly monitored and profiled to guarantee the best use of available resources without any degradation in the quality and accuracy of the simulation.",2010,0, 5409,Diagnostic systems and resource utilization of the ATLAS high level trigger,"With the first year of successful data taking of proton-proton collisions at LHC, the full chain of the ATLAS Trigger and Data Acquisition System could be tested under real conditions. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance, assess the overall quality of the trigger selection and analyze the resource utilization during collision runs. Monitoring the performance and operation of these systems required smooth and parallel running of many complex software tools depending on each other. These are the basis for rate measurements, data quality determination of selected objects and supervision of the system during the data taking. Based on the data taking experience with first collisions, we describe the ATLAS trigger monitoring and operations performance.",2010,0, 5410,New ultra high resolution LYSO pentagon detector blocks for lower-cost murine PET-CT (MuPET/CT),"We developed and built a solid LYSO detector ring for a new MD Anderson Murine PET (MUPET) camera using regular Ø 19 mm photomultipliers in quadrant-sharing (PQS) configuration. 180 PQS pentagon blocks distributed in 6 sub-rings (30 blocks each) constitute the detector system. Each block has nominal dimensions of 19 × 19 × 10 mm3 in a 13 × 13 crystal array. To form a zero gap solid ring, PQS-blocks were ground into pentagon shapes with a taper angle of 6° on the edge crystals and optical surfaces. Two crystal widths were used for the blocks; a largest width for two edge crystals and, smallest for inner crystals. We explored the possibility of increasing the detectors performance by implementing new design, materials and techniques, testing the detectors response and, measuring the timing resolution. List-mode data were acquired using Ga-68 source with our high yield pileup-event recovery (HYPER) electronics and data acquisition software. Four PQS-blocks were selected to evaluate our detector quality and our mass-production method. The performances of these blocks were quite similar. A typical block is presented in this study. This block has a packing fraction of 95%. All 169 crystal detectors are clearly decoded, peak-to-valley ratio of 2.4, light collection efficiency of 78% and, energy resolution of 14% at 511 keV.
Average timing resolution of 430-ps (FWHM) was achieved by single crystal with PQS block-to-block coincidence of 530-ps. This PQS-pentagon block design shows the feasibility of high resolution (~1.2 mm) MuPET-CT detector ring using the lower cost Ø 19 mm PMTs as an alternative to the more expensive position sensitive PMTs or solid state detectors.",2010,0, 5411,Simulation of high quality ultrasound imaging,"This paper investigates if the influence on image quality using physical transducers can be simulated with an sufficient accuracy to reveal system performance. The influence is investigated in a comparative study between Synthetic Aperture Sequential Beamformation (SASB) and Dynamic Receive Focus (DRF). The study is performed as a series of simulations and validated by measurements. The influence from individual element impulse response, phase, and amplitude deviations are quantized by the lateral resolution (LR) at Full Width at Half Maximum (FWHM), Full Width at One-Tenth Maximum (FWOTM), and at Full Width at One-Hundredth Maximum (FWOHM) of 9 points spread functions resulting from evenly distributed point targets at depths ranging from 10 mm to 90 mm. The results are documented for a 64 channel system, using a 192 element linear array transducer model. A physical BK Medical 8804 transducer is modeled by incorporating measured element pulse echo responses into the simulation software. Validation is performed through measurements on a water phantom with three metal wires, each with a diameter of 0.07 mm. Results show that when comparing measurement and simulation, the lateral beam profile using SASB can be estimated with a correlation coefficient of 0.97. Further, it is shown that SASB successfully maintains a constant LR though depth at FWHM, and is a factor of 2.3 better than DRF at 80 mm. However, when using SASB the LR at FWOHM is affected by non-ideal element responses. Introducing amplitude and phase compensation, the LR at FWOHM improves from 6.3 mm to 4.7 mm and is a factor of 2.2 better than DRF. This study has shown that individual element impulse response, phase, and amplitude deviations are important to include in simulated system performance evaluations. Furthermore, it is shown that SASB provides a constant LR through depth and has improved resolution and contrast compared to DRF.",2010,0, 5412,Design considerations for a high voltage compact power transformer,"This paper presents a new topology for a high voltage 50kV, high frequency (HF) 20kHz, multi-cored transformer. The transformer is suitable for use in pulsed power application systems. The main requirements are: high voltage capability, small size and weight. The HV, HF transformer is a main critical block of a high frequency power converter system. The transformer must have high electrical efficiency and in the proposed approach has to be optimized by the number of the cores. The transformer concept has been investigated analytically and through software simulations and experiments. This paper introduces the transformer topology and discusses the design procedure. Experimental measurements to predict core losses are also presented.",2010,0, 5413,Advanced frequency estimation technique using gain compensation,"The frequency is an important operating parameter for protection, control, and stability of a power system. Due to the sudden change in generation and loads or faults in power system, the frequency is supposed to deviate from its nominal value. 
It is essential that the frequency be maintained very close to its nominal frequency. An accurate monitoring of the power frequency is essential to optimum operation and prevention for wide area blackout. As most conventional frequency estimation schemes are based on DFT filter, it has been pointed out that the gain error could cause the defects when the frequency is deviated from nominal value. This paper presents an advanced frequency estimation technique using gain compensation to enhance the DFT filter based technique. The proposed technique can reduce the gain error caused when the frequency deviates from nominal value. Simulation studies using both the data from EMTP-RV software and user defined arbitrary signals are performed to demonstrate the effectiveness of the proposed algorithm. The simulation results show that the proposed algorithm achieves good performance under both steady state tests and dynamic conditions.",2010,0, 5414,The Network Interdisciplinary Computing Environment,"The study of mobile networking modeling for ad hoc networks is still in its infancy, a full analysis environment of common utilities does not exist. While some very important capability currently exists, a means of integrating them together into a production quality environment does not. The Network Interdisciplinary Computing Environment (NiCE) is designed to accomplish this task. NiCE is built around a new common data model and format for mobile networking experiments and codes. NiCE provides network researchers a cross-platform setup, runtime, analysis and visualization environment. Since this computational discipline is not as mature as some other physics based modeling efforts, it is difficult to exchange models and results between experimental and computational resources to develop high-fidelity, accurate predictions of how various networks will perform under realistic military conditions.",2010,0, 5415,Maturity model for process of academic management,"The segment of education in Brazil, especially in higher education, has undergone major changes. The search for professionalization, for cost reduction and process standardization has led many institutions to associate through partnerships or acquisitions. On the other hand, maturity models have been used very successfully in several areas of knowledge especially in software development, as an approach for quality models. This paper presents a methodology for assessing the maturity of academic process management in private institutions of higher education that, initially, aims the Brazilian market, but its idea can be applied to a global maturity model of academic process management.",2010,0, 5416,"Measuring complexity, effectiveness and efficiency in software course projects","This paper discusses results achieved in measuring complexity, effectiveness and efficiency, in a series of related software course projects, spanning a period of seven years. We focus on how the complexity of those projects was measured, and how the success of the students in effectively and efficiently taming that complexity was assessed. This required defining, collecting, validating and analyzing several indicators of size, effort and quality; their rationales, advantages and limitations are discussed. The resulting findings helped to improve the process itself.",2010,0, 5417,A discriminative model approach for accurate duplicate bug report retrieval,"Bug repositories are usually maintained in software projects. 
Testers or users submit bug reports to identify various issues with systems. Sometimes two or more bug reports correspond to the same defect. To address the problem with duplicate bug reports, a person called a triager needs to manually label these bug reports as duplicates, and link them to their ""master"" reports for subsequent maintenance work. However, in practice there are considerable duplicate bug reports sent daily; requesting triagers to manually label these bugs could be highly time consuming. To address this issue, recently, several techniques have be proposed using various similarity based metrics to detect candidate duplicate bug reports for manual verification. Automating triaging has been proved challenging as two reports of the same bug could be written in various ways. There is still much room for improvement in terms of accuracy of duplicate detection process. In this paper, we leverage recent advances on using discriminative models for information retrieval to detect duplicate bug reports more accurately. We have validated our approach on three large software bug repositories from Firefox, Eclipse, and OpenOffice. We show that our technique could result in 17-31%, 22-26%, and 35-43% relative improvement over state-of-the-art techniques in OpenOffice, Firefox, and Eclipse datasets respectively using commonly available natural language information only.",2010,0, 5418,A machine learning approach for tracing regulatory codes to product specific requirements,"Regulatory standards, designed to protect the safety, security, and privacy of the public, govern numerous areas of software intensive systems. Project personnel must therefore demonstrate that an as-built system meets all relevant regulatory codes. Current methods for demonstrating compliance rely either on after-the-fact audits, which can lead to significant refactoring when regulations are not met, or else require analysts to construct and use traceability matrices to demonstrate compliance. Manual tracing can be prohibitively time-consuming; however automated trace retrieval methods are not very effective due to the vocabulary mismatches that often occur between regulatory codes and product level requirements. This paper introduces and evaluates two machine-learning methods, designed to improve the quality of traces generated between regulatory codes and product level requirements. The first approach uses manually created traceability matrices to train a trace classifier, while the second approach uses web-mining techniques to reconstruct the original trace query. The techniques were evaluated against security regulations from the USA government's Health Insurance Privacy and Portability Act (HIPAA) traced against ten healthcare related requirements specifications. Results demonstrated improvements for the subset of HIPAA regulations that exhibited high fan-out behavior across the requirements datasets.",2010,0, 5419,Predicting build outcome with developer interaction in Jazz,"Investigating the human aspect of software development is becoming prominent in current research. Studies found that the misalignment between the social and technical dimensions of software work leads to losses in developer productivity and defects. We use the technical and social dependencies among pairs of developers to predict the success of a software build. Using the IBM Jazz™ data we found information about developers and their social and technical relation can build a powerful predictor for the success of a software build.
Investigating human aspects of software development is becoming prominent in current research. High misalignment between the social and technical dimensions of software work lowers productivity and quality.,2010,0, 5420,Transparent combination of expert and measurement data for defect prediction: an industrial case study,"Defining strategies on how to perform quality assurance (QA) and how to control such activities is a challenging task for organizations developing or maintaining software and software-intensive systems. Planning and adjusting QA activities could benefit from accurate estimations of the expected defect content of relevant artifacts and the effectiveness of important quality assurance activities. Combining expert opinion with commonly available measurement data in a hybrid way promises to overcome the weaknesses of purely data-driven or purely expert-based estimation methods. This article presents a case study of the hybrid estimation method HyDEEP for estimating defect content and QA effectiveness in the telecommunication domain. The specific focus of this case study is the use of the method for gaining quantitative predictions. This aspect has not been empirically analyzed in previous work. Among other things, the results show that for defect content estimation, the method performs significantly better statistically than purely data-based methods, with a relative error of 0.3 on average (MMRE).",2010,0, 5421,A cost-benefit framework for making architectural decisions in a business context,"In any IT-intensive organization, it is useful to have a model to associate a value with software and system architecture decisions. More generally, any effort-a project undertaken by a team-needs to have an associated value to offset its labor and capital costs. Unfortunately, it is extremely difficult to precisely evaluate the benefit of ""architecture projects""-those that aim to improve one or more quality attributes of a system via a structural transformation without (generally) changing its behavior. We often resort to anecdotal and informal ""hand-waving"" arguments of risk reduction or increased developer productivity. These arguments are typically unsatisfying to the management of organizations accustomed to decision-making based on concrete metrics. This paper will discuss research done to address this long-standing dilemma. Specifically, we will present a model derived from analyzing actual projects undertaken at Vistaprint Corporation. The model presented is derived from an analysis of effort tracked against modifications to specific software components before and after a significant architectural transformation to the subsystem housing those components. In this paper, we will discuss the development, implementation, and iteration of the model and the results that we have obtained.",2010,0, 5422,JDF: detecting duplicate bug reports in Jazz,"Both developers and users submit bug reports to a bug repository. These reports can help reveal defects and improve software quality. As the number of bug reports in a bug repository increases, the number of the potential duplicate bug reports increases. Detecting duplicate bug reports helps reduce development efforts in fixing defects. However, it is challenging to manually detect all potential duplicates because of the large number of existing bug reports. 
This paper presents JDF (representing Jazz Duplicate Finder), a tool that helps users to find potential duplicates of bug reports on Jazz, which is a team collaboration platform for software development and process management. JDF finds potential duplicates for a given bug report using natural language and execution information.",2010,0, 5423,Exploratory study of a UML metric for fault prediction,"This paper describes the use of a UML metric, an approximation of the CK-RFC metric, for predicting faulty classes before their implementation. We built a code-based prediction model of faulty classes using Logistic Regression. Then, we tested it in different projects, using on the one hand their UML metrics, and on the other hand their code metrics. To decrease the difference of values between UML and code measures, we normalized them using Linear Scaling to Unit Variance. Our results indicate that the proposed UML RFC metric can predict faulty code as well as its corresponding code metric does. Moreover, the normalization procedure used was of great utility, not just for enabling our UML metric to predict faulty code, using a code-based prediction model, but also for improving the prediction results across different packages and projects, using the same model.",2010,0, 5424,Capturing the long-term impact of changes,"Developers change source code to add new functionality, fix bugs, or refactor their code. Many of these changes have immediate impact on quality or stability. However, some impact of changes may become evident only in the long term. The goal of this thesis is to explore the long-term impact of changes by detecting dependencies between code changes and by measuring their influence on software quality, software maintainability, and development effort. Being able to identify the changes with the greatest long-term impact will strengthen our understanding of a project's history and thus shape future code changes and decisions.",2010,0, 5425,First International Workshop on Quantitative Stochastic Models in the Verification and Design of Software Systems (QUOVADIS 2010),"Nowadays requirements related to quality attributes such as performance, reliability, safety and security are often considered the most important requirements for software development projects. To reason about these quality attributes different stochastic models can be used. These models enable probabilistic verification as well as quantitative prediction at design time. On the other hand, these models could be also used to perform runtime adaptation in order to achieve certain quality goals. This workshop aims to provide a forum for researchers in these areas that should help with the adoption of quantitative stochastic models into general software development processes.",2010,0, 5426,Spatial error concealment technique for losslessly compressed images using data hiding in error-prone channels,"Error concealment techniques are significant due to the growing interest in imagery transmission over error-prone channels. This paper presents a spatial error concealment technique for losslessly compressed images using least significant bit (LSB)-based data hiding to reconstruct a close approximation after the loss of image blocks during image transmission. Before transmission, block description information (BDI) is generated by applying quantization following discrete wavelet transform. This is then embedded into the LSB plane of the original image itself at the encoder. 
At the decoder, this BDI is used to conceal blocks that may have been dropped during the transmission. Although the original image is modified slightly by the message embedding process, no perceptible artifacts are introduced and the visual quality is sufficient for analysis and diagnosis. In comparisons with previous methods at various loss rates, the proposed technique is shown to be promising due to its good performance in the case of a loss of isolated and continuous blocks.",2010,0, 5427,Automated Derivation of Application-Aware Error Detectors Using Static Analysis: The Trusted Illiac Approach,"This paper presents a technique to derive and implement error detectors to protect an application from data errors. The error detectors are derived automatically using compiler-based static analysis from the backward program slice of critical variables in the program. Critical variables are defined as those that are highly sensitive to errors, and deriving error detectors for these variables provides high coverage for errors in any data value used in the program. The error detectors take the form of checking expressions and are optimized for each control-flow path followed at runtime. The derived detectors are implemented using a combination of hardware and software and continuously monitor the application at runtime. If an error is detected at runtime, the application is stopped so as to prevent error propagation and enable a clean recovery. Experiments show that the derived detectors achieve low-overhead error detection while providing high coverage for errors that matter to the application.",2011,0, 5428,Nondetection Zone Assessment of an Active Islanding Detection Method and its Experimental Evaluation,"This paper analytically determines the nondetection zone (NDZ) of an active islanding detection method, and proposes a solution to obviate the NDZ. The method actively injects a negative-sequence current through the interface voltage-sourced converter (VSC) of a distributed generation (DG) unit, as a disturbance signal for islanding detection. The estimated magnitude of the corresponding negative-sequence voltage at the PCC is used as the islanding detection signal. In this paper, based on a laboratory test system, the performance of the islanding detection method under UL1741 anti-islanding test conditions is evaluated. Then, determining the NDZ of the method and proposing the countermeasure, the existence of the NDZ and the performance of the modified method to eliminate the NDZ is verified based on simulation results in PSCAD/EMTDC software environment and experimental tests.",2011,0, 5429,Self-Supervising BPEL Processes,"Service compositions suffer changes in their partner services. Even if the composition does not change, its behavior may evolve over time and become incorrect. Such changes cannot be fully foreseen through prerelease validation, but impose a shift in the quality assessment activities. Provided functionality and quality of service must be continuously probed while the application executes, and the application itself must be able to take corrective actions to preserve its dependability and robustness. We propose the idea of self-supervising BPEL processes, that is, special-purpose compositions that assess their behavior and react through user-defined rules. Supervision consists of monitoring and recovery. The former checks the system's execution to see whether everything is proceeding as planned, while the latter attempts to fix any anomalies. 
The paper introduces two languages for defining monitoring and recovery and explains how to use them to enrich BPEL processes with self-supervision capabilities. Supervision is treated as a cross-cutting concern that is only blended at runtime, allowing different stakeholders to adopt different strategies with no impact on the actual business logic. The paper also presents a supervision-aware runtime framework for executing the enriched processes, and briefly discusses the results of in-lab experiments and of a first evaluation with industrial partners.",2011,0, 5430,GUI Interaction Testing: Incorporating Event Context,"Graphical user interfaces (GUIs), due to their event-driven nature, present an enormous and potentially unbounded way for users to interact with software. During testing, it is important to “adequately cover” this interaction space. In this paper, we develop a new family of coverage criteria for GUI testing grounded in combinatorial interaction testing. The key motivation of using combinatorial techniques is that they enable us to incorporate “context” into the criteria in terms of event combinations, sequence length, and by including all possible positions for each event. Our new criteria range in both efficiency (measured by the size of the test suite) and effectiveness (the ability of the test suites to detect faults). In a case study on eight applications, we automatically generate test cases and systematically explore the impact of context, as captured by our new criteria. Our study shows that by increasing the event combinations tested and by controlling the relative positions of events defined by the new criteria, we can detect a large number of faults that were undetectable by earlier techniques.",2011,0, 5431,Recovery Device for Real-Time Dual-Redundant Computer Systems,"This paper proposes the design of specialized hardware, called Recovery Device, for a dual-redundant computer system that operates in real-time. Recovery Device executes all fault-tolerant services including fault detection, fault type determination, fault localization, recovery of system after temporary (transient) fault, and reconfiguration of system after permanent fault. The paper also proposes the algorithms for determination of fault type (whether the fault is temporary or permanent) and localization of faulty computer without using self-testing techniques and diagnosis routines. Determination of fault type allows us to eliminate only the computer with a permanent fault. In other words, the determination of fault type prevents the elimination of nonfaulty computer because of short temporary fault. On the other hand, localization of faulty computer without using self-testing techniques and diagnosis routines shortens the recovery point time period and reduces the probability that a fault will occur during the execution of fault-tolerant procedure. This is very important for real-time fault-tolerant systems. These contributions bring both an increase in system performance and an increase in the degree of system reliability.",2011,0, 5432,hiCUDA: High-Level GPGPU Programming,"Graphics Processing Units (GPUs) have become a competitive accelerator for applications outside the graphics domain, mainly driven by the improvements in GPU programmability. Although the Compute Unified Device Architecture (CUDA) is a simple C-like interface for programming NVIDIA GPUs, porting applications to CUDA remains a challenge to average programmers.
In particular, CUDA places on the programmer the burden of packaging GPU code in separate functions, of explicitly managing data transfer between the host and GPU memories, and of manually optimizing the utilization of the GPU memory. Practical experience shows that the programmer needs to make significant code changes, often tedious and error-prone, before getting an optimized program. We have designed hiCUDA, a high-level directive-based language for CUDA programming. It allows programmers to perform these tedious tasks in a simpler manner and directly to the sequential code, thus speeding up the porting process. In this paper, we describe the hiCUDA directives as well as the design and implementation of a prototype compiler that translates a hiCUDA program to a CUDA program. Our compiler is able to support real-world applications that span multiple procedures and use dynamically allocated arrays. Experiments using nine CUDA benchmarks show that the simplicity hiCUDA provides comes at no expense to performance.",2011,0, 5433,Automatic Generation of Multicore Chemical Kernels,"Abstract-This work presents the Kinetics Preprocessor: Accelerated (KPPA), a general analysis and code generation tool that achieves significantly reduced time-to-solution for chemical kinetics kernels on three multicore platforms: NVIDIA GPUs using CUDA, the Cell Broadband Engine, and Intel Quad-Core Xeon CPUs. A comparative performance analysis of chemical kernels from WRFChem and the Community Multiscale Air Quality Model (CMAQ) is presented for each platform in double and single precision on coarse and fine grids. We introduce the multicore architecture parameterization that KPPA uses to generate a chemical kernel for these platforms and describe a code generation system that produces highly tuned platform-specific code. Compared to state-of-the-art serial implementations, speedups exceeding 25x are regularly observed, with a maximum observed speedup of 41.1x in single precision.",2011,0, 5434,Software-Based Self-Test of Set-Associative Cache Memories,"Embedded microprocessor cache memories suffer from limited observability and controllability creating problems during in-system tests. This paper presents a procedure to transform traditional march tests into software-based self-test programs for set-associative cache memories with LRU replacement. Among all the different cache blocks in a microprocessor, testing instruction caches represents a major challenge due to limitations in two areas: 1) test patterns which must be composed of valid instruction opcodes and 2) test result observability: the results can only be observed through the results of executed instructions. For these reasons, the proposed methodology will concentrate on the implementation of test programs for instruction caches. The main contribution of this work lies in the possibility of applying state-of-the-art memory test algorithms to embedded cache memories without introducing any hardware or performance overheads and guaranteeing the detection of typical faults arising in nanometer CMOS technologies.",2011,0, 5435,An Adaptive Filter to Approximate the Bayesian Strategy for Sonographic Beamforming,"A first-principles task-based approach to the design of medical ultrasonic imaging systems for breast lesion discrimination is described. This study explores a new approximation to the ideal Bayesian observer strategy that allows for object heterogeneity.
The new method, called iterative Wiener filtering, is implemented using echo data simulations and a phantom study. We studied five lesion features closely associated with visual discrimination for clinical diagnosis. A series of human observer measurements for the same image data allowed us to quantitatively compare alternative beamforming strategies through measurements of visual discrimination efficiency. Employing the Smith-Wagner model observer, we were able to breakdown efficiency estimates and identify the processing stage at which performance losses occur. The methods were implemented using a commercial scanner and a cyst phantom to explore development of spatial filters for systems with shift-variant impulse response functions. Overall we found that significant improvements were realized over standard B-mode images using a delay-and-sum beamformer but at the cost of higher complexity and computational load.",2011,0, 5436,"Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities","Security inspection and testing require experts in security who think like an attacker. Security experts need to know code locations on which to focus their testing and inspection efforts. Since vulnerabilities are rare occurrences, locating vulnerable code locations can be a challenging task. We investigated whether software metrics obtained from source code and development history are discriminative and predictive of vulnerable code locations. If so, security experts can use this prediction to prioritize security inspection and testing efforts. The metrics we investigated fall into three categories: complexity, code churn, and developer activity metrics. We performed two empirical case studies on large, widely used open-source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. The results indicate that 24 of the 28 metrics collected are discriminative of vulnerabilities for both projects. The models using all three types of metrics together predicted over 80 percent of the known vulnerable files with less than 25 percent false positives for both projects. Compared to a random selection of files for inspection and testing, these models would have reduced the number of files and the number of lines of code to inspect or test by over 71 and 28 percent, respectively, for both projects.",2011,0, 5437,Simple Fault Diagnosis Based on Operating Characteristic of Brushless Direct-Current Motor Drives,"In this paper, a simple fault diagnosis scheme for brushless direct-current motor drives is proposed to maintain control performance under an open-circuit fault. The proposed scheme consists of a simple algorithm using the measured phase current information and detects open-circuit faults based on the operating characteristic of motors. It requires no additional sensors or electrical devices to detect open-circuit faults and can be embedded into the existing drive software as a subroutine without excessive computation effort. The feasibility of the proposed fault diagnosis algorithm is proven by simulation and experimental results.",2011,0, 5438,Wireless Sensor Networks for Distributed Chemical Sensing: Addressing Power Consumption Limits With On-Board Intelligence,"Chemicals detection and quantification is extremely important for ensuring safety and security in multiple application domains like smart environments, building automation, etc. 
Characteristics of chemical signal propagation make single point of measure approach mostly inefficient. Distributed chemical sensing with wireless platforms may be the key for reconstructing chemical images of sensed environment but its development is currently hampered by technological limits on solid-state sensors power management. We present the implementation of power saving sensor censoring strategies on a novel wireless electronic nose platform specifically designed for cooperative chemical sensing and based on TinyOS. An on-board sensor fusion component complements its software architecture with the capability of locally estimate air quality and chemicals concentrations. Each node is hence capable to decide the informative content of sampled data extending the operative lifespan of the entire network. Actual power savings are modeled and estimated with a measurement approach in experimental scenarios.",2011,0, 5439,An Automatic Approach for Radial Lens Distortion Correction From a Single Image,"Radial lens distortion correction is a fundamental task in photogrammetry and computer vision disciplines, mainly when metric and quality results are required. Radial lens distortion estimation plays an essential role in several photogrammetric tasks such as: registration of sensors, 3-D reconstruction, and mapping of textures. This paper presents an iterative numerical approach for the automatic estimation and correction of radial lens distortion using only a single image. To this end, several geometric constraints (rectilinear elements and vanishing points) are considered. The reported experimental tests indicate that within certain limits, results from a single image can stand the comparison with those from multiple image bundle adjustment.",2011,0, 5440,A General Software Defect-Proneness Prediction Framework,"BACKGROUND - Predicting defect-prone software components is an economically important activity and so has received a good deal of attention. However, making sense of the many, and sometimes seemingly inconsistent, results is difficult. OBJECTIVE - We propose and evaluate a general framework for software defect prediction that supports 1) unbiased and 2) comprehensive comparison between competing prediction systems. METHOD - The framework is comprised of 1) scheme evaluation and 2) defect prediction components. The scheme evaluation analyzes the prediction performance of competing learning schemes for given historical data sets. The defect predictor builds models according to the evaluated learning scheme and predicts software defects with new data according to the constructed model. In order to demonstrate the performance of the proposed framework, we use both simulation and publicly available software defect data sets. RESULTS - The results show that we should choose different learning schemes for different data sets (i.e., no scheme dominates), that small details in conducting how evaluations are conducted can completely reverse findings, and last, that our proposed framework is more effective and less prone to bias than previous approaches. CONCLUSIONS - Failure to properly or fully evaluate a learning scheme can be misleading; however, these problems may be overcome by our proposed framework.",2011,0, 5441,Dynamic QoS Management and Optimization in Service-Based Systems,"Service-based systems that are dynamically composed at runtime to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. 
However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimization of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment, and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analyzed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.",2011,0, 5442,Optimizing the Performance of Virtual Machine Synchronization for Fault Tolerance,"Hypervisor-based fault tolerance (HBFT), which synchronizes the state between the primary VM and the backup VM at a high frequency of tens to hundreds of milliseconds, is an emerging approach to sustaining mission-critical applications. Based on virtualization technology, HBFT provides an economic and transparent fault tolerant solution. However, the advantages currently come at the cost of substantial performance overhead during failure-free, especially for memory intensive applications. This paper presents an in-depth examination of HBFT and options to improve its performance. Based on the behavior of memory accesses among checkpointing epochs, we introduce two optimizations, read-fault reduction and write-fault prediction, for the memory tracking mechanism. These two optimizations improve the performance by 31 percent and 21 percent, respectively, for some applications. Then, we present software superpage which efficiently maps large memory regions between virtual machines (VM). Our optimization improves the performance of HBFT by a factor of 1.4 to 2.2 and achieves about 60 percent of that of the native VM.",2011,0, 5443,High Dynamic Range Image Display With Halo and Clipping Prevention,"The dynamic range of an image is defined as the ratio between the highest and the lowest luminance level. In a high dynamic range (HDR) image, this value exceeds the capabilities of conventional display devices; as a consequence, dedicated visualization techniques are required. In particular, it is possible to process an HDR image in order to reduce its dynamic range without producing a significant change in the visual sensation experienced by the observer. In this paper, we propose a dynamic range reduction algorithm that produces high-quality results with a low computational cost and a limited number of parameters. The algorithm belongs to the category of methods based upon the Retinex theory of vision and was specifically designed in order to prevent the formation of common artifacts, such as halos around the sharp edges and clipping of the highlights, that often affect methods of this kind. 
After a detailed analysis of the state of the art, we shall describe the method and compare the results and performance with those of two techniques recently proposed in the literature and one commercial software.",2011,0, 5444,Protector: A Probabilistic Failure Detector for Cost-Effective Peer-to-Peer Storage,"Maintaining a given level of data redundancy is a fundamental requirement of peer-to-peer (P2P) storage systems-to ensure desired data availability, additional replicas must be created when peers fail. Since the majority of failures in P2P networks are transient (i.e., peers return with data intact), an intelligent system can reduce significant replication costs by not replicating data following transient failures. Reliably distinguishing permanent and transient failures, however, is a challenging task, because peers are unresponsive to probes in both cases. In this paper, we propose Protector, an algorithm that enables efficient replication policies by estimating the number of “remaining replicas” for each object, including those temporarily unavailable due to transient failures. Protector dramatically improves detection accuracy by exploiting two opportunities. First, it leverages failure patterns to predict the likelihood that a peer (and the data it hosts) has permanently failed given its current downtime. Second, it detects replication level across groups of replicas (or fragments), thereby balancing false positives for some peers against false negatives for others. Extensive simulations based on both synthetic and real traces show that Protector closely approximates the performance of a perfect “oracle” failure detector, and significantly outperforms time-out-based detectors using a wide range of parameters. Finally, we design, implement and deploy an efficient P2P storage system called AmazingStore by combining Protector with structured P2P overlays. Our experience proves that Protector enables efficient long-term data maintenance in P2P storage systems.",2011,0, 5445,Islanding Detection for Inverter-Based DG Coupled With Frequency-Dependent Static Loads,"Islanding detection is an essential protection requirement for distribution generation (DG). Antiislanding techniques for inverter-based DG are typically designed and tested on constant RLC loads and, hence, do not take into account frequency-dependent loads. In this paper, a new antiislanding technique, based on simultaneous P-f and Q-V droops, is designed and tested under different islanding conditions considering different load types. The behavior of frequency-dependent static loads during islanding operation is discussed. The developed antiislanding technique is designed to fully address the critical islanding cases with frequency-dependent static loads and RLC-based loads. The performance of the proposed method is tested to accommodate load switching disturbances and grid voltage distortions. Theoretical investigation coupled with intensive simulation results using the Matlab/Simulink software is presented to validate the performance of the proposed antiislanding technique.
Unlike previously proposed methods, the results show that the proposed islanding detection method is robust to frequency-dependent loads and system disturbances.",2011,0, 5446,Preventing Temporal Violations in Scientific Workflows: Where and How,"Due to the dynamic nature of the underlying high-performance infrastructures for scientific workflows such as grid and cloud computing, failures of timely completion of important scientific activities, namely, temporal violations, often take place. Unlike conventional exception handling on functional failures, nonfunctional QoS failures such as temporal violations cannot be passively recovered. They need to be proactively prevented through dynamically monitoring and adjusting the temporal consistency states of scientific workflows at runtime. However, current research on workflow temporal verification mainly focuses on runtime monitoring, while the adjusting strategy for temporal consistency states, namely, temporal adjustment, has so far not been thoroughly investigated. For this issue, two fundamental problems of temporal adjustment, namely, where and how, are systematically analyzed and addressed in this paper. Specifically, a novel minimum probability time redundancy-based necessary and sufficient adjustment point selection strategy is proposed to address the problem of where and an innovative genetic-algorithm-based effective and efficient local rescheduling strategy is proposed to tackle the problem of how. The results of large-scale simulation experiments with generic workflows and specific real-world applications demonstrate that our temporal adjustment strategy can remarkably prevent the violations of both local and global temporal constraints in scientific workflows.",2011,0, 5447,A Refactoring Approach to Parallelism,"In the multicore era, a major programming task will be to make programs more parallel. This is tedious because it requires changing many lines of code; it's also error-prone and nontrivial because programmers need to ensure noninterference of parallel operations. Fortunately, interactive refactoring tools can help reduce the analysis and transformation burden. The author describes how refactoring tools can improve programmer productivity, program performance, and program portability. The article also describes a toolset that supports several refactorings for making programs thread-safe, threading sequential programs for throughput, and improving scalability of parallel programs.",2011,0, 5448,Density-Aware Routing in Highly Dynamic DTNs: The RollerNet Case,"We analyze the dynamics of a mobility data set collected in a pipelined disruption-tolerant network (DTN), a particular class of intermittently-connected wireless networks characterized by a 1-D topology. First, we collected and investigated traces of contact times among thousands of participants of a rollerblading tour in Paris. The data set shows extreme dynamics in the mobility pattern of a large number of nodes. Most strikingly, fluctuations in the motion of the rollerbladers cause a typical accordion phenomenon - the topology expands and shrinks with time, thus influencing connection times and opportunities between participants. Second, we show through an analytical model that the accordion phenomenon, through the variation of the average node degree, has a major impact on the performance of epidemic dissemination. 
Finally, we test epidemic dissemination and other existing forwarding schemes on our traces, and conclude that routing should adapt to the varying, though predictable, nature of the network. To this end, we propose DA-SW (Density-Aware Spray-and-Wait), a measurement-oriented variant of the spray-and-wait algorithm that tunes, in a dynamic fashion, the number of a message copies to be disseminated in the network. The particularity of DA-SW is that it relies on a set of abaci that represents the three phases of the accordion phenomenon: aggregation, expansion, and stabilization. We show that DA-SW leads to performance results that are close to the best case (obtained with an oracle).",2011,0, 5449,Fault Detection for T–S Fuzzy Discrete Systems in Finite-Frequency Domain,"This paper investigates the problem of fault detection for Takagi-Sugeno (T-S) fuzzy discrete systems in finite-frequency domain. By means of the T-S fuzzy model, both a fuzzy fault detection filter system and the dynamics of filtering error generator are constructed. Two finite-frequency performance indices are introduced to measure fault sensitivity and disturbance robustness. Faults are considered in a middle frequency domain, while disturbances are considered in another certain finite-frequency domain interval. By using the generalized Kalman-Yakubovic-Popov Lemma in a local linear system model, the design methods are presented in terms of solutions to a set of linear matrix inequalities, which can be readily solved via standard numerical software. The design problem is formulated as a two-objective optimization algorithm. A numerical example is given to illustrate the effectiveness and potential of the developed techniques.",2011,0, 5450,Athanasia: A User-Transparent and Fault-Tolerant System for Parallel Applications,"This article presents Athanasia, a user-transparent and fault-tolerant system, for parallel applications running on large-scale cluster systems. Cluster systems have been regarded as a de facto standard to achieve multitera-flop computing power. These cluster systems, as we know, have an inherent failure factor that can cause computation failure. The reliability issue in parallel computing systems, therefore, has been studied for a relatively long time in the literature, and we have seen many theoretical promises arise from the extensive research. However, despite the rigorous studies, practical and easily deployable fault-tolerant systems have not been successfully adopted commercially. Athanasia is a user-transparent checkpointing system for a fault-tolerant Message Passing Interface (MPI) implementation that is primarily based on the sync-and-stop protocol. Athanasia supports three critical functionalities that are necessary for fault tolerance: a light-weight failure detection mechanism, dynamic process management that includes process migration, and a consistent checkpoint and recovery mechanism. The main features of Athanasia are that it does not require any modifications to the application code and that it preserves many of the high performance characteristics of high-speed networks. 
Experimental results show that Athanasia can be a good candidate for practically deployable fault-tolerant systems in very-large and high-performance clusters and that its protocol can be applied to a variety of parallel communication libraries easily.",2011,0, 5451,A recursive Otsu thresholding method for scanned document binarization,"The use of digital images of scanned handwritten historical documents has increased in recent years, especially with the online availability of large document collections. However, the sheer number of images in some of these collections makes them cumbersome to manually read and process, making the need for automated processing of increased importance. A key step in the recognition and retrieval of such documents is binarization, the separation of document text from the page's background. Binarization of images of historical documents that have been affected by degradation or are otherwise of poor image quality is difficult and continues to be a topic of research in the field of image processing. This paper presents a novel approach to this problem, including two primary variations. One combines a recursive extension of Otsu thresholding and selective bilateral filtering to allow automatic binarization and segmentation of handwritten text images. The other also builds on the recursive Otsu method and adds improved background normalization and a post-processing step to the algorithm to make it more robust and to perform adequately even for images that present bleed-through artifacts. Our results show that these techniques segment the text in historical documents comparable to and in some cases better than many state-of-the-art approaches based on their performance as evaluated using the dataset from the recent ICDAR 2009 Document Image Binarization Contest.",2011,0, 5452,A Game Platform for Treatment of Amblyopia,"We have developed a prototype device for take-home use that can be used in the treatment of amblyopia. The therapeutic scenario we envision involves patients first visiting a clinic, where their vision parameters are assessed and suitable parameters are determined for therapy. Patients then proceed with the actual therapeutic treatment on their own, using our device, which consists of an Apple iPod Touch running a specially modified game application. Our rationale for choosing to develop the prototype around a game stems from multiple requirements that such an application satisfies. First, system operation must be sufficiently straight-forward that ease-of-use is not an obstacle. Second, the application itself should be compelling and motivate use more so than a traditional therapeutic task if it is to be used regularly outside of the clinic. This is particularly relevant for children, as compliance is a major issue for current treatments of childhood amblyopia. However, despite the traditional opinion that treatment of amblyopia is only effective in children, our initial results add to the growing body of evidence that improvements in visual function can be achieved in adults with amblyopia.",2011,0, 5453,Efficient Fault Detection and Diagnosis in Complex Software Systems with Information-Theoretic Monitoring,"Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In practice, more complex nonlinear relationships exist between metrics. 
Moreover, most intermetric correlations form clusters rather than simple pairwise correlations. These clusters provide additional information and offer the possibility for optimization. In this paper, we address these issues by using Normalized Mutual Information (NMI) as a similarity measure to identify clusters of correlated metrics, without assuming any specific form for the metric relationships. We show how to apply the Wilcoxon Rank-Sum test on the entropy measures to detect errors in the system. We also present three diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, SigScore, which incorporates knowledge of component dependencies, and BayesianScore, which uses Bayesian inference to assign a fault probability to each component. We evaluate our approach in the context of a complex enterprise application, and show that 1) stable, nonlinear correlations exist and can be captured with our approach; 2) we can detect a large fraction of faults with a low false positive rate (we detect up to 18 of the 22 faults we injected); and 3) we improve the diagnosis with our new diagnosis algorithms.",2011,0, 5454,Hardware/Software Codesign Architecture for Online Testing in Chip Multiprocessors,"As the semiconductor industry continues its relentless push for nano-CMOS technologies, long-term device reliability and occurrence of hard errors have emerged as a major concern. Long-term device reliability includes parametric degradation that results in loss of performance as well as hard failures that result in loss of functionality. It has been reported in the ITRS roadmap that effectiveness of traditional burn-in test in product life acceleration is eroding. Thus, to assure sufficient product reliability, fault detection and system reconfiguration must be performed in the field at runtime. Although regular memory structures are protected against hard errors using error-correcting codes, many structures within cores are left unprotected. Several proposed online testing techniques either rely on concurrent testing or periodically check for correctness. These techniques are attractive, but limited due to significant design effort and hardware cost. Furthermore, lack of observability and controllability of microarchitectural states result in long latency, long test sequences, and large storage of golden patterns. In this paper, we propose a low-cost scheme for detecting and debugging hard errors with a fine granularity within cores and keeping the faulty cores functional, with potentially reduced capability and performance. The solution includes both hardware and runtime software based on codesigned virtual machine concept. It has the ability to detect, debug, and isolate hard errors in small noncache array structures, execution units, and combinational logic within cores. Hardware signature registers are used to capture the footprint of execution at the output of functional modules within the cores. A runtime layer of software (microvisor) initiates functional tests concurrently on multiple cores to capture the signature footprints across cores to detect, debug, and isolate hard errors. Results show that using targeted set of functional test sequences, faults can be debugged to a fine-granular level within cores.
The hardware cost of the scheme is less than three percent, while the software tasks are performed at a high-level, resulting in a relatively low design effort and cost.",2011,0, 5455,Experimental Validation of Channel State Prediction Considering Delays in Practical Cognitive Radio,"As part of the effort toward building a cognitive radio (CR) network testbed, we have demonstrated real-time spectrum sensing. Spectrum sensing is the cornerstone of CR. However, current hardware platforms for CR introduce time delays that undermine the accuracy of spectrum sensing. The time delay named response delay incurred by hardware and software can be measured at two antennas colocated at a secondary user (SU), the receiving antenna, and the transmitting antenna. In this paper, minimum response delays are experimentally quantified and reported based on two hardware platforms, i.e., the universal software radio peripheral 2 (USRP2) and the small-form-factor software-defined-radio development platform (SFF SDR DP). The response delay has a negative impact on the accuracy of spectrum sensing. A modified hidden Markov model (HMM)-based single-secondary-user (single-SU) prediction is proposed and examined. When multiple SUs exist and their channel qualities are diverse, cooperative prediction can benefit the SUs as a whole. A prediction scheme with two stages is proposed, where the first stage includes individual predictions conducted by all the involved SUs, and the second stage further performs cooperative prediction using individual single-SU prediction results obtained at the first stage. In addition, a soft-combining decision rule for cooperative prediction is proposed. To have convincing performance evaluation results, real-world Wi-Fi signals are used to test the proposed approaches, where the Wi-Fi signals are simultaneously recorded at four different locations. Experimental results show that the proposed single-SU prediction outperforms the 1-nearest neighbor (1-NN) prediction, which uses current detected state as an estimate of future states. Moreover, even with just a few SUs, cooperative prediction leads to overall performance improvement.",2011,0, 5456,Measuring the Effectiveness of the Defect-Fixing Process in Open Source Software Projects,"The defect-fixing process is a key process in which an open source software (OSS) project team responds to customer needs in terms of detecting and resolving software defects, hence the dimension of defect-fixing effectiveness corresponds nicely to adopters' concerns regarding OSS products. Although researchers have been studying the defect fixing process in OSS projects for almost a decade, the literature still lacks rigorous ways to measure the effectiveness of this process. Thus, this paper aims to create a valid and reliable instrument to measure the defect-fixing effectiveness construct in an open source environment through the scale development methodology proposed by Churchill [4]. This paper examines the validity and reliability of an initial list of indicators through two rounds of data collection and analysis. Finally four indicators are suggested to measure defect-fixing effectiveness. 
The implication for practitioners is explained through a hypothetical example followed by implications for the research community.",2011,0, 5457,Quality Market: Design and Field Study of Prediction Market for Software Quality Control,"Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. Generating early forecasts of potential quality problems can have significant benefits to quality improvement. In our research, we utilized a novel approach, called prediction markets, for generating early forecasts of confidence in software quality for an ongoing project in a firm. Analogous to financial market, in a quality market, a security was defined that represented the quality requirement to be predicted. Participants traded on the security to provide their predictions. The market equilibrium price represented the probability of occurrence of the quality being measured. The results suggest that forecasts generated using the prediction markets are closer to the actual project outcomes than polls. We suggest that a suitably designed prediction market may have a useful role in software development domain.",2011,0, 5458,Facilitating Performance Predictions Using Software Components,"Component-based software engineering (CBSE) poses challenges for predicting and evaluating software performance but also offers several advantages. Software performance engineering can benefit from CBSE ideas and concepts. The MediaStore, a fictional system, demonstrates how to achieve compositional reasoning about software performance.",2011,0, 5459,Detecting SEEs in Microprocessors Through a Non-Intrusive Hybrid Technique,"This paper presents a hybrid technique based on software signatures and a hardware module with watchdog and decoder characteristics to detect SEU and SET faults in microprocessors. These types of faults have a major influence in the microprocessor's control-flow, affecting the basic blocks and the transitions between them. In order to protect the transitions between basic blocks a light hardware module is implemented in order to spoof the data exchanged between the microprocessor and its memory. Since the hardware alone is not capable of detecting errors inside the basic blocks, it is enhanced to support the new technique and then provide full control-flow protection. A fault injection campaign is performed using a MIPS microprocessor. Simulation results show high detection rates with a small amount of performance degradation and area overhead.",2011,0, 5460,Defects Detection of Cold-roll Steel Surface Based on MATLAB,"The measure of detecting the surface edge information of cold-roll steel sheets has been investigated, relying on the digital image processing toolbox of the mathematical software MATLAB, and the edge detecting experiment of surface grayscale image has been conducted on the computer. The method of detecting defects such as black flecks and scratching has been realized in the research; the different effect of operators with the disturbance of noise has been compared; and the performance of the LOG's edge detection ability varies due to the change of the parameter Sigma.
Conclusions have been made that the LOG operator maintains satisfying performance under the disturbance of noise: the smaller Sigma is, the weaker the smoothing ability is and the more details are retained, whereas a larger Sigma smooths better at the cost of more details. Overall, defect detection of cold-roll steel surfaces based on MATLAB achieves quite satisfying performance.",2011,0, 5461,Fault Detection Devices of Rotating Machinery Based on SOPC,"For the common failure of motor rotation, a reliable acquisition and detection device of motor rotation signal is designed, and the general design scheme of the test system and the implementation of hardware and software are given. The system consists of high-speed frequency counter chip FPGA module and the SOPC system structure, using NIOS II as the system control unit to control the work of the counter, and completing high-precision frequency meter design at the core of FPGA with appropriate software and hardware resources. The counting result is sent by serial port to the host computer for further signal analysis such as FFT and harmonic wavelet packet, to get more detailed distribution of signal spectrum as the basis to judge the fault signal. The frequency meter system developed with Nios technology can simplify the external measurement hardware circuits, stabilize the performance and flexibly achieve custom application. Experimental results show that the test system can well be used for fault detection of mechanical systems.",2011,0, 5462,Fault Diagnosis of Rolling Element Bearing Based on Vibration Frequency Analysis,"Element bearing is one of the widely used universal parts in machine. Its running condition and failure rate influence the performance, service life and efficiency of the equipment directly. So it is significant to research on bearing condition monitoring and fault diagnosis techniques. The paper describes the use of vibration measurements by a periodic monitoring system for monitoring the condition of rolling element bearing of the centrifugal machine. Vibration data is collected using accelerometers, which are placed at the 12 o'clock position at both the drive end and the driven end of the centrifugal machine. Vibration signals are collected using a 16 channel DAT recorder, and are post processed by vibration signals analysis software in personal computer. Simple diagnosis by vibration is based on frequency analysis. Each element of rolling bearing is of its fault characteristic frequency. This paper introduces frequency analysis method of using low and high frequency bands in conjunction with time domain waveform. Fault position of drive end bearing in the centrifugal machine is detected successfully by using this method.",2011,0, 5463,Online PD Monitoring and Analysis for Step-up Transformers of Hydropower Plant,"As is known, to evaluate and diagnose the insulation faults of large-size power transformers, partial discharges (PDs) can be detected via ultra high frequency (UHF) technique. In this paper, an UHF PD online monitoring system was developed for large-size step-up transformers. The principle of the UHF PD monitoring method was introduced, and the hardware structure and software key techniques in the system were described. In order to achieve the integrated PD monitoring and ensure the accuracy of PD analysis, the operating environments and operating conditions of the transformers have been acquired synchronously.
Meanwhile, the correlative analysis of PDs with respect to operating conditions can be performed and the characteristics of PD activities with respect to relevant states can be studied. The association analysis of PDs is performed as follows: (a) periodical PDs caused by power frequency voltage under stable operating conditions, (b) stochastic PDs caused by transient over-voltages under unstable operating conditions. At present, the system has been applied into several large-size step-up transforms and achieved large mount of on-site monitoring results, and the reliability of PD analyzing results is verified by the analysis of the onsite data.",2011,0, 5464,Multi-view coding and view synthesis for 3DTV,"In this paper we analyze the current developments in 3DTV technologies concerning multi-view video coding and free-viewpoint rendering. Multi-View Coding (MVC) can be performed in several ways that influence the performance and the complexity of the system. We underline the connection between coding and free-viewpoint rendering techniques. Since the latest research shows that a better multi-view coding can be reached by using interview prediction based on view synthesis, we present our view synthesis algorithm and compare it with the MPEG standardization software. The first quality measurements, indicating the difference between our view synthesis and the MPEG algorithm, are provided and discussed.",2011,0, 5465,Selecting Oligonucleotide Probes for Whole-Genome Tiling Arrays with a Cross-Hybridization Potential,"For designing oligonucleotide tiling arrays popular, current methods still rely on simple criteria like Hamming distance or longest common factors, neglecting base stacking effects which strongly contribute to binding energies. Consequently, probes are often prone to cross-hybridization which reduces the signal-to-noise ratio and complicates downstream analysis. We propose the first computationally efficient method using hybridization energy to identify specific oligonucleotide probes. Our Cross-Hybridization Potential (CHP) is computed with a Nearest Neighbor Alignment, which efficiently estimates a lower bound for the Gibbs free energy of the duplex formed by two DNA sequences of bounded length. It is derived from our simplified reformulation of t-gap insertion-deletion-like metrics. The computations are accelerated by a filter using weighted ungapped q-grams to arrive at seeds. The computation of the CHP is implemented in our software OSProbes, available under the GPL, which computes sets of viable probe candidates. The user can choose a trade-off between running time and quality of probes selected. We obtain very favorable results in comparison with prior approaches with respect to specificity and sensitivity for cross-hybridization and genome coverage with high-specificity probes. The combination of OSProbes and our Tileomatic method, which computes optimal tiling paths from candidate sets, yields globally optimal tiling arrays, balancing probe distance, hybridization conditions, and uniqueness of hybridization.",2011,0, 5466,Monitoring metal fill profile in lost foam casting process using capacitive sensors and metal fill time estimation,"Lost foam casting (LFC) is one of the most important casting methods used in the production of complex metal castings. Several casting defects may result due to an improper metal fill process during LFC; hence, monitoring the metal fill profile is important so that failures in the process can be quickly discovered. 
This paper presents a monitoring process based on capacitive sensors developed by the research team. In this application, an array of metallic electrodes is mounted around a target area and a set of capacitance measuring circuits are used to measure the mutual capacitances between these electrodes. Measuring the change in capacitance as grounded molten metal displaces the foam pattern provide a simple nondestructive method of acquiring metal fill information. An iterative algorithm for the estimation of metal fill time is also presented. This estimation utilizes better filtration and outlier detection techniques to provide a good prediction of the filling time. The proposed system also makes use of a wireless sensor network for transmitting data between the sensor boards and a computer running our metal fill monitoring application software; thus making this system suitable for the harsh foundry environment.",2011,0, 5467,Does Socio-Technical Congruence Have an Effect on Software Build Success? A Study of Coordination in a Software Project,"Socio-technical congruence is an approach that measures coordination by examining the alignment between the technical dependencies and the social coordination in the project. We conduct a case study of coordination in the IBM Rational Team Concert project, which consists of 151 developers over seven geographically distributed sites, and expect that high congruence leads to a high probability of successful builds. We examine this relationship by applying two congruence measurements: an unweighted congruence measure from previous literature, and a weighted measure that overcomes limitations of the existing measure. We discover that there is a relationship between socio-technical congruence and build success probability, but only for certain build types, and observe that in some situations, higher congruence actually leads to lower build success rates. We also observe that a large proportion of zero-congruence builds are successful, and that socio-technical gaps in successful builds are larger than gaps in failed builds. Analysis of the social and technical aspects in IBM Rational Team Concert allows us to discuss the effects of congruence on build success. Our findings provide implications with respect to the limits of applicability of socio-technical congruence and suggest further improvements of socio-technical congruence to study coordination.",2011,0, 5468,Using Multivariate Split Analysis for an Improved Maintenance of Automotive Diagnosis Functions,"The amount of automotive software functions is continuously growing. With their interactions and dependencies increasing, the diagnosis' task of differencing between symptoms indicating a fault, the fault cause itself and uncorrelated data gets enormously difficult and complex. For instance, up to 40% of automotive software functions are contributable to diagnostic functions, resulting in approximately three million lines of diagnostic code. The diagnosis' complexity is additionally increased by legal requirements forcing automotive manufacturers maintaining the diagnosis of their cars for 15 years after the end of the car's series production. Clearly, maintaining these complex functions over such an extend time span is a difficult and tedious task. Since data from diagnosis incidents has been transferred back to the OEMs for some years, analysing this data with statistic techniques promises a huge facilitation of the diagnosis' maintenance. 
In this paper we use multivariate split analysis to filter diagnosis data for symptoms having real impact on faults and their repair measures, thus detecting diagnosis functions which have to be updated as they contain irrelevant or erroneous observations and/or repair measurements. A key factor for performing an unbiased split analysis is to determine an ideally representative control data set for a given test data set showing some property whose influence is to be studied. In this paper, we present a performant algorithm for creating such a representative control data set out of a very large initial data collection. This approach facilitates the analysis and maintenance of diagnosis functions. It has been successfully evaluated on case studies and is part of BMW's continuous improvement process for automotive diagnosis.",2011,0, 5469,Design Defect Detection Rules Generation: A Music Metaphor,"We propose an automated approach for design defect detection. It exploits an algorithm that automatically finds rules for the detection of possible design defects, thus relieving the designer from doing so manually. Our algorithm derives rules in the form of metric/threshold combinations, from known instances of design defects (defect examples). Due to the large number of possible combinations, we use a music-inspired heuristic that finds the best harmony when combining metrics. We evaluated our approach on finding potential defects in three open-source systems (Xerces-J, Quick UML and Gantt). For all of them, we found more than 80% of known defects, a better result when compared to a state-of-the-art approach, where the detection rules are manually specified.",2011,0, 5470,On the Utility of a Defect Prediction Model during HW/SW Integration Testing: A Retrospective Case Study,"Testing is an important and cost-intensive part of the software development life cycle. Defect prediction models try to identify error-prone components, so that these can be tested earlier or more in-depth, and thus improve the cost-effectiveness during testing. Such models have been researched extensively, but whether and when they are applicable in practice is still debated. The applicability depends on many factors, and we argue that it cannot be analyzed without a specific scenario in mind. In this paper, we therefore present an analysis of the utility for one case study, based on data collected during the hardware/software integration test of a system from the avionic domain. An analysis of all defects found during this phase reveals that more than half of them are not identifiable by a code-based defect prediction model. We then investigate the predictive performance of different prediction models for the remaining defects. The small ratio of defective instances results in relatively poor performance. Our analysis of the cost-effectiveness then shows that the prediction model is not able to outperform simple models, which order files either randomly or by lines of code. Hence, in our setup, the application of defect prediction models does not offer any advantage in practice.",2011,0, 5471,Prioritizing Requirements-Based Regression Test Cases: A Goal-Driven Practice,"Any changes for maintenance or evolution purposes may break existing working features, or may violate the requirements established in the previous software releases. Regression testing is essential to avoid these problems, but it may be ended up with executing many time-consuming test cases. 
This paper tries to address prioritizing requirements-based regression test cases. To this end, system-level testing is focused on two practical issues in industrial environments: i) addressing multiple goals regarding quality, cost and effort in a project, and ii) using non-code metrics due to the lack of detailed code metrics in some situations. This paper reports a goal-driven practice at Research In Motion (RIM) towards prioritizing requirements-based test cases regarding these issues. Goal-Question-Metric (GQM) is adopted in identifying metrics for prioritization. Two sample goals are discussed to demonstrate the approach: detecting bugs earlier and maintaining testing effort. We use two releases of a prototype Web-based email client to conduct a set of experiments based on the two mentioned goals. Finally, we discuss lessons learned from applying the goal-driven approach and experiments, and we propose few directions for future research.",2011,0, 5472,Using many-core processors to improve the performance of space computing platforms,"Taking into consideration space applications' demands for higher bandwidth and processing power, we propose to efficiently apply upcoming many-core processors and other Commercial-Off-The-Shelf (COTS) products to improve the on-board processing power. A combination of traditional hardware and software-implemented fault-tolerant techniques addresses the reliability of the system. We first describe common requirements and design challenges presented by future space applications like the High Resolution Wide Swath Synthetic Aperture Radar (HRWS SAR). After proposing the High Performance Computing (HPC) architecture, we compare between most suitable hardware technologies and give some rough performance estimations based on their features. For benchmarking purposes we have manually converted the Scalable Synthetic Compact Application (SSCA#3) from Matlab to C and parallelized it using OpenMP. It turns out that this application scales very well on many-core processors especially on a distributed Shared Memory Architecture (SMA).",2011,0, 5473,SCIPS: An emulation methodology for fault injection in processor caches,"Due to the high level of radiation endured by space systems, fault-tolerant verification is a critical design step for these systems. Space-system designers use fault-injection tools to introduce system faults and observe the system's response to these faults. Since a processor's cache accounts for a large percentage of total chip area and is thus more likely to be affected by radiation, the cache represents a key system component for fault-tolerant verification. Unfortunately, processor architectures limit cache accessibility, making direct fault injection into cache blocks impossible. Therefore, cache faults can be emulated by injecting faults into data accessed by load instructions. In this paper, we introduce SPFI-TILE, a software-based fault-injection tool for many-core devices. SPFI-TILE emulates cache fault injections by randomly injecting faults into load instructions. In order to provide unbiased fault injections, we present the cache fault-injection methodology SCIPS (Smooth Cache Injection Per Skipping).
Results from MATLAB simulation and integration with SPFI-TILE reveal that SCIPS successfully distributes fault-injection probabilities across load instructions, providing an unbiased evaluation and thus more accurate verification of fault tolerance in cache memories.",2011,0, 5474,V-22 ground station enhancements and vibration diagnostics field experience,"The V-22 tiltrotor aircraft's primary tool for aircraft maintenance is the Comprehensive Automated Maintenance Environment - Optimized (CAMEO). CAMEO is a ground based system for aircraft download processing. Under the CAMEO umbrella are several tools with distinct functionality, linked together and sharing common data to support aircraft maintenance operations. This paper provides a general overview of the CAMEO system along with selected components that support mechanical systems (rotors, engines, interconnect shafting system and nacelle blower) monitoring with an emphasis on vibration diagnostics. Also covered are ongoing activities associated with improved performance and integration of maintenance tools associated with vibration monitoring. Vibration diagnostics are automated, directly linking with the associated V-22 Integrated Electronic Technical Manual (IETM) procedures. Fleet Support Team (FST) and industry original equipment manufacturer (OEM) engineers use additional tools like the V-22 Data Visualization Toolset (VDVT) to review and confirm existing diagnostic indications, or determine new ones as necessary. Recent enhancements and anticipated further improvements of the on-board Vibration, Structural Life and Engine Diagnostics (VSLED) unit are addressed. Some examples of fault detection of degraded mechanical components using vibration data are presented, with related lessons learned.",2011,0, 5475,Hardware/software-based diagnosis of load-store queues using expandable activity logs,"The increasing device count and design complexity are posing significant challenges to post-silicon validation. Bug diagnosis is the most difficult step during post-silicon validation. Limited reproducibility and low testing speeds are common limitations in current testing techniques. Moreover, low observability defies full-speed testing approaches. Modern solutions like on-chip trace buffers alleviate these issues, but are unable to store long activity traces. As a consequence, the cost of post-Si validation now represents a large fraction of the total design cost. This work describes a hybrid post-Si approach to validate a modern load-store queue. We use an effective error detection mechanism and an expandable logging mechanism to observe the microarchitectural activity for long periods of time, at processor full-speed. Validation is performed by analyzing the log activity by means of a diagnosis algorithm. Correct memory ordering is checked to root the cause of errors.",2011,0, 5476,Architectural framework for supporting operating system survivability,"The ever increasing size and complexity of Operating System (OS) kernel code bring an inevitable increase in the number of security vulnerabilities that can be exploited by attackers. A successful security attack on the kernel has a profound impact that may affect all processes running on it. In this paper we propose an architectural framework that provides survivability to the OS kernel, i.e. able to keep normal system operation despite security faults. 
It consists of three components that work together: (1) security attack detection, (2) security fault isolation, and (3) a recovery mechanism that resumes normal system operation. Through simple but carefully-designed architecture support, we provide OS kernel survivability with low performance overheads (< 5% for kernel intensive benchmarks). When tested with real world security attacks, our survivability mechanism automatically prevents the security faults from corrupting the kernel state or affecting other processes, recovers the kernel state and resumes execution.",2011,0, 5477,Implementing a safe embedded computing system in SRAM-based FPGAs using IP cores: A case study based on the Altera NIOS-II soft processor,"Reconfigurable Field Programmable Gate Arrays (FPGAs) are growing the attention of developers of mission- and safety-critical applications (e.g., aerospace ones), as they allow unprecedented levels of performance, which are making these devices particularly attractive as ASICs replacement, and as they offer the unique feature of in-the-field reconfiguration. However, the sensitivity of reconfigurable FPGAs to ionizing radiation mandates the adoption of fault tolerant mitigation techniques that may impact heavily the FPGA resource usage. In this paper we consider time redundancy, that allows avoiding the high overhead that more traditional approaches like N-modular redundancy introduce, at an affordable cost in terms of application execution-time overhead. A single processor executes two instances of the same software sequentially; the two instances are segregated in their own memory space through a soft IP core that monitors the processor/memory interface for any violations. Moreover, the IP core checks for any processor functional interruption by means of a watchdog timer. Fault injection results are reported showing the characteristics of the proposed approach.",2011,0, 5478,Design of Electric Power Parameter Monitoring System Based on DSP and CPLD,"In this article, we introduce an online power quality monitor which is based on DSP (Digital Signal Processor). The monitor makes full use of DSP chip's strong operation capability. It can monitor power quality online, display real-time measure data and save super scalar data, which can provide accurate data to power quality evaluation and improvement. On the hardware side, the system takes DSP and CPLD as core of power parameter detection circuit, and complex programmable logic device (CPLD) and phase-locked loop (PLL) as synchronous sampling circuit; as for the software, the relevant software algorithms were designed. At the same time, the main program of electric power parameter detection was introduced. Experiments show the proposed system is of fast response, high accuracy, and real-time processing.",2011,0, 5479,Predicting the software performance during feasibility study,"Performance is an important non-functional attribute to be considered for producing quality software. Software performance engineering (SPE) is a methodology having significant role in software engineering to assess the performance of software systems early in the lifecycle. Gathering performance data is an essential aspect of SPE approach. The authors have proposed a methodology to gather data during feasibility study by exploiting the use case point approach, gearing factor and COCOMO model. The proposed methodology is used to estimate the performance data required for performance assessment in the integrated performance prediction process (IP3) model.
The gathered data is used as the input for solving the two models, (i) use case performance model and (ii) system model. The methodology is illustrated with a case study of airline reservation application. A regression analysis is carried out to validate the response time obtained in the use case performance model. The analysis shows the proposed estimation can be used along with performance walkthrough in data gathering. The performance metrics are obtained by solving the system model, and the behaviour of the hardware resources is observed. Bottleneck resources are identified and the performance parameters are optimised using sensitivity analysis.",2011,0, 5480,Investigation of automatic prediction of software quality,"The subjective nature of software code quality makes it a complex topic. Most software managers and companies rely on the subjective evaluations of experts to determine software code quality. Software companies can save time and money by utilizing a model that could accurately predict different code quality factors during and after the production of software. Previous research builds a model predicting the difference between bad and excellent software. This paper expands this to a larger range of bad, poor, fair, good, and excellent, and builds a model predicting these classes. This research investigates decision trees and ensemble learning from the machine learning tool Weka as primary classifier models predicting reusability, flexibility, understandability, functionality, extendibility, effectiveness, and total quality of software code.",2011,0, 5481,Ultra-High Resolution LYSO PQS-SSS Heptahedron Blocks for Low-Cost MuPET,"We developed and built a solid detector ring for a new murine positron emission tomography (MuPET) system. We use cerium-doped lutetium yttrium orthosilicate (LYSO) crystals and regular round 19 mm photomultipliers (PMTs) arranged in a quadrant-sharing (PQS) configuration. The detector system comprised 180 PQS-SSS heptahedron-shaped blocks distributed in 6 subrings. Each block comprised a 13 × 13 crystal array with nominal dimensions of 19 × 19 × 10 mm3. To form a zero-gap solid ring, the rectangular blocks were ground into heptahedron-shaped blocks with a taper angle of 6° on the edge crystals and optical surfaces. The two edge crystals were 1.76 mm wide, and the inner crystals were 1.32 mm wide. We explored the possibility of increasing the detector's performance by implementing new design, materials, and production techniques; testing the detector's performance; and measuring the detector's timing resolution. List-mode data were acquired using a Ga-68 source, in-house high-yield pileup-event recovery electronics, and data-acquisition software. Four randomly selected blocks were used to evaluate the quality of the detector and our mass-production method. The four blocks' performances were quite similar. A typical block had a packing fraction of 95%, a peak-to-valley ratio of 2.4, a light collection efficiency of 78%, and an energy resolution of 14% at 511 keV, and all 169 of the block's crystal detectors were clearly decoded. Using a single crystal in coincidence with a block, the average coincidence timing resolution was found to be 430 ps (full width at half maximum). A block-to-block coincidence timing resolution of 530 ps is expected. 
Our PQS-SSS heptahedron block design indicates that it is feasible to construct a high resolution (~1.2 mm) MuPET detector ring using round 19 mm PMTs instead of the more expensive position-sensitive PMTs or solid-state detectors.",2011,0, 5482,Intelligent Trend Indices in Detecting Changes of Operating Conditions,"Temporal reasoning is a very valuable tool to diagnose and control slow processes. Identified trends are also used in data compression and fault diagnosis. Although humans are very good at visually detecting such patterns, for control system software it is a difficult problem including trend extraction and similarity analysis. In this paper, an intelligent trend index is developed from scaled measurements. The scaling is based on monotonously increasing, nonlinear functions, which are generated with generalised norms and moments. The monotonous increase is ensured with constraint handling. Triangular episodes are classified with the trend index and the derivative of it. Severity of the situations is evaluated by a deviation index which takes into account the scaled values of the measurements.",2011,0, 5483,Software reliability model with bathtub-shaped fault detection rate,"This paper proposes a software reliability model with a bathtub-shaped fault detection rate. We discuss how the inherent characteristics of the software testing process support the three phases of the bathtub; the first phase with a decreasing fault detection rate arises from the removal of simple, yet frequent faults like syntax errors and typos; the second phase possesses a constant fault detection rate marking the beginning of functional requirements testing; the third and final code comprehension stage exhibits an increasing fault detection rate because testers are now familiar with the system and can focus their attention on the outstanding and as yet untested portions of code. We also discuss how eliminating one of the testing phases gives rise to the burn-in model, which is a special case of the bathtub model. We compare the performance of the bathtub and burn-in models with the three classical software reliability models using the Predictive Mean Square Error and Akaike Information Criterion, by applying these models to a data set in the literature. Our results suggest that the bathtub model best describes the observed data and also most precisely predicts the future data points compared to the other popular software reliability models. The bathtub model can thus be used to provide accurate predictions during the testing process and guide optimal release time decisions.",2011,0, 5484,Nationwide real-time monitoring system for electrical quantities and power quality of the electricity transmission system,"In this study, a nationwide real-time monitoring system has been developed to monitor all electrical quantities and power quality (PQ) parameters of the electricity transmission network including its interfaces with the generation and distribution systems. The implemented system is constituted of extended PQ analysers (PQ+ analysers), a national monitoring center for power quality (NMCPQ), and the PQ retrieval and analysis software suite. The PQ+ analyser is specifically designed for multipurpose usage such as event recording and raw data collection, in addition to the usual PQ analysis functions. The system has been designed to support up to 1000 PQ+ analysers, to make the electricity transmission system and its interfaces completely observable in the future. 
The remote monitoring, analysis and reporting interface, the map-based interface, and the real-time monitoring interface, as well as the Web applications run on the NMCPQ servers, can be accessed by the utility, large industrial plants, distribution companies and researchers, to the extent of the authorisation given by the transmission system operator. By activating sufficient number of analogue and digital outputs of PQ+ analysers, and equipping the NMCPQ with necessary hardware, analysis and decision making software, the proposed system can serve as an advanced supervisory control and data acquisition system in the future. The proposed monitoring concept can be extended to serve the needs of the modern electricity markets, by making various regulations, codes and instructions implementable.",2011,0, 5485,Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality,"Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them un-natural; and that by characterizing this un-naturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an (NR)/blind algorithm-the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index-that assesses the quality of a distorted image without need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature. DIIVINE is based on natural scene statistics which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted and their relevance to perception and thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online: http://live.ece.utexas.edu/research/quality/DIIVINE_release.zip for public use and evaluation.",2011,0, 5486,Assessment of WERA long-range HF-radar performance from the user's perspective,"Since April 2006, long range (8.3MHz) WERA HF radars have been operated on the Southeastern United States coastline, as part of the U.S. Integrated Ocean Observing System (IOOS) and in particular the national HF Radar network. These radars measure currents operationally, and waves and winds experimentally across the wide continental shelf of Georgia (GA) and South Carolina (SC). Half-hourly data at 3km horizontal resolution are acquired to a range of approximately 200 km, providing measurements across the wide continental shelf and into the adjacent Gulf Stream at the shelf edge. Radar performance in range and quality is discussed. Ease in siting of these space and cable intensive systems along populated coastlines, and the feasibility of their operation by non-radar specialists is also briefly discussed. 
Long term in situ measurements of currents, waves and winds concurrent with the long-term radar measurements were available. These measurements were also acquired as part of the U.S. national coastal ocean observatory network, under the evolving auspices of SABSOON, SEACOOS, and SECOORA. Wind, wave and ADCP measurements from several instrumented Navy towers at the 27, 33, and 44 m isobaths are available, and winds and wave information also exist at the 18m isobath from an NDBC buoy. Comparisons between radar-derived estimates and the in situ measurements are examined for a variety of parameters, including (near) surface currents, significant wave heights, directional wave spectra, and wind direction and speed. Radar estimates of surface velocity compare quite well with in situ ADCP near surface current data, with complex vector correlation magnitude of 0.95, and phase angle of -0.2 degrees. The negative sign is consistent with an expected counterclockwise rotation with depth between the radar surface current estimates and the subsurface upper water column ADCP measurements. Tidal amplitudes, which are large and predominantly semidiurnal on the GA/SC coast are extremely well reproduced by the radar estimates. Radar significant wave height estimates from manufacturer-supplied software are much noisier than measurements from in situ pressure sensors or wave buoys, but capture several-hour low-passed variability fairly well. The spectra from which the radar significant wave heights are estimated have been examined, also exhibiting higher variability than indicated by the in situ estimates. Wind direction estimates from radar using manufacturer-supplied software were evaluated for a range of wind speeds and direction (fetch limited or not) and by differentiating between conditions where the predominant water waves satisfied the long-wave assumption or not. For non-fetch limited wind speeds in excess of the Bragg wave propagation speed, and wave fields for which the long-wave assumption is relevant, correlations between radar and in situ anemometer wind directions were good, at approximately 0.7. HF-radar directional wave spectra estimates are also derived with aftermarket software from SeaView Sensing Ltd, and are compared to in situ estimates from a five beam ADCP. These novel in situ measurements use the ADCP slant beam horizontal velocities combined with vertical velocities from the vertically oriented 5th beam to measure wave orbital velocities and derive directional wave spectra. Wave buoy accelerometer estimates of directional wave spectra are also available for comparison. Radar significant wave heights and wind direction from SeaView software are also being examined.",2011,0, 5487,Transmission line fault detection and classification,"Transmission line protection is an important issue in power system engineering because 85-87% of power system faults are occurring in transmission lines. This paper presents a technique to detect and classify the different shunt faults on a transmission lines for quick and reliable operation of protection schemes. Discrimination among different types of faults on the transmission lines is achieved by application of evolutionary programming tools. PSCAD/EMTDC software is used to simulate different operating and fault conditions on high voltage transmission line, namely single phase to ground fault, line to line fault, double line to ground and three phase short circuit.
The discrete wavelet transform (DWT) is applied for decomposition of fault transients, because of its ability to extract information from the transient signal, simultaneously both in time and frequency domain. The data sets which are obtained from the DWT are used for training and testing the SVM architecture. After extracting useful features from the measured signals, a decision of fault or no fault on any phase or multiple phases of a transmission line is carried out using three SVM classifiers. The ground detection task is carried out by a proposed ground index. Gaussian radial basis kernel function (RBF) has been used, and performances of classifiers have been evaluated based on fault classification accuracy. In order to determine the optimal parametric settings of an SVM classifier (such as the type of kernel function, its associated parameter, and the regularization parameter c), fivefold cross-validation has been applied to the training set. It is observed that an SVM with an RBF kernel provides better fault classification accuracy than that of an SVM with polynomial kernel. It has been found that the proposed scheme is very fast and accurate and it proved to be a robust classifier for digital distance protection.",2011,0, 5488,Experimental study report on Opto-electronic sensor based gaze tracker system,The paper presents a smart assistive technology to improve the life quality of the people with severe mobility disorders by giving them independence of motion despite the difficulties in moving their limbs. The objective of this paper is to develop an Opto-electronic sensor based gaze tracker apparatus to detect the eye gaze direction based on movement of iris. Eventually this detection helps the user to control and steer the wheel chair by themselves through their eye gaze. The principle of operation of this system is based on the reflection of incidental light in the iris and sclera regions of eye. The user's gaze tracker is in the form of eye-goggle embedded with infrared source and detectors. The performance of various optical sources and detectors are tested and the results are graphically represented. A template database and also an algorithm is generated to provide real time computational analysis.,2011,0, 5489,Network architecture for smart grids,"Smart grid is designed to make the existing power grid system function spontaneously and independently without human intervention. Sensors are deployed to detect faults in the flow of power. In the proposed work, smart monitoring and controls are done by intelligent electronic devices. IED's (Intelligent electronic devices) monitors and records the value of power generated, and its corresponding voltage and frequency, which in turn is fed in to the demand-supply chain. Network architecture is designed to determine the flow of power from the generation end to the consumers. Demand-supply curve is embedded in architecture to map the power generated with the supply. If the demand at a particular instant of time is higher than the supply, then tariff is fixed and warning is given to the users regarding the rate of payment, to meet the higher tariff. Trust based authencity is provided for security.",2011,0, 5490,A single-specification principle for functional-to-timing simulator interface design,"Microarchitectural simulators are often partitioned into separate, but interacting, functional and timing simulators. These simulators interact through some interface whose level of detail depends upon the needs of the timing simulator. 
The level of detail supported by the interface profoundly affects the speed of the functional simulator, therefore, it is desirable to provide only the detail that is actually required. However, as the microarchitectural design space is explored, these needs may change, requiring corresponding time-consuming and error-prone changes to the interface. Thus simulator developers are tempted to include extra detail in the interface ""just in case"" it is needed later, trading off simulator speed for development time. We show that this tradeoff is unnecessary if a single-specification design principle is practiced: write the simulator once with an extremely detailed interface and then derive less-detailed interfaces from this detailed simulator. We further show that the use of an Architectural Description Language (ADL) with constructs for interface specification makes it possible to synthesize simulators with less-detailed interfaces from a highly-detailed specification with only a few lines of code and minimal effort. The speed of the resulting low-detail simulators is up to 14.4 times the speed of high-detail simulators.",2011,0, 5491,Architectures for online error detection and recovery in multicore processors,"The huge investment in the design and production of multicore processors may be put at risk because the emerging highly miniaturized but unreliable fabrication technologies will impose significant barriers to the life-long reliable operation of future chips. Extremely complex, massively parallel, multi-core processor chips fabricated in these technologies will become more vulnerable to: (a) environmental disturbances that produce transient (or soft) errors, (b) latent manufacturing defects as well as aging/wearout phenomena that produce permanent (or hard) errors, and (c) verification inefficiencies that allow important design bugs to escape in the system. In an effort to cope with these reliability threats, several research teams have recently proposed multicore processor architectures that provide low-cost dependability guarantees against hardware errors and design bugs. This paper focuses on dependable multicore processor architectures that integrate solutions for online error detection, diagnosis, recovery, and repair during field operation. It discusses taxonomy of representative approaches and presents a qualitative comparison based on: hardware cost, performance overhead, types of faults detected, and detection latency. It also describes in more detail three recently proposed effective architectural approaches: a software-anomaly detection technique (SWAT), a dynamic verification technique (Argus), and a core salvaging methodology.",2011,0, 5492,ScTMR: A scan chain-based error recovery technique for TMR systems in safety-critical applications,"We propose a roll-forward error recovery technique based on multiple scan chains for TMR systems, called Scan chained TMR (ScTMR). ScTMR reuses the scan chain flip-flops employed for testability purposes to restore the correct state of a TMR system in the presence of transient or permanent errors. In the proposed ScTMR technique, we present a voter circuitry to locate the faulty module and a controller circuitry to restore the system to the fault-free state. As a case study, we have implemented the proposed ScTMR technique on an embedded processor, suited for safety-critical applications. 
Exhaustive fault injection experiments reveal that the proposed architecture has the error detection and recovery coverage of 100% with respect to Single Event Upset (SEU) while imposing a negligible area and performance overhead as compared to traditional TMR-based techniques.",2011,0, 5493,Multi-level attacks: An emerging security concern for cryptographic hardware,"Modern hardware and software implementations of cryptographic algorithms are subject to multiple sophisticated attacks, such as differential power analysis (DPA) and fault-based attacks. In addition, modern integrated circuit (IC) design and manufacturing follows a horizontal business model where different third-party vendors provide hardware, software and manufacturing services, thus making it difficult to ensure the trustworthiness of the entire process. Such business practices make the designs vulnerable to hard-to-detect malicious modifications by an adversary, termed as “Hardware Trojans”. In this paper, we show that malicious nexus between multiple parties at different stages of the design, manufacturing and deployment makes the attacks on cryptographic hardware more potent. We describe the general model of such an attack, which we refer to as Multi-level Attack, and provide an example of it on the hardware implementation of the Advanced Encryption Standard (AES) algorithm, where a hardware Trojan is embedded in the design. We then analytically show that the resultant attack poses a significantly stronger threat than that from a Trojan attack by a single adversary. We validate our theoretical analysis using power simulation results as well as hardware measurement and emulation on a FPGA platform.",2011,0, 5494,Effects of Soft Error to System Reliability,"Soft errors on hardware could affect the reliability of computer system. To estimate system reliability, it is important to know the effects of soft errors to system reliability. This paper explores the effects of soft errors to computer system reliability. We propose a new approach to measure system reliability for soft error factor. In our approach, hardware components reliability is concerned first. Then, system reliability which shows the ability to perform required function is concerned. We equal system reliability to software reliability based on the mechanism that soft errors affect system reliability. We build a software reliability model under soft errors condition. In our software model, we analyze the state of software combining with the state of hardware. For program errors which are resulted from soft errors, we give an analysis of error mask. These real errors which could lead to software failure are distinguished. Finally, our experiments illustrate our analyses and validate our approach.",2011,0, 5495,A Facility Framework for Distributed Application,"The domain of distributed applications is developing rapidly. Facilities to support distributed applications have till now been designed on a case by case basis for each specialized user application. A systematic study and a generic facility framework for executing distributed applications are currently nonexistent and progress towards their development would have a significant impact in the seamless use of distributed applications. In line with these objectives, we have developed a component-based facility framework for distributed applications. The facility is defined as a high level conceptual facility which serves as an infrastructure providing services for distributed applications. 
The conceptual facility includes hardware and software (e.g. sensors, actuators and controllers) as sub-layers, which are dispersed physically. In this paper, we describe the framework, and examples are used to illustrate potential implementations of important concepts. The workflow of a distributed application is also provided. An implementation of a distributed application, with a partial version of the designed facility framework, for the water level control system of a nuclear power plant steam generator is used for evaluation of the concepts. Results convince us that the implementation as well as the facility is feasible. Application performance is shown to be affected by the time delay of data transmission and reasons for such delay are examined. A network quality test provides statistical estimates of data transmission delays due to the network. The results demonstrate that network quality is not a bottleneck for implementation for implementation of the component-based distributed application and support the hypothesis that a feasible implementation of such a conceptual facility framework is possible.",2011,0, 5496,Adaptive Genetic Algorithm for QoS-aware Service Selection,"An adaptive Genetic Algorithm is presented to select optimal web service composite plan from a lot of composite plans on the basis of global Quality-of-Service (QoS) constraints. In this Genetic Algorithm, a population diversity measurement and an adaptive crossover strategy are proposed to further improve the efficiency and convergence of Genetic Algorithm. The probability value of the crossover operation can be set according to the combination of population diversity and individual fitness. The algorithm can get more excellent composite service plan because it accords with the characteristic of web service selection very well. Some simulation results on web service selection with global QoS constraints have shown that the adaptive Genetic Algorithm can gain quickly better composition service plan that satisfies the global QoS requirements.",2011,0, 5497,Software faults prediction using multiple classifiers,"In recent years, the use of machine learning algorithms (classifiers) has proven to be of great value in solving a variety of problems in software engineering including software faults prediction. This paper extends the idea of predicting software faults by using an ensemble of classifiers which has been shown to improve classification performance in other research fields. Benchmarking results on two NASA public datasets show all the ensembles achieving higher accuracy rates compared with individual classifiers. In addition, boosting with AR and DT as components of an ensemble is more robust for predicting software faults.",2011,0, 5498,Phase-based tuning for better utilization of performance-asymmetric multicore processors,"The latest trend towards performance asymmetry among cores on a single chip of a multicore processor is posing new challenges. For effective utilization of these performance-asymmetric multicore processors, code sections of a program must be assigned to cores such that the resource needs of code sections closely matches resource availability at the assigned core. Determining this assignment manually is tedious, error prone, and significantly complicates software development. To solve this problem, we contribute a transparent and fully-automatic process that we call phase-based tuning which adapts an application to effectively utilize performance-asymmetric multicores. 
Compared to the stock Linux scheduler we see a 36% average process speedup, while maintaining fairness and with negligible overheads.",2011,0, 5499,Using machines to learn method-specific compilation strategies,"Support Vector Machines (SVMs) are used to discover method-specific compilation strategies in Testarossa, a commercial Just-in-Time (JiT) compiler employed in the IBM® J9 Java™ Virtual Machine. The learning process explores a large number of different compilation strategies to generate the data needed for training models. The trained machine-learned model is integrated with the compiler to predict a compilation plan that balances code quality and compilation effort on a per-method basis. The machine-learned plans outperform the original Testarossa for start-up performance, but not for throughput performance, for which Testarossa has been highly hand-tuned for many years.",2011,0, 5500,A comparison study of automatic speech quality assessors sensitive to packet loss burstiness,"The paper delves the behavior rating of new emerging automatic quality assessors of VoIP calls subject to bursty packet loss process. The examined speech quality assessment (SQA) algorithms are able to estimate speech quality of live VoIP calls at run-time using control information extracted from header content of received packets. They are especially designed to be sensitive to packet loss burstiness. The performance evaluation study is performed using a dedicated set-up software-based SQA framework. It offers a personalized packet killer and includes implementation of four SQA algorithms. A speech quality database, which covers a wide range of bursty packet loss conditions, has been created then thoroughly analyzed. Our important findings are the following: (1) all examined automatic bursty-loss aware speech quality assessors achieve a satisfactory correlation under upper (>20%) and lower (<10%) ranges of packet loss process (2) They exhibit a clear weakness to assess speech quality under a moderated packet loss process (3) The accuracy of sequence-by-sequence basis of examined SQA algorithms should be addressed in details for further precision.",2011,0, 5501,Software reliability prediction model based on PSO and SVM,"Software reliability prediction classifies software modules as fault-prone modules and less fault-prone modules at the early age of software development. As to a difficult problem of choosing parameters for Support Vector Machine (SVM), this paper introduces Particle Swarm Optimization (PSO) to automatically optimize the parameters of SVM, and constructs a software reliability prediction model based on PSO and SVM. Finally, the paper introduces Principal Component Analysis (PCA) method to reduce the dimension of experimental data, and inputs these reduced data into software reliability prediction model to implement a simulation. The results show that the proposed prediction model surpasses the traditional SVM in prediction performance.",2011,0, 5502,Numerical simulation of FLD based on leaner strain path for fracture on auto panel surface,"In order to emerge the traditional measurement's shortage of auto body panel, we proposed the corrected FLD based on the linear path method and calculation methods. According to the above mentioned programs and the fracture defect diagnosis, we developed a CAE module for the fracture defect analysis based on VC++ environment, and solved the problems which the traditional sheet metal forming CAE software can not accurately predict. 
And then a forming process of a hood is simulated by applying the proposed method and AUTOFORM. Comparison between the two simulation results is done which shows the method we proposed is better, and then we introduced some methods to optimize the adjustment amount of metal flow and the stamping dies for fracture. Some suggestions are given by investigating the adjustment amount and modification of the stamping die.",2011,0, 5503,Application of GPR and Rayleigh wave prospecting in blasting compaction of ore terminal,"The applications of both ground penetrating radar (GPR) and transient Rayleigh waves techniques were introduced in blasting compaction of ore terminal project in Jiaonan bay. Considered the geological structure and special detection conditions of seawall, a 25MHz GPR antenna was chosen to ensure the adequate penetrating depth, and a field test was carried out to obtain exact propagation velocity of electromagnetic waves in different layers. In transient Rayleigh wave detection, the GeopenRWA software which contains an effective inversion scheme was used to process the data. The result of two methods showed good agreement for determining the thickness of riprap layer, and therefore, has potential for quality control of seawall construction and accurate calculation of earthwork volume.",2011,0, 5504,Assessing Oracle Quality with Checked Coverage,"A known problem of traditional coverage metrics is that they do not assess oracle quality - that is, whether the computation result is actually checked against expectations. In this paper, we introduce the concept of checked coverage - the dynamic slice of covered statements that actually influence an oracle. Our experiments on seven open-source projects show that checked coverage is a sure indicator for oracle quality - and even more sensitive than mutation testing, its much more demanding alternative.",2011,0, 5505,An Empirical Evaluation of Assertions as Oracles,"In software testing, an oracle determines whether a test case passes or fails by comparing output from the program under test with the expected output. Since the identification of faults through testing requires that the bug is both exercised and the resulting failure is recognized, it follows that oracles are critical to the efficacy of the testing process. Despite this, there are few rigorous empirical studies of the impact of oracles on effectiveness. In this paper, we report the results of one such experiment in which we exercise seven core Java classes and two sample programs with branch-adequate, input only(i.e., no oracle) test suites and collect the failures observed by different oracles. For faults, we use synthetic bugs created by the muJava mutation testing tool. In this study we evaluate two oracles: (1) the implicit oracle (or ""null oracle"") provided by the runtime system, and (2) runtime assertions embedded in the implementation (by others) using the Java Modeling Language. The null oracle establishes a baseline measurement of the potential benefit of rigorous oracles, while the assertions represent a more rigorous approach that is sometimes used in practice. The results of our experiments are interesting. First, on a per-method basis, we observe that the null oracle catches less than 11% of the faults, leaving more than 89% uncaught. Second, we observe that the runtime assertions in our subjects are effective at catching about 53% of the faults not caught by null oracle. 
Finally, by analyzing the data using data mining techniques, we observe that simple, code-based metrics can be used to predict which methods are amenable to the use of assertion-based oracles with a high degree of accuracy.",2011,0, 5506,An Empirical Study on the Relation between Dependency Neighborhoods and Failures,"Changing source code in large software systems is complex and requires a good understanding of dependencies between software components. Modification to components with little regard to dependencies may have an adverse impact on the quality of the latter, i.e., increase their risk to fail. We conduct an empirical study to understand the relationship between the quality of components and the characteristics of their dependencies such as their frequency of change, their complexity, number of past failures and the like. Our study has been conducted on two large software systems: Microsoft VISTA and ECLIPSE. Our results show that components that have outgoing dependencies to components with higher object-oriented complexity tend to have fewer field failures for VISTA, but the opposite relation holds for ECLIPSE. Likewise, other notable observations have been made through our study that (a) confirm that certain characteristics of components increase the risk of their dependencies to fail and (b) some of the characteristics are project specific while some were also found to be common. We expect that such results can be leveraged for use to provide new directions for research in defect prediction, test prioritization and related research fields that utilize code dependencies in their empirical analysis. Additionally, these results provide insights to engineers on the potential reliability impacts of new component dependencies based upon the characteristics of the component.",2011,0, 5507,Cost Optimizations in Runtime Testing and Diagnosis of Systems of Systems,"In practically all development processes tests are used to detect the presence of faults. This is not an exception for critical and high-availability systems. However, these systems cannot be taken offline or duplicated for testing in some cases. This makes runtime testing necessary. This paper presents work aimed at optimizing the three main sources of testing cost: preparation, execution and diagnosis. First, preparation cost is optimized by defining a metric of the runtime testability of the system, used to elaborate an implementation plan of preparative work for runtime testing. Second, the interrelated nature of test execution cost and diagnostic cost is highlighted and a new diagnostic test prioritization is introduced.",2011,0, 5508,Tracking hardware evolution,"Software evolution is the term used to describe the process of developing and updating software systems. Software repositories such as versioning systems and bug tracking systems are used to manage the evolution of software projects. The mining of this information is used to support predictions and improve design and reuse. Integrated circuit development can also benefit from these techniques. Nowadays, both software and hardware development use repositories and bug tracking systems. There are many hardware open source projects such as SUN's OpenSparc and designs at Opencores.org. We propose a methodology to track specific HDL metrics in order to improve design quality. Our results present a case study that correlates HDL metrics and bug proneness of Verilog HDL modules. 
We also present EyesOn, an open source framework designed to automate historical and complexity metrics tracking of HDL projects.",2011,0, 5509,Occurrence probability analysis of a path at the architectural level,"In this paper, we propose an algorithm to compute the occurrence probability for a given path precisely in an acyclic synthesizable VHDL or software code. This can be useful for the ranking of critical paths and in a variety of problems that include compiler-level architectural optimization and static timing analysis for improved performance. Functions that represent condition statements at the basic blocks are manipulated using Binary Decision Diagrams (BDDs). Experimental results show that the proposed method outperforms the traditional Monte Carlo simulation approach. The later is shown to be non-scalable as the number of inputs increases.",2011,0, 5510,CT Saturation Detection Based on Waveform Analysis Using a Variable-Length Window,"Saturation of current transformers (CTs) can lead to maloperation of protective relays. Using the waveshape differences between the distorted and undistorted sections of fault current, this paper introduces a novel method to quickly detect CT saturation. First, a symmetrical variable-length window is defined for the current waveform. The least error squares technique is employed to process the current inside this window and make two estimations for the current samples exactly before and after the window. CT saturation can be identified based on the difference between these two estimations. The accurate performance of this method is independent of the CT parameters, such as CT remanence and its magnetization curve. Moreover, the proposed method is not influenced by the fault current characteristics, noise, etc., since it is based on the significant differences between the distorted and undistorted fault currents. Extensive simulation studies were performed using PSCAD/EMTDC software and the fast and reliable response of the proposed method for various conditions, including very fast and mild saturation events, was demonstrated.",2011,0, 5511,Simulation Based Functional and Performance Evaluation of Robot Components and Modules,"This paper presents a simulation based test method for functional and performance evaluation of robotic components and modules. In the proposed test method, function test procedure consists of unit, state, and interface tests which assess if the functional specifications of robot component 1 or module 2 are met. As for performance test, simulation environment provides a down scaled virtual work space for performance test of robot module accommodating virtual devices in conformity with the detailed performance specifications of real robot components. The proposed method can be used for verification of reliability of robot modules and components which prevents faults of them before their usage in real applications. In addition, the developed test system can be extended to support various test conditions implying possible cost saving for additional tests.",2011,0, 5512,Qualification and Selection of Off-the-Shelf Components for Safety Critical Systems: A Systematic Approach,"Mission critical systems are increasingly been developed by means of Off-The-Shelf (OTS) items since this allows reducing development costs. Crucial issues to be properly treated are (i) to assess the quality of each potential OTSitem to be used and (ii) to select the one that better fits the system requirements. 
Despite the importance of these issues, the current literature lacks a systematic approach to perform the previous two operations. The aim of this paper is to present a framework that can overcome this lack. Reasoning from the available product assurance standards for certifying mission critical systems, the proposed approach is based on the customized quality model that describes the quality attributes. Such quality model will guide a proper evaluation of OTS products, and the choice of which product to use is based on the outcomes of such an evaluation process. This framework represents a key solution to have a dominant role in the market of mission critical systems due to the demanding request by manufactures of such systems for an efficient qualification/certification process.",2011,0, 5513,Discussion on questions about using artificial neural network for predicting of concrete property,"Several questions about predicting concrete property using BP artificial neural network have been discussed, including the selection of network structure, the determination of sample capacity and grouping method, the protection from over-fitting, and the comparison on precision of prediction. For the network-structure, it has been found that directly apply the consumption of raw-material and other crucial quality indices as the units of input can bring about a satisfactory result of prediction, in which a single hide layer holds 10 units and the workability along with the strength and durability formed two sub-networks simultaneously. For the sample capacity and grouping method, at least 100 sets of samples are necessary to find the intrinsic regularity, among them 1/3-1/4 should be taken as test samples. A new tactics for error-tracking has been proposed which is verified effective to avoid the over-fitting. The comparison of effectiveness and feasibility between BP neural networks and linear regression algorithm showed that BP neural networks have better performance in accuracy of prediction. Finally, an applicable software has been developed and used as examples to predict 163 sets of mixes for a ready-mixed concrete plant, to show its application in detail.",2011,0, 5514,Simulation of sensing field for electromagnetic tomography system,"Electromagnetic tomography has potential value in process measurement. The frontier of electromagnetic tomography system is the sensor array. Owing to the excited signal acting on the sensor array directly, the excited strategy and the frequency and amplitude of the signal affect the quality of the information that the detected coil acquired from the object space. Furthermore, it would affect the accuracy of the information post extracted from the object field. To improve the sensitivity of the sensor array on the changes of the object field distribution, upgrade the sensitivity and accuracy of the system and guarantee high precision and high stability of the experimental data, use the finite element simulation software COMSOL Multiphysics to analyze the excited strategy and the characteristic of the excitation frequency of electromagnetic tomography. Establish the foundation in optimal using of electromagnetic tomography system.",2011,0, 5515,A distributed cable harness tester based on CAN bus,"This paper discusses a distributed cable harness tester based on CAN bus, which has a few functions such as connection detecting of a wire, diode orientation testing and resistor's impedance testing. 
In this article, application layer protocal design is researched on in particular in order to improve the tester's performance, and the software design of upper computer and the framework of handware of tester nodes are introduced in details. The tester is designed to ensure quality and reliability of cable harness. Through detecting , early failure products such as breakage circuit, short circuit and wrong conductor arrangement can be rejected. So it improves the efficiency of detection.",2011,0, 5516,The design of circular saw blade dynamic performance detection of high-speed data acquisition system,"The dynamic properties of the circular saw blade when cutting at a high speed influence the cutting quality at a large extent. This paper gives a brief introduction about the employment of advanced sensors and detection methods together with self-detection software system detect circular saw blade vibration quickly and accurately, non-destructively at the condition of high-speed rotating of the circular saw blade. The results of the actual measurement of the different carbide circular saw blades show that this detection system has the quality of high accuracy and high reliability. It is fitful for the production process of circular saw blade body for quality control.",2011,0, 5517,Design of a Novel Noninvasive Blood Glucose Monitor Based on Improved Holography Concave Grating NIR-Spectrometer,"Real-time monitoring of blood glucose concentration (BGC) is an efficient way to prevent diabetes. And the noninvasive detection of BGC has been becoming a popular study point. In order to overcome some limitations and improve the performances of the noninvasive BGC detection based on spectroscopy, a novel blood glucose monitor (BGM) based on an improved NIR-spectrometer is designed in this paper. Meanwhile, a custom-built spectrometer for BGM based on the holography concave grating is developed. In order to eliminate the stray-light and improve the performances of the BGM, the grooves are delineated and black-painted on the inner-wall of the spectrometer. In addition, the linear CCD with combined data acquisition (DAQ) card and the virtual-spectrometer based on Lab VIEW are used as the spectrum acquisition hardware and software-platform unit. Experimental results show that the wavelength range, resolution and spectral quality of the BGM are improved. The wavelength range of the BGM is 300-1000nm; its resolution can reach 1nm. Therefore, the BGM has the potential values in the research of the diabetes and BGC detection.",2011,0, 5518,A new methodology for realistic open defect detection probability evaluation under process variations,"CMOS IC scaling has provided significant improvements in electronic circuit performance. Advances in test methodologies to deal with new failure mechanisms and nanometer issues are required. Interconnect opens are an important defect mechanism that requires detailed knowledge of its physical properties. In nanometer process, variability is predominant and considering only nominal value of parameters is not realistic. In this work, a model for computing a realistic coverage of via open defect that takes into account the process variability is proposed. Correlation between parameters of the affected gates is considered. Furthermore, spatial correlation of the parameters for those gates tied to the defective floating node can also influence the detectability of the defect. 
The proposed methodology is implemented in a software tool to determine the probability of detection of via opens for some ISCAS benchmark circuits. The proposed detection probability evaluation together with a test methodology to generate favorable logic conditions at the coupling lines can allow a better test quality leading to higher product reliability.",2011,0, 5519,CEDA: Control-Flow Error Detection Using Assertions,"This paper presents an efficient software technique, control-flow error detection through assertions (CEDA), for online detection of control-flow errors. Extra instructions are automatically embedded into the program at compile time to continuously update runtime signatures and to compare them against preassigned values. The novel method of computing runtime signatures results in a huge reduction in the performance overhead, as well as the ability to deal with complex programs and the capability to detect subtle control-flow errors. The widely used C compiler, GCC, has been modified to implement CEDA, and the SPEC benchmark programs were used as the target to compare with earlier techniques. Fault injection experiments were used to demonstrate the effect of control-flow errors on software and to evaluate the fault detection capabilities of CEDA. Based on a new comparison metric, method efficiency, which takes into account both error coverage and performance overhead, CEDA is found to be much better than previously proposed methods.",2011,0, 5520,Towards a software quality assessment model based on open-source statical code analyzers,"In the context of software engineering, quality assessment is not straightforward. Generally, quality assessment is important since it can cut costs in the product life-cycle. A software quality assessment model based on open-source analyzers and quality factors facilitates quality measurement. Our quality assessment model is based on three principles: mapping rules to quality factors, computing scores based on the percentage of rule violating entities (classes, methods, instructions), assessing quality as a weighted mean of the rule attached scores.",2011,0, 5521,A suite of java specific metrics for software quality assessment based on statical code analyzers,"Software quality assessment can cut costs significantly from early development stages. On the other hand quality assessment helps in taking development decisions, checking the effect of fault corrections, estimating maintenance effort. Fault density based quality assessment relying on statical source code analyzers need language specific metrics in order to quantify quality as a number. Thus, we identified and defined informally a suite of Java metrics in order to accomplish our quality assessment goal.",2011,0, 5522,Accurate location of phase-to-earth and phase-to-phase faults on power transmission lines using two-end synchronized measurements,This paper presents a new non-iterative algorithm for accurate location of phase-to-earth and phase-to-phase faults on power transmission lines. Input signals of the fault locator are considered as obtained from two-end synchronized measurements of three-phase currents and voltages. High accuracy of fault location is assured by considering a distributed-parameter line model and processing the signals from the fault interval only. The developed algorithm has been thoroughly tested with use of signals generated by ATP-EMTP software for wide range of simulations. 
Selected results from such testing are presented and discussed.,2011,0, 5523,Employing S-transform for fault location in three terminal lines,"This paper has proposed a new fault location method for three terminal transmission lines. Once a fault occurs in a transmission line, high frequency transient components through travelling of the waves are appeared in all terminals. In this paper, S-transform is employed to detect the arrival time of these waves to terminals. For simulating various conditions, ATP-EMTP software and to process the proposed fault location method MATLAB software are used. The results have shown the good performance of the algorithm in different conditions.",2011,0, 5524,Evaluation and performance comparison of power swing detection algorithms in presence of series compensation on transmission lines,"Any sudden change in the configuration or the loading of an electrical network causes power swing between the load concentrations of the network. In order to prevent the distance protection from tripping during such conditions, a power swing blocking is utilized. This paper compares and evaluates the performance of several power swing detection algorithms in the presence of series compensation and without it. The decreasing impedance, the Vcosφ and the power derivation methods are ordinary algorithms for power swing detection. A precise model for power swing conditions has been generated by EMTDC/PSCAD software. In this paper this model is used to simulate the algorithms behavior under different operational conditions.",2011,0, 5525,A real-time coarse-to-fine multiview capture system for all-in-focus rendering on a light-field display,"We present an end-to-end system capable of real-time capturing and displaying with full horizontal parallax high-quality 3D video contents on a cluster-driven multiprojector light-field display. The capture component is an array of low-cost USB cameras connected to a single PC. Raw M-JPEG data coming from the software-synchronized cameras are multicast over Gigabit Ethernet to the back-end nodes of the rendering cluster, where they are decompressed and rendered. For all-in-focus rendering, view-dependent depth is estimated on the GPU using a customized multiview space-sweeping approach based on fast Census-based area matching implemented in CUDA. Realtime performance is demonstrated on a system with 18 VGA cameras and 72 SVGA rendering projectors.",2011,0, 5526,An AMI based measurement and control system in smart distribution grid,"To realize some of the smart grid goals, for the distribution system of the rural area, with GPRS communication network, a feeder automation based on AMI is proposed. The three parts of the system are introduced. Integrated with the advanced communication and measurement technology, the proposed system can monitor the operating situation and status of breakers, detect and locate the fault of the feeders. The information from the system will help realizing the advanced distribution operation, such as improve power quality, loss detection, state estimation and so on. 
The application case in Qingdao utilities in Shandong Province, PR China shows the effectiveness of the proposed system.",2011,0, 5527,Simulation and experimental analysis of a ZigBee sensor network with fault detection and reconfiguration mechanism,"In this paper, the networking architecture of wireless sensor system is studied and proposed by simulation analysis and experimental testing to implement the fault detection and reconfiguration mechanism on ZigBee network. The tool software NS2 is used to simulate and analyze the networking topologies and transmission characteristics for ZigBee network system. By way of simulation analysis of different networking topologies, the system performance of ZigBee sensor network is discussed and recognized individually for the mesh and tree routing algorithms, respectively. Moreover, the CC2430 development kit of Texas Instrument (TI) is used to develop the ZigBee nodes including the coordinator, router, and end-devices to form a specific application scenario for disaster prevention and warning system. By applying the Inter-PAN conception and algorithm, the networking architecture of ZigBee sensor system is proposed and exercised to carry out the fault detection and reconfiguration functions which may increase the reliability of data transmission on ZigBee network. Finally, through the various and practical experiments, the proposed networking topologies are implemented and tested to illustrate and evaluate the system feasibility and performance of proposed ZigBee network for wireless sensor system.",2011,0, 5528,Empirical study of an intelligent argumentation system in MCDM,"Intelligent argumentation based collaborative decision making system assists stakeholders in a decision making group to assess various alternatives under different criterion based on the argumentation. A performance score of each alternative under every criterion in Multi-Criteria Decision Making (MCDM) is represented in a decision matrix and it denotes satisfaction of the criteria by that alternative. The process of determining the performance scores of alternatives in a decision matrix for criterion could be controversial sometimes because of the subjective nature of criterion. We developed a framework for acquiring performance scores in a decision matrix for multi-criteria decision making using an intelligent argumentation and collaborative decision support system we developed in the past [1]. To validate the framework empirically, we have conducted a study in a group of stakeholders by providing them an access to use the intelligent argumentation based collaborative decision making tool over the Web. The objectives of the study are: 1) to validate the intelligent argumentation system for deriving performance scores in multi-criteria decision making, and 2) to validate the overall effectiveness of the intelligent argumentation system in capturing rationale of stakeholders. The results of the empirical study are analyzed in depth and they show that the system is effective in terms of collaborative decision support and rationale capturing. In this paper, we present how the study was carried out and its empirical results.",2011,0, 5529,Software and communications architecture for Prognosis and Health Monitoring of ocean-based power generator,"This paper presents a communications and software architecture in support of Prognosis and Health Monitoring (PHM) applications for renewable ocean-based power generation. 
The generator/turbine platform is instrumented with various sensors (e.g. vibration, temperature) that generate periodic measurements used to assess the current system health and to project its future performance. The power generator platform is anchored miles offshore and uses a pair of wireless data links for monitoring and control. Since the link is expected to be variable and unreliable, being subject to challenging environmental conditions, the main functions of the PHM system are performed on a computing system located on the surface platform. The PHM system architecture is implemented using web services technologies following MIMOSA OSA-CBM standards. To provide sufficient Quality of Service for mission-critical traffic, the communications system employs application-level queue management with semantic-based filtering for the XML PHM messages, combined with IP packet traffic control and link quality monitoring at the network layer.",2011,0, 5530,The possibility of application the optical wavelength division multiplexing network for streaming multimedia distribution,"In this paper, simulation is used to investigate the possibility of streaming multimedia distribution over optical wavelength division multiplexing (WDM) network with optical burst switching (OBS) nodes. Simulation model is developed using software tool Delsi for Delfi 4.0. Pareto generator is used to model real-time multimedia stream. The wavelength allocation (WA) method and the deflection routing are implemented in OBS nodes. Performance measures, packet loss and delay, are estimated in order to investigate the quality of multimedia service. Statistical analysis of simulation output is performed by estimating the confidence intervals for a given degree according to the Student distribution. Obtained results indicate that optical WDM network manages multimedia contents distribution in efficient way to give a high quality of service.",2011,0, 5531,Applying source code analysis techniques: A case study for a large mission-critical software system,"Source code analysis has been and still is extensively researched topic with various applications to the modern software industry. In this paper we share our experience in applying various source code analysis techniques for assessing the quality of and detecting potential defects in a large mission-critical software system. The case study is about the maintenance of a software system of a Bulgarian government agency. The system has been developed by a third-party software vendor over a period of four years. The development produced over 4 million LOC using more than 20 technologies. Musala Soft won a tender for maintaining this system in 2008. Although the system was operational, there were various issues that were known to its users. So, a decision was made to assess the system's quality with various source code analysis tools. The expectation was that the findings will reveal some of the problems' cause, allowing us to correct the issues and thus improve the quality and focus on functional enhancements. Musala Soft had already established a special unit - Applied Research and Development Center - dealing with research and advancements in the area of software system analysis. Thus, a natural next step was for this unit to use the know-how and in-house developed tools to do the assessment. 
The team used various techniques that had been subject to intense research, more precisely: software metrics, code clone detection, defect and “code smells” detection through flow-sensitive and points-to analysis, software visualization and graph drawing. In addition to the open-source and free commercial tools, the team used internally developed ones that complement or improve what was available. The internally developed Smart Source Analyzer platform that was used is focused on several analysis areas: source code modeling, allowing easy navigation through the code elements and relations for different programming languages; quality audit through software metrics by aggregating various metrics into a more meaningful quality characteristic (e.g. “maintainability”); source code pattern recognition - to detect various security issues and “code smells”. The produced results presented information about both the structure of the system and its quality. As the analysis was executed in the beginning of the maintenance tenure, it was vital for the team members to quickly grasp the architecture and the business logic. On the other hand, it was important to review the detected quality problems as this guided the team to quick solutions for the existing issues and also highlighted areas that would impede future improvements. The tool IPlasma and its System Complexity View (Fig. 1) revealed where the business logic is concentrated, which are the most important and which are the most complex elements of the system. The analysis with our internal metrics framework (Fig. 2) pointed out places that need refactoring because the code is hard to modify on request or testing is practically impossible. The code clone detection tools showed places where copy and paste programming has been applied. PMD, Find Bugs and Klockwork Solo tools were used to detect various “code smells” (Fig. 3). There were a number of occurrences that were indeed bugs in the system. Although these results were productive for the successful execution of the project, there were some challenges that should be addressed in the future through more extensive research. The two aspects we consider the most important are usability and integration. As most of the tools require very deep understanding of the underlying analysis, the whole process requires tight cooperation between the analysis team and the maintenance team. For example, most of the metrics tools available provide specific values for a given metric without any indication what the value means and what is the threshold. Our internal metrics framework aggregates the met",2011,0, 5532,GPU-Based Fast Iterative Reconstruction of Fully 3-D PET Sinograms,"This work presents a graphics processing unit (GPU)-based implementation of a fully 3-D PET iterative reconstruction code, FIRST (Fast Iterative Reconstruction Software for [PET] Tomography), which was developed by our group. We describe the main steps followed to convert the FIRST code (which can run on several CPUs using the message passing interface [MPI] protocol) into a code where the main time-consuming parts of the reconstruction process (forward and backward projection) are massively parallelized on a GPU. Our objective was to obtain significant acceleration of the reconstruction without compromising the image quality or the flexibility of the CPU implementation. Therefore, we implemented a GPU version using an abstraction layer for the GPU, namely, CUDA C. 
The code reconstructs images from sinogram data, and with the same System Response Matrix obtained from Monte Carlo simulations than the CPU version. The use of memory was optimized to ensure good performance in the GPU. The code was adapted for the VrPET small-animal PET scanner. The CUDA version is more than 70 times faster than the original code running in a single core of a high-end CPU, with no loss of accuracy.",2011,0, 5533,An approach for source code classification to enhance maintainability,"The importance of software development is a design and programming to create a code quality, especially maintainability factor. An approach for source code classification to enhance maintainability is a method to classify and improve code. This approach can increase High cohesion and Low coupling that effect to the internal and the external of software. It can reduce degraded integrity. First of all, we classify source code with software metrics and fuzzy logic and then improve bad smell, ambiguous code with refactoring to be a clean code. The last step, we evaluate the quality of clean code with maintainability measurement.",2011,0, 5534,Bad-smell prediction from software design model using machine learning techniques,Bad-smell prediction significantly impacts on software quality. It is beneficial if bad-smell prediction can be performed as early as possible in the development life cycle. We present methodology for predicting bad-smells from software design model. We collect 7 data sets from the previous literatures which offer 27 design model metrics and 7 bad-smells. They are learnt and tested to predict bad-smells using seven machine learning algorithms. We use cross-validation for assessing the performance and for preventing over-fitting. Statistical significance tests are used to evaluate and compare the prediction performance. We conclude that our methodology have proximity to actual values.,2011,0, 5535,Apad: A QoS Guarantee System for Virtualized Enterprise Servers,"Today's data center often employs virtualization to allow multiple enterprise applications to share a common hosting platform in order to improve resource and space utilization. It could be a greater challenge when the shared server becomes overloaded or enforces priority of the common resource among applications in such an environment. In this paper, we propose Apad, a feedback-based resource allocation system which can dynamically adjust the resource shares to multiple virtual machine nodes in order to meet performance target on shared virtualized infrastructure. To evaluate our system design, we built a test bed hosting several virtual machines which employed Xen Virtual Machine Monitor (VMM), and using Apache server along with its workload-generated tool. Our experiment results indicate that our system is able to detect and adapt to resource requirement that changes over time and allocate virtualized resources accordingly to achieve application-level Quality of Service (QoS).",2011,0, 5536,Identity attack and anonymity protection for P2P-VoD systems,"As P2P multimedia streaming service is becoming more popular, it is important for P2P-VoD content providers to protect their servers identity. In this paper, we first show that it is possible to launch an ""identity attack"": exposing and identifying servers of peer-to-peer video-on-demand (P2P-VoD) systems. 
The conventional wisdom of the P2P-VoD providers is that identity attack is very difficult because peers cannot distinguish between regular peers and servers in the P2P streaming process. We are the first to show that it is otherwise, and present an efficient and systematic methodology to perform P2P-VoD servers detection. Furthermore, we present an analytical framework to quantify the probability that an endhost is indeed a P2P-VoD server. In the second part of this paper, we present a novel architecture that can hide the identity and provide anonymity protection for servers in P2P-VoD systems. To quantify the protective capability of this architecture, we use the ""fundamental matrix theory"" to show the high complexity of discovering all protective nodes so as to disrupt the P2P-VoD service. We not only validate the model via extensive simulation, but also implement this protective architecture on PlanetLab and carry out measurements to reveal its robustness against identity attack.",2011,0, 5537,Finding Interaction Faults Adaptively Using Distance-Based Strategies,"Software systems are typically large and exhaustive testing of all possible input parameters is usually not feasible. Testers select tests that they anticipate may catch faults, yet many unanticipated faults may be overlooked. This work complements current testing methodologies by adaptively dispensing one-test-at-a-time, where each test is as ""distant"" as possible from previous tests. Two types of distance measures are explored: (1) distance defined in relation to combinations of parameter-values not previously tested together and (2) distance computed as the maximum minimal Hamming distance from previous tests. Experiments compare the effectiveness of these two types of distance-based tests and random tests. Experiments include simulations, as well as examination of instrumented data from an actual system, the Traffic Collision Avoidance System (TCAS). Results demonstrate that the two instantiations of distance-based tests often find more faults sooner and in fewer tests than randomly generated tests.",2011,0, 5538,Self-diagnosis for large scale wireless sensor networks,"Existing approaches to diagnosing sensor networks are generally sink-based, which rely on actively pulling state information from all sensor nodes so as to conduct centralized analysis. However, the sink-based diagnosis tools incur huge communication overhead to the traffic sensitive sensor networks. Also, due to the unreliable wireless communications, sink often obtains incomplete and sometimes suspicious information, leading to highly inaccurate judgments. Even worse, we observe that it is always more difficult to obtain state information from the problematic or critical regions. To address the above issues, we present the concept of self-diagnosis, which encourages each single sensor to join the fault decision process. We design a series of novel fault detectors through which multiple nodes can cooperate with each other in a diagnosis task. The fault detectors encode the diagnosis process to state transitions. Each sensor can participate in the fault diagnosis by transiting the detector's current state to a new one based on local evidences and then pass the fault detector to other nodes. Having sufficient evidences, the fault detector achieves the Accept state and outputs the final diagnosis report. 
We examine the performance of our self-diagnosis tool called TinyD2 on a 100-node testbed.",2011,0, 5539,Multi-field range encoding for packet classification in TCAM,"Packet classification has wide applications such as unauthorized access prevention in firewalls and Quality of Service support in Internet routers. The classifier containing pre-defined rules is processed by the router to find the best matching rule for each incoming packet and to take appropriate actions. Although many software-based solutions have been proposed, the high search speed required for Internet backbone routers is not easy to achieve. To accelerate packet classification, the state-of-the-art ternary content-addressable memory (TCAM) is a promising solution. In this paper, we propose an efficient multi-field range encoding scheme to solve the problem of storing ranges in TCAM and to decrease TCAM usage. Existing range encoding schemes are usually single-field schemes that perform range encoding processes in the range fields independently. Our performance experiments on real-life classifiers show that the proposed multi-field range encoding scheme uses less TCAM memory than the existing single-field schemes. Compared with existing notable single-field encoding schemes, the proposed scheme uses 12% ~ 33% of the TCAM memory needed in DRIPE or SRGE and 56% ~ 86% of the TCAM memory needed in PPC for classifiers of up to 10k rules.",2011,0, 5540,QoF: Towards comprehensive path quality measurement in wireless sensor networks,"Due to its large scale and constrained communication radius, a wireless sensor network mostly relies on multi-hop transmissions to deliver a data packet along a sequence of nodes. It is of essential importance to measure the forwarding quality of multi-hop paths, and such information shall be utilized in designing efficient routing strategies. Existing metrics like ETX and ETF mainly focus on quantifying the link performance between the nodes while overlooking the forwarding capabilities inside the sensor nodes. The experience of operating GreenOrbs, a large-scale sensor network with 330 nodes, reveals that the quality of forwarding inside each sensor node is at least an equally important factor that contributes to the path quality in data delivery. In this paper we propose QoF, Quality of Forwarding, a new metric which explores the performance in the gray zone inside a node left unattended in previous studies. By combining the QoF measurements within a node and over a link, we are able to comprehensively measure the intact path quality in designing efficient multi-hop routing protocols. We implement QoF and build a modified Collection Tree Protocol (CTP). We evaluate the data collection performance in a test-bed consisting of 50 TelosB nodes, and compare it with the original CTP protocol. The experimental results show that our approach takes both transmission cost and forwarding reliability into consideration, thus achieving a high throughput for data collection.",2011,0, 5541,Case-based reasoning and real-time systems: Exploiting successfully poorer solutions,"In the literature on real-time software applications, case-based reasoning (CBR) techniques have been successfully used to develop systems able to cope with their temporal restrictions. This paper presents a mathematical technique for modelling the generation of solutions by a Real-Time (RT) system employing a CBR that allows their response times to be bounded.
Generally speaking, a system that tries to adapt to a highly dynamic environment needs an efficient integration of high-level processes (deliberative and time-costly, but close-fitting) with low-level processes (reactive, faster but poorer in quality). The most relevant aspect of our current approach is that, unexpectedly, the performance of the system does not get worse any time it retrieves worse cases, even in situations when it has enough time to generate better solutions. We concentrate on formal aspects of the proposed integrated CBR-RT system without establishing which should be the most adequate procedure in a subsequent implementation stage. The advantage of the presented scheme is that it depends neither on the particular problem nor on a concrete environment. It consists of a formal approach that only requires, on the one hand, local information about the average time spent by the system in obtaining a solution and, on the other hand, an estimation of their temporal restrictions.",2011,0, 5542,Motion estimation with Second Order Prediction,"To exploit both the spatial and temporal correlation of the video signal, a Second Order Prediction (SOP) scheme has been presented to eliminate the spatial correlation existing in the motion compensated residue. Although SOP achieves better RD performance by adopting residue prediction in the Mode Decision (MD) stage, it can be further improved by combining residue prediction into Motion Estimation (ME). Our analysis demonstrates that a more accurate motion vector could be obtained if both ME and MD take residue prediction into account. Implementation of the proposed algorithm into the H.264/AVC reference software demonstrates that the enhanced SOP can achieve up to 18.46% of bit-rate saving at equivalent objective quality.",2011,0, 5543,Test case prioritization for regression testing based on fault dependency,"Test case prioritization techniques involve scheduling test cases for regression testing in an order that increases their effectiveness at meeting some performance goal. It is inefficient to re-execute all the test cases in regression testing following software modifications. Using information obtained from previous test case execution, prioritization techniques order the test cases for regression testing so that the most beneficial are executed first, thus allowing an improved effectiveness of testing. One performance goal, the rate of dependency detected among faults, measures how quickly dependencies among faults are detected within the regression testing process. An improved rate of fault dependency detection can provide faster feedback on the software and let developers start debugging on the severe faults that cause other faults to appear later. This paper presents a new metric for assessing the rate of fault dependency detection and an algorithm to prioritize test cases. Using the new metric, the effectiveness of this prioritization is shown by comparing it with non-prioritized test cases. Analysis proves that prioritized test cases are more effective in detecting dependency among faults.",2011,0, 5544,Analysis of quality of object oriented systems using object oriented metrics,Measurement is fundamental to any engineering discipline. There is considerable evidence that object-oriented design metrics can be used to make quality management decisions. This leads to substantial cost savings in allocation of resources for testing or estimation of maintenance effort for a project.
C++ has always been the preferred language of choice for many object oriented systems and many object oriented metrics have been proposed for it. This paper focuses on an empirical evaluation of object oriented metrics in C++. Two projects have been considered as inputs for the study - the first project is a Library management system for a college and the second is a graphical editor which can be used to describe and create a scene. The metric values have been calculated using a semi automated tool. The resulting values have been analyzed to provide significant insight about the object oriented characteristics of the projects.,2011,0, 5545,MngRisk - A decisional framework to measure managerial dimensions of legacy application for rejuvenation through reengineering,"Nowadays legacy system reengineering has emerged as a well-known system evolution technique. The goal of reengineering is to increase the productivity and quality of a legacy system through fundamental rethinking and radical redesigning of the system. A broad range of risk issues and concerns must be addressed to understand and model the reengineering process. Overall success of the reengineering effort requires considering three distinctive but connected areas of interest, i.e. system domain, managerial domain and technical domain. We present a hierarchical managerial domain risk framework, MngRisk, to analyze the managerial dimensions of a legacy system. The fundamental premise of the framework is to observe, extract and categorize the contextual perspective models and risk clusters of the managerial domain. This work contributes a decision-driven framework to identify and assess the risk components of the managerial domain. The proposed framework provides guidance on interpreting the results obtained from the assessment to decide when evolution of a legacy system through reengineering is successful.",2011,0, 5546,Service selection based on status identification in SOA,"Service Oriented Architecture (SOA) has become a new software development paradigm because it provides a flexible framework that can help to reduce development cost and time. SOA promises loosely coupled, interoperable and composable services. Service selection in a dynamic environment is the use of techniques for selecting services and providing quality of service (QoS) to consumers. The dynamic nature of web services is realized by an environment for collecting runtime information of web services. Based on the collected runtime information, several characteristics of service failures and successes are identified and four service statuses are defined. Based on these statuses, an approach to dynamic availability estimation is proposed along with multiple QoS criteria such as cost and performance, which is called Status Identification based Service Selection (SISS).",2011,0, 5547,Type A uncertainty in jitter measurements in communication networks,"Reliable quality of service measurements are crucial to monitor and evaluate the performance of computer and communication networks. To these aims, depending on the applications, several figures of merit have to be measured. They mainly concern: throughput, available bandwidth, packet jitter, one way delay, round trip delay, and packet loss. To estimate these quantities two general classes of measurement systems (also known as “protocol analyzers”) can be considered: Special Purpose Architecture based Protocol Analyzer (SPAPA) and General Purpose Architecture based Protocol Analyzer (GPAPA).
Both measurement solutions can be thought of as composed of a reference data traffic generator and one or more measurement nodes. All these components are made up of hardware and software sections which cooperate to achieve the measurement results. Focusing the attention on the uncertainty involved in the QoS indexes measurement, no information is provided by the manufacturers for either SPAPA or GPAPA. In addition, the analysis of the literature in the field of computer network testing has highlighted that this important aspect is not adequately dealt with or that methodological studies are not yet available. This paper proposes a suitable approach to evaluate the type A measurement uncertainty of SPAPA and GPAPA systems. A general model to study the effect of the involved quantities of influence is presented and a method to quantify their effects is provided. The experimental results show the suitability and the effectiveness of the proposal.",2011,0, 5548,Diagnosis method for suspension-errors detection in electro-dynamic loud-speakers,"In this paper, the wear of the suspensions in a speaker is studied. This phenomenon can be characterized and seen as a variation of the half inductance seen from the amplifier, because that value depends directly on the minimum and maximum excursion in the displacement of the moving coil. Having analyzed the possible cases in which the displacement is affected by worn suspensions, a decision board has been derived by which it is possible to diagnose the state of the suspension by measuring currents and voltages and calculating the half inductance seen from the amplifier.",2011,0, 5549,Pattern and Phonetic Based Street Name Misspelling Correction,"With the increasing amount of information reproduced each year, the quality of the data is subject to damage. Variations and errors in the address fields of databases make address-matching strategies problematic. In this paper, first the sources of the errors in street names are investigated. Then, a hybrid approach, which is a combination of pattern and phonetic-matching algorithms, is used to fix the problems in the street names. The algorithm is evaluated in terms of its efficiency and correction rates. The newly introduced technique shows an improvement of at least 7% in correction rates compared to the well-known spelling correction algorithms.",2011,0, 5550,Early Availability Requirements Modeling Using Use Case Maps,"The design and implementation of distributed real-time systems is often dominated by non-functional considerations like timing, distribution and fault tolerance. As a result, it is increasingly recognized that non-functional requirements should be considered at the earliest stages of the system development life cycle. The ability to model non-functional properties (such as timing constraints, availability, performance and security) at the system requirement level not only facilitates the task of moving towards real-time design, but ultimately supports the early detection of errors through automated validation and verification. This paper introduces a novel approach to describe availability features in Use Case Maps (UCM) specifications. The proposed approach relies on a mapping of availability architectural tactics to UCM components.
We illustrate the applicability of our approach using the ISSU (In Service Software Upgrade) feature on IP routers.",2011,0, 5551,Large Scale Monitoring and Online Analysis in a Distributed Virtualized Environment,"Due to the increase in the number and complexity of large scale systems, performance monitoring and multidimensional quality of service (QoS) management have become a difficult and error prone task for system administrators. Recently, the trend has been to use virtualization technology, which facilitates hosting of multiple distributed systems with minimum infrastructure cost via sharing of computational and memory resources among multiple instances, and allows dynamic creation of even bigger clusters. An effective monitoring technique should not only be fine grained with respect to the measured variables, but should also be able to provide the administrator with a high level overview of all variables of the distributed systems that can affect the QoS requirements. At the same time, the technique should not add a performance burden to the system. Finally, it should be integrated with a control methodology that manages the performance of the enterprise system. In this paper, a systematic distributed event based (DEB) performance monitoring approach is presented for distributed systems by measuring system variables (physical/virtual CPU utilization and memory utilization), application variables (application queue size, queue waiting time, and service time), and performance variables (response time, throughput, and power consumption) accurately with minimum latency at a specified rate. Furthermore, we have shown that the proposed monitoring approach can be utilized to provide input to an application monitoring utility to understand the underlying performance model of the system for successful on-line control of the distributed systems towards achieving predefined QoS parameters.",2011,0, 5552,Error detection scheme based on fragile watermarking for H.264/AVC,"Compressed video bitstreams are very sensitive to transmission errors. If we lose packets or receive them with errors during transmission, not only will the current frame be corrupted, but the error will also propagate to succeeding frames due to the spatiotemporal predictive coding structure. Error detection and concealment is a good approach to reduce the negative influence on the reconstructed visual quality. To increase concealment efficiency, we need a more accurate error detection algorithm. In this paper, we present a new error detection scheme based on a fragile watermarking algorithm to increase the error detection ratio. We verified that the proposed algorithm generates good performance in PSNR and objective visual quality through computer simulation with an H.324M mobile simulation set.",2011,0, 5553,Failure Avoidance through Fault Prediction Based on Synthetic Transactions,"System logs are an important tool in studying the conditions (e.g., environment misconfigurations, resource status, erroneous user input) that cause failures. However, production system logs are complex, verbose, and lack structural stability over time. These traits make them hard to use, and make solutions that rely on them susceptible to high maintenance costs. Additionally, logs record failures after they occur: by the time logs are investigated, users have already experienced the failures' consequences.
To detect the environment conditions that are correlated with failures without dealing with the complexities associated with processing production logs, and to prevent failure-causing conditions from occurring before the system goes live, this research suggests a three step methodology: (i) using synthetic transactions, i.e., simplified workloads, in pre-production environments that emulate user behavior, (ii) recording the result of executing these transactions in logs that are compact, simple to analyze, stable over time, and specifically tailored to the fault metrics of interest, and (iii) mining these specialized logs to understand the conditions that correlate to failures. This allows system administrators to configure the system to prevent these conditions from happening. We evaluate the effectiveness of this approach by replicating the behavior of a service used in production at Microsoft, and testing the ability to predict failures using a synthetic workload on a 650 million events production trace. The synthetic prediction system is able to predict 91% of real production failures using 50-fold fewer transactions and logs that are 10,000-fold more compact than their production counterparts.",2011,0, 5554,Autonomic SLA-Driven Provisioning for Cloud Applications,"Significant achievements have been made for automated allocation of cloud resources. However, the performance of applications may be poor in peak load periods, unless their cloud resources are dynamically adjusted. Moreover, although cloud resources dedicated to different applications are virtually isolated, performance fluctuations do occur because of resource sharing, and software or hardware failures (e.g. unstable virtual machines, power outages, etc.). In this paper, we propose a decentralized economic approach for dynamically adapting the cloud resources of various applications, so as to statistically meet their SLA performance and availability goals in the presence of varying loads or failures. According to our approach, the dynamic economic fitness of a Web service determines whether it is replicated or migrated to another server, or deleted. The economic fitness of a Web service depends on its individual performance constraints, its load, and the utilization of the resources where it resides. Cascading performance objectives are dynamically calculated for individual tasks in the application workflow according to the user requirements. By fully implementing our framework, we experimentally proved that our adaptive approach statistically meets the performance objectives under peak load periods or failures, as opposed to static resource settings.",2011,0, 5555,An evolutionary multiobjective optimization approach to component-based software architecture design,"The design of software architecture is one of the difficult tasks in the modern component-based software development which is based on the idea that develop software systems by assembling appropriate off-the-shelf components with a well-defined software architecture. Component-based software development has achieved great success and been extensively applied to a large range of application domains from realtime embedded systems to online web-based applications. In contrast to traditional approaches, it requires software architects to address a large number of non-functional requirements that can be used to quantify the operation of system. Moreover, these quality attributes can be in conflict with each other. 
In practice, software designers try to come up with a set of different architectural designs and then identify good architectures among them. With the increasing scale of architectures, this process becomes time-consuming and error-prone. Consequently, architects could easily end up with suboptimal designs because of the large and combinatorial search space. In this paper, we introduce the AQOSA (Automated Quality-driven Optimization of Software Architecture) toolkit, which integrates modeling technologies, performance analysis techniques, and advanced evolutionary multiobjective optimization algorithms (i.e. NSGA-II, SPEA2, and SMS-EMOA) to improve non-functional properties of systems in an automated manner.",2011,0, 5556,A Configurable Approach to Tolerate Soft Errors via Partial Software Protection,"Compared with hardware-based methods, software-based methods, which incur no additional hardware costs, are regarded as efficient methods to tolerate soft errors. Software-based methods, which are implemented by software protection, sacrifice performance. This paper proposes a new configurable approach, whose purpose is to balance system reliability and performance, to tolerate soft errors via partial software protection. The unprotected software regions, which are motivated by soft error masking at the software level, correspond to statically dead code, code whose probability of being executed is low, and some partially dead code. For the protected code, we copy every data item and perform every operation twice to ensure that the data stored into memory are correct. Additionally, we ensure that every branch instruction can jump to the right address by checking its condition and destination address. Finally, our approach is implemented by modification of the compiler. System reliability and performance are evaluated with different configurations. Experimental results demonstrate that our approach balances system reliability and performance.",2011,0, 5557,Exploiting Module Locality to Improve Software Fault Prediction,"Receiving bug reports, developers usually need to spend a significant amount of time resolving where to fix the faults. Although previous studies have shown that the revision frequency of a file location is an important measure to reflect the possibility of containing bugs, the frequency-based approaches achieve limited prediction accuracy for file locations having low revision frequencies. Our empirical observations show that files with low revision frequencies in the same file directory or package as files with high revision frequencies may be potential bug-fixing candidates for future bug reports. In this paper, we present a novel enhancement by exploiting module locality to improve the frequency-based approaches. Our experiments on three open source projects reveal that module locality can be employed to consistently improve the hit rate of a frequency-based approach and achieve the highest improvement of about 14%.",2011,0, 5558,Guaranteed Seamless Transmission Technique for NGEO Satellite Networks,"Non-geostationary (NGEO) satellite communication systems are able to provide global communication with reasonable latency and low terminal power requirements. However, the highly dynamic topology, large delays and error-prone links have been a matter of fact in satellite network studies.
This paper proposes a novel Guaranteed Seamless Transmission Technique (GST), which is a Hop-by-Hop scheme enhanced with the End-to-End scheme and associated with a link algorithm, which updates the link load explicitly and sends it back to the sources that use the link. We analyze GST theoretically by adopting a simple fluid model. The good performance of GST, in terms of bandwidth utilization, effective transmission ratio and fairness, is verified via a set of simulations.",2011,0, 5559,A new ontology for fault tolerance in QoS-enabled service oriented systems,The Semantic web today is considered to be the extended limit of the World Wide Web and in recent years there have been promising advancements with great potential in the field of semantic computing. We see semantic applications emerging not only in the web environment but in different industries to realize its utility in the explicit expression of data in enterprises. This proved to have a rather excellent outcome in semantic web implementations but the true power of the semantic web lies in service composition. Semantic web service composition could take the most advantage from the semantic data for finding the optimum combination of web services. Quality of Service (QoS) parameters are known as critical factors for selecting the web service with the best attributes but yet remain a statistic and metric form of data. QoS attributes are usually estimated through formulas and are mostly inaccurate which leads to composition failure and raises the need for fault tolerance and its mechanisms. So far there has been little work done to make fault tolerance more compatible with the semantic web. In this paper we propose a more profound solution for this problem with the use of semantic web technologies for fault tolerance. A new ontology for fault tolerance is introduced which essentially improves the web service composition performance by providing a fault detection and recovery guidance system.,2011,0, 5560,Journey: A Massively Multiplayer Online Game Middleware,"The design of massively multiplayer online games (MMOGs) is challenging because scalability, consistency, reliability, and fairness must be achieved while providing good performance and enjoyable gameplay. This article presents Journey, an MMOG middleware that hides the complexity of dealing with the aforementioned issues from the game programmer. Journey builds on top of a peer-to-peer network infrastructure to provide load-balancing, fault-tolerance, and cheat-detection capabilities centered on object-oriented technology. Experimental results show performance measurements obtained by running Journey on more than 200 machines.",2011,0, 5561,A middleware for reliable soft real-time communication over IEEE 802.11 WLANs,"This paper describes a middleware layer for soft real-time communication in wireless networks devised and realized using standard WLAN hardware and software. The proposed middleware relies on a simple network architecture comprising a number of stations that generate real-time traffic and a particular station, called Scheduler, that coordinates the transmission of the real-time packets using a polling mechanism. The middleware combines EDF scheduling with a dynamic adjustment of the maximum number of transmission attempts, so as to adapt the performance to fluctuations of the link quality, thus increasing the communication reliability, while taking deadlines into account.
After describing the basic communication paradigm and the underlying concepts of the proposed middleware, the paper describes the target network configuration and the software architecture. Finally, the paper assesses the effectiveness of the proposed middleware in terms of PER, On-Time Throughput, and Deadline Miss Rate, presenting the results of measurements performed on real testbeds.",2011,0, 5562,Control-flow error detection using combining basic and program-level checking in commodity multi-core architectures,"This paper presents a software-based technique to detect control-flow errors using basic-level control-flow checking and inherent redundancy in commodity multi-core processors. The proposed detection technique is composed of two phases of basic and program-level control-flow checking. Basic-level control-flow error detection is achieved by inserting additional instructions into the program at design time according to the control-flow graph. Previous research shows that modern superscalar microprocessors already contain significant amounts of redundancy. Program-level control-flow checking can detect CFEs by leveraging existing microprocessor redundancy. Therefore, the cost of adding extra redundancy for fault tolerance is eliminated. In order to evaluate the proposed technique, three workloads (quick sort, matrix multiplication and linked list) were run on a multi-core processor, and a total of 6000 transient faults were injected into the processor. The proposed technique shows advantages in terms of performance and memory overheads and detection capability compared with conventional control-flow error detection techniques.",2011,0, 5563,A Random search based effective algorithm for pairwise test data generation,"Testing is a very important task to build error-free software. As the resources and time to market are limited for a software product, it is impossible to perform exhaustive testing, i.e., to test all combinations of input data. To reduce the number of test cases to an acceptable level, it is preferable to use a higher interaction level (t-way, where t ≥ 2). Pairwise (2-way or t = 2) interaction can find most of the software faults. This paper proposes an effective random search based pairwise test data generation algorithm named R2Way to optimize the number of test cases. A Java program has been used to test the performance of the algorithm. The algorithm is able to support both uniform and non-uniform values effectively with performance better than the existing algorithms/tools in terms of the number of generated test cases and time consumption.",2011,0, 5564,Regression Test Selection Techniques for Test-Driven Development,"Test-Driven Development (TDD) is characterized by repeated execution of a test suite, enabling developers to change code with confidence. However, running an entire test suite after every small code change is not always cost effective. Therefore, regression test selection (RTS) techniques are important for TDD. Particularly challenging for TDD is the task of selecting a small subset of tests that are most likely to detect a regression fault in a given small and localized code change. We present cost-bounded RTS techniques based on both dynamic program analysis and natural-language analysis. We implemented our techniques in a tool called Test Rank, and evaluated its effectiveness on two open-source projects.
We show that using these techniques, developers can accelerate their development cycle, while maintaining a high bug detection rate, whether actually following TDD, or in any methodology that combines testing during development.",2011,0, 5565,A Principled Evaluation of the Effect of Directed Mutation on Search-Based Statistical Testing,"Statistical testing generates test inputs by sampling from a probability distribution that is carefully chosen so that the inputs exercise all parts of the software being tested. Sets of such inputs have been shown to detect more faults than test sets generated using traditional random and structural testing techniques. Search-based statistical testing employs a metaheuristic search algorithm to automate the otherwise labour-intensive process of deriving the probability distribution. This paper proposes an enhancement to this search algorithm: information obtained during fitness evaluation is used to direct the mutation operator to those parts of the representation where changes may be most beneficial. A principled empirical evaluation demonstrates that this enhancement leads to a significant improvement in algorithm performance, and so increases both the cost-effectiveness and scalability of search-based statistical testing. As part of the empirical approach, we demonstrate the use of response surface methodology as an effective and objective method of tuning algorithm parameters, and suggest innovative refinements to this methodology.",2011,0, 5566,An Evaluation of Mutation and Data-Flow Testing: A Meta-analysis,"Mutation testing is a fault-based testing technique for assessing the adequacy of test cases in detecting synthetic faulty versions injected to the original program. The empirical studies report the effectiveness of mutation testing. However, the inefficiency of mutation testing has been the major drawback of this testing technique. Though a number of studies compare mutation to data flow testing, the summary statistics for measuring the magnitude order of effectiveness and efficiency of these two testing techniques has not been discussed in literature. In addition, the validity of each individual study is subject to external threats making it hard to draw any general conclusion based solely on a single study. This paper introduces a novel meta-analytical approach to quantify and compare mutation and data flow testing techniques based on findings reported in research articles. We report the results of two statistical meta-analyses performed on comparing and measuring the effectiveness as well as efficiency of mutation and data-flow testing based on relevant empirical studies. We focus on the results of three empirical research articles selected from the premier venues with their focus on comparing these two testing techniques. The results show that mutation is at least two times more effective than data-flow testing, i.e., odds ratio= 2.27. However, mutation is three times less efficient than data-flow testing, i.e., odds ratio= 2.94.",2011,0, 5567,An Experience Report on Using Code Smells Detection Tools,"Detecting code smells in the code and consequently applying the right refactoring steps when necessary is very important to improve the quality of the code. Different tools have been proposed for code smell detection, each one characterized by particular features. The aim of this paper is to describe our experience on using different tools for code smell detection. 
We outline the main differences among them and the different results we obtained.",2011,0, 5568,On Investigating Code Smells Correlations,"Code smells are characteristics of the software that may indicate a code or design problem that can make software hard to evolve and maintain. Detecting and removing code smells, when necessary, improves the quality and maintainability of a system. Usually detection techniques are based on the computation of a particular set of combined metrics, or standard object-oriented metrics or metrics defined ad hoc for the smell detection. The paper investigates the direct and indirect correlations existing between smells. If one code smell exists, this can imply the existence of another code smell, or if one smell exists, another one cannot be there, or perhaps it could observe that some code smells tend to go together.",2011,0, 5569,Assessing the Impact of Using Fault Prediction in Industry,"Software developers and testers need realistic ways to measure the practical effects of using fault prediction models to guide software quality improvement methods such as testing, code reviews, and refactoring. Will the availability of fault predictions lead to discovery of different faults, or to more efficient means of finding the same faults? Or do fault predictions have no practical impact at all? In this challenge paper we describe the difficulties of answering these questions, and the issues involved in devising meaningful ways to assess the impact of using prediction models. We present several experimental design options and discuss the pros and cons of each.",2011,0, 5570,An Empirical Study on Object-Oriented Metrics and Software Evolution in Order to Reduce Testing Costs by Predicting Change-Prone Classes,"Software maintenance cost is typically more than fifty percent of the cost of the total software life cycle and software testing plays a critical role in reducing it. Determining the critical parts of a software system is an important issue, because they are the best place to start testing in order to reduce cost and duration of tests. Software quality is an important key factor to determine critical parts since high quality parts of software are less error prone and easy to maintain. As object oriented software metrics give important evidence about design quality, they can help software engineers to choose critical parts, which should be tested firstly and intensely. In this paper, we present an empirical study about the relation between object oriented metrics and changes in software. In order to obtain the results, we analyze modifications in software across the historical sequence of open source projects. Empirical results of the study indicate that the low level quality parts of a software change frequently during the development and management process. Using this relation we propose a method that can be used to estimate change-prone classes and to determine parts which should be tested first and more deeply.",2011,1, 5571,Modeling the Diagnostic Efficiency of Regression Test Suites,"Diagnostic performance, measured in terms of the manual effort developers have to spend after faults are detected, is not the only important quality of a diagnosis. Efficiency, i.e., the number of tests and the rate of convergence to the final diagnosis is a very important quality of a diagnosis as well. In this paper we present an analytical model and a simulation model to predict the diagnostic efficiency of test suites when prioritized with the information gain algorithm. 
We show that, besides the size of the system itself, an optimal coverage density and a uniform coverage distribution are needed to achieve an efficient diagnosis. Our models allow us to decide whether using IG with our current test suite will provide a good diagnostic efficiency, and enable us to define criteria for the generation or improvement of test suites.",2011,0, 5572,A Diagnostic Point of View for the Optimization of Preparation Costs in Runtime Testing,"Runtime testing is emerging as the solution for the validation and acceptance testing of service-oriented systems, where many services are external to the organization, and duplicating the system's components and their context is too complex, if possible at all. In order to perform runtime tests, an additional expense in the test preparation phase is required, both in software development and in hardware. Preparation cost prioritization methods have been based on runtime testability (i.e., coverage) and do not consider whether a good runtime testability is sufficient for a good runtime diagnosis quality in case faults are detected, and whether this diagnosis will be obtained efficiently (i.e., with a low number of test cases). In this paper we show (1) the direct relationship between testability and diagnosis quality, (2) that these two properties do not guarantee an efficient diagnosis, and (3) a measurement that ensures better prediction of efficiency.",2011,0, 5573,Efficient FPGA implementation of a high-quality super-resolution algorithm with real-time performance,"Nowadays, most image content is recorded in high resolution (HR). Nevertheless, there is still a need for low resolution (LR) content presentation when recapturing is unviable. As high resolution content becomes commonplace (with HDTV penetration in the US estimated at 65% for 2010) the viewers' expectations on quality and presentation become higher and higher. Thus, in order to meet today's consumer expectations the LR content quality has to be enhanced before being displayed. Content pre-processing/upscaling methods include interpolation and Super-Resolution Image Reconstruction (SRIR). SRIR hardware implementations are scarce, mainly due to high memory and performance requirements. This is the challenge we address. In this work an efficient super-resolution core implementation with high quality of the super-resolved outcome is presented. In order to provide high outcome quality the implementation is based on state-of-the-art software. The software algorithm is presented and its output quality compared with other state-of-the-art solutions. The results of the comparison show that the software provides superior output quality. When implemented in the targeted FPGA device, the system operating frequency was estimated at 109 MHz. This resulted in a performance that allows dynamic 2x super resolution of QCIF YUV 4:2:0p sequences at a frame rate of 25 fps, leading to real-time execution using only on-chip device memory. Post-layout simulations with back-annotated timing proved that the hardware implementation is capable of producing the same output quality as the base software.",2011,0, 5574,A real-time 1080p 2D-to-3D video conversion system,"In this paper, we demonstrate a 2D-to-3D video conversion system capable of real-time 1920×1080p conversion. The proposed system generates 3D depth information by fusing cues from edge feature-based global scene depth gradient and texture-based local depth refinement.
By combining the global depth gradient and local depth refinement, the generated 3D images have comfortable and vivid quality, and the algorithm has very low computational complexity. The software is based on a system with a multi-core CPU and a GPU. To optimize performance, we use several techniques including unified streaming dataflow, multi-thread schedule synchronization, and GPU acceleration for depth image-based rendering (DIBR). With the proposed method, real-time 1920×1080p 2D-to-3D video conversion running at 30 fps is then achieved.",2011,0, 5575,Detection of Sleeping Cells in LTE Networks Using Diffusion Maps,"In mobile networks, the emergence of failures is caused by various breakdowns of hardware and software elements. One of the serious failures in radio networks is a Sleeping Cell. In our work, one of the possible root causes for the appearance of this network failure is simulated in a dynamic network simulator. The main aim of the research is to detect the presence of a Sleeping Cell in the network and to define its location. For this purpose, the Diffusion Maps data mining technique is employed. The developed fault identification framework uses the performance characteristics of the network, collected during its regular operation, and for that reason it can be implemented in real Long Term Evolution (LTE) networks within the Self-Organizing Networks (SON) concept.",2011,0, 5576,Towards Service Level Engineering for IT Services: Defining IT Services from a Line of Business Perspective,"Today, the management of service quality is posing a major challenge for many service systems formed by providers and their business customers. We argue that the trade-off between service costs and benefits incurred by both of these parties is not sufficiently considered when service quality is stipulated. Many Service Level Agreements are tailored to fine-grained IT services. The impact of service levels defined for these technical services on customers' business processes, however, is difficult to estimate. Thus, it is a major objective to identify IT services that directly affect the performance of customers' business departments. In this research-in-progress paper we present first results of an empirical study aiming at the definition of IT services and corresponding service level indicators from a customer business department perspective. Based on an initial literature research and a number of semi-structured interviews - with users working in different departments of a public IT customer and having different backgrounds and IT knowledge - we have identified a set of common, ""directly business-relevant"" IT services. Thus, we take an important first step towards the application of Service Level Engineering, i.e. the derivation of business-relevant performance metrics and associated cost-efficient target values to precisely identify efficient service quality.",2011,0, 5577,Boundless memory allocations for memory safety and high availability,"Spatial memory errors (like buffer overflows) are still a major threat for applications written in C. Most recent work focuses on memory safety - when a memory error is detected at runtime, the application is aborted. Our goal is not only to increase the memory safety of applications but also to increase the application's availability. Therefore, we need to tolerate spatial memory errors at runtime. We have implemented a compiler extension, Boundless, that automatically adds the tolerance feature to C applications at compile time.
We show that this can increase the availability of applications. Our measurements also indicate that Boundless has a lower performance overhead than SoftBound, a state-of-the-art approach to detect spatial memory errors. Our performance gains result from a novel way to represent pointers. Nevertheless, Boundless is compatible with existing C code. Additionally, Boundless provides a trade-off to reduce the runtime overhead even further: We introduce vulnerability specific patching for spatial memory errors to tolerate only known vulnerabilities. Vulnerability specific patching has an even lower runtime overhead than full tolerance.",2011,0, 5578,A methodology for the generation of efficient error detection mechanisms,"A dependable software system must contain error detection mechanisms and error recovery mechanisms. Software components for the detection of errors are typically designed based on a system specification or the experience of software engineers, with their efficiency typically being measured using fault injection and metrics such as coverage and latency. In this paper, we introduce a methodology for the design of highly efficient error detection mechanisms. The proposed methodology combines fault injection analysis and data mining techniques in order to generate predicates for efficient error detection mechanisms. The results presented demonstrate the viability of the methodology as an approach for the development of efficient error detection mechanisms, as the predicates generated yield a true positive rate of almost 100% and a false positive rate very close to 0% for the detection of failure-inducing states. The main advantage of the proposed methodology over current state-of-the-art approaches is that efficient detectors are obtained by design, rather than by using specification-based detector design or the experience of software engineers.",2011,0, 5579,Aaron: An adaptable execution environment,"Software bugs and hardware errors are the largest contributors to downtime, and can be permanent (e.g. deterministic memory violations, broken memory modules) or transient (e.g. race conditions, bitflips). Although a large variety of dependability mechanisms exist, only few are used in practice. The existing techniques do not prevail for several reasons: (1) the introduced performance overhead is often not negligible, (2) the gained coverage is not sufficient, and (3) users cannot control and adapt the mechanism. Aaron tackles these challenges by detecting hardware and software errors using automatically diversified software components. It uses these software variants only if CPU spare cycles are present in the system. In this way, Aaron increases fault coverage without incurring a perceivable performance penalty. Our evaluation shows that Aaron provides the same throughput as an execution of the original application while checking a large percentage of requests - whenever load permits.",2011,0, 5580,A new framework for call admission control in wireless cellular network,"Managing the limited amount of the radio spectrum is an important issue with increasing demand of the same. In recent work, we have introduced MAS (Multi-agent System) for channel assignment problem in wireless cellular networks. Instead of using a base station directly for negotiation, a multi-agent system comprising of software agents was designed to work at base station. The system consists of a collection of layers to take care of local and global scenario. 
In this paper we propose the combination of MAS with a new call admission control (CAC) mechanism based on fuzzy control. This paper aims to provide improvement in QoS parameters using fuzzy control at call admission level. From the simulation studies it is observed that combined approach of Multi-agent system and fuzzy control at initial level improves channel allocation and other QoS factors in an effective and efficient manner. The simulation results are presented on a benchmark 49 cell environment with 70 channels that validate the performance of this approach.",2011,0, 5581,An epidemic approach to dependable key-value substrates,"The sheer volumes of data handled by today's Internet services demand uncompromising scalability from the persistence substrates. Such demands have been successfully addressed by highly decentralized key-value stores invariably governed by a distributed hash table. The availability of these structured overlays rests on the assumption of a moderately stable environment. However, as scale grows with unprecedented numbers of nodes the occurrence of faults and churn becomes the norm rather than the exception, precluding the adoption of rigid control over the network's organization. In this position paper we outline the major ideas of a novel architecture designed to handle today's very large scale demand and its inherent dynamism. The approach rests on the well-known reliability and scalability properties of epidemic protocols to minimize the impact of churn. We identify several challenges that such an approach implies and speculate on possible solutions to ensure data availability and adequate access performance.",2011,0, 5582,Automated vulnerability discovery in distributed systems,"In this paper we present a technique for automatically assessing the amount of damage a small number of participant nodes can inflict on the overall performance of a large distributed system. We propose a feedback-driven tool that synthesizes malicious nodes in distributed systems, aiming to maximize the performance impact on the overall behavior of the distributed system. Our approach focuses on the interface of interaction between correct and faulty nodes, clearly differentiating the two categories. We build and evaluate a prototype of our approach and show that it is able to discover vulnerabilities in real systems, such as PBFT, a Byzantine Fault Tolerant system. We describe a scenario generated by our tool, where even a single malicious client can bring a BFT system of over 250 nodes down to zero throughput.",2011,0, 5583,DynaPlan: Resource placement for application-level clustering,"Creating a reliable computing environment from an unreliable infrastructure is a common challenge. Application-Level High Availability (HA) clustering addresses this problem by relocating and restarting applications when failures are detected. Current methods of determining the relocation target(s) of an application are rudimentary in that they do not take into account the myriad factors that influence an optimal placement. This paper presents DynaPlan, a method that improves the quality of failover planning by allowing the expression of a wide and extensible range of considerations, such as multidimensional resource consumption and availability, architectural compatibility, security constraints, location constraints, and policy considerations, such as energy-favoring versus performance-favoring. 
DynaPlan has been implemented by extending the IBM PowerHA clustering solution running on a group of IBM System P servers. In this paper, we describe the design, implementation, and preliminary performance evaluation of DynaPlan.",2011,0, 5584,SOFAS: A Lightweight Architecture for Software Analysis as a Service,"Access to data stored in software repositories by systems such as version control, bug and issue tracking, or mailing lists is essential for assessing the quality of a software system. A myriad of analyses exploiting that data have been proposed throughout the years: source code analysis, code duplication analysis, co-change analysis, bug prediction, or detection of bug fixing patterns. However, easy and straight forward synergies between these analyses rarely exist. To tackle this problem we have developed SOFAS, a distributed and collaborative software analysis platform to enable a seamless interoperation of such analyses. In particular, software analyses are offered as Restful web services that can be accessed and composed over the Internet. SOFAS services are accessible through a software analysis catalog where any project stakeholder can, depending on the needs or interests, pick specific analyses, combine them, let them run remotely and then fetch the final results. That way, software developers, testers, architects, or quality assurance experts are given access to quality analysis services. They are shielded from many peculiarities of tool installations and configurations, but SOFAS offers them sophisticated and easy-to-use analyses. This paper describes in detail our SOFAS architecture, its considerations and implementation aspects, and the current set of implemented and offered Restful analysis services.",2011,0, 5585,Assessing Suitability of Cloud Oriented Platforms for Application Development,"The enterprise data centers and software development teams are increasingly embracing the cloud oriented and virtualized computing platforms and technologies. As a result it is no longer straight forward to choose the most suitable platform which may satisfy a given set of Non-Functional Quality Attributes (NFQA) criteria that is significant for an application. Existing methods such as Serial Evaluation and Consequential Choice etc. are inadequate as they fail to capture the objective measurement of various criteria that are important for evaluating the platform alternatives. In practice, these methods are applied in an ad-hoc fashion. In this paper we introduce three application development platforms: 1) Traditional non-cloud 2) Virtualized and 3) Cloud Aware. We propose a systematic method that allows the stakeholders to evaluate these platforms so as to select the optimal one by considering important criteria. We apply our evaluation method to these platforms by considering a certain (non-business) set of NFQAs. We show that the pure cloud oriented platforms fare no better than the traditional non-cloud and vanilla virtualized platforms in case of most NFQAs.",2011,0, 5586,A Low-Power High-Performance Concurrent Fault Detection Approach for the Composite Field S-Box and Inverse S-Box,"The high level of security and the fast hardware and software implementations of the Advanced Encryption Standard have made it the first choice for many critical applications. Nevertheless, the transient and permanent internal faults or malicious faults aiming at revealing the secret key may reduce its reliability. 
In this paper, we present a concurrent fault detection scheme for the S-box and the inverse S-box as the only two nonlinear operations within the Advanced Encryption Standard. The proposed parity-based fault detection approach is based on the low-cost composite field implementations of the S-box and the inverse S-box. We divide the structures of these operations into three blocks and find the predicted parities of these blocks. Our simulations show that except for the redundant units approach which has the hardware and time overheads of close to 100 percent, the fault detection capabilities of the proposed scheme for the burst and random multiple faults are higher than the previously reported ones. Finally, through ASIC implementations, it is shown that for the maximum target frequency, the proposed fault detection S-box and inverse S-box in this paper have the least areas, critical path delays, and power consumptions compared to their counterparts with similar fault detection capabilities.",2011,0, 5587,10-Gbps IP Network Measurement System Based on Application-Generated Packets Using Hardware Assistance and Off-the-Shelf PC,"Targeting high-bandwidth applications such as video streaming services, we discuss advanced measurement systems for high-speed 10-Gbps networks. To verify service stability in such high-speed networks, we need to detect network quality under real environmental conditions. For example, test traffic injected into networks under-test for measurements should have the same complex characteristics as the video streaming traffic. For such measurements, we have built Internet protocol (IP) stream measurement systems by using our 10-Gbps network interface card with hardware-assisted active/passive monitor extensions based on low-cost off-the-shelf personal computers (PCs). After showing hardware requirements and our implementation of each hardware-assisted extensions, we report how we build pre-service and in-service network measurement systems to verify the feasibility of our hardware architecture. A traffic-playback system captures packets and stores traffic characteristics data without sampling any packets and then sends them precisely emulating the complex characteristics of the original traffic by using our hardware assistance. The generated traffic is useful as test traffic in pre-service measurement. A distributed in-service network monitoring system collects traffic characteristics at multiple sites by utilizing synchronized precise timestamps embedded in video streaming traffic. The results are presented on the operator's display. We report on their effectiveness by measuring 1.5-Gbps uncompressed high-definition television traffic flowing in the high-speed testbed IP network in Japan.",2011,0, 5588,AMBA to SoCWire network on Chip bridge as a backbone for a Dynamic Reconfigurable Processing unit,"Instruments on spacecrafts or even complete payload data handling units are typically controlled by a dedicated data processing unit. This data processing unit exercises control of subsystems and telecommand processing as well as processing of acquired science data. With increasing detector resolutions and measurement speeds the processing demands are rising rapidly. To fulfill these increasing demands under the constraints of limited power budgets, a dedicated hardware design as a System on Chip (SoC), e.g. in a FPGA, has been shown to be a viable solution. 
In previous papers we have presented our Network on Chip (NoC) solution SoCWire as a highly efficient and reliable approach for dynamic reconfigurable architectures. However, the control task still requires a sequential processor which is able to execute software. The LEON processor is available in a fault tolerant version suitable for space applications and is accessible as an open source design. This paper presents an efficient solution for a combined NoC and classic processor bus-based communication architecture, i.e. the AHB2SOCW bridge as an efficient connection between a SoCWire network and a LEON processor bus system. Direct memory access enables the AHB2SOCW bridge to operate efficiently. The resource utilization and obtainable data rates are presented as well as an outlook for the intended target application, which is an efficient SoC controller for a reconfigurable processing platform based on FPGAs.",2011,0, 5589,Earthquake response of an arch bridge under near-fault ground motions,"During the Wenchuan earthquake in 2008, some of the arch bridges in the seismic zone were damaged. To assess the performance of an arch bridge under near-fault ground motions, both experimental and numerical studies are conducted in this paper. Firstly, an ambient vibration test of a real typical arch bridge is carried out to measure the dynamic characteristics of the structure and calibrate the finite-element model. Then the earthquake response of the bridge is analyzed using the FE software MIDAS based on ground motion recorded in the Wenchuan earthquake. The study shows that the joint between deck and tie at 1/4 span and 3/4 span, the joint between beam and arch and the middle of the beam are weak links under near-fault ground motions.",2011,0, 5590,Study on the relationship between aerosol anthropogenic component and air quality in the city of Wuhan,"Problems with air pollution in urban areas have been known for a long time. Air quality has an enormous impact on people's daily life. The Wuhan Environmental Protection Bureau currently uses ground sensors to monitor air quality in the city of Wuhan. While this method provides accurate information at specific locations, it does not provide a clear indication of conditions over the whole region. Measurements from satellite imagery have the potential to provide timely air quality data for large swaths of land. Satellite remote sensing of air quality has evolved dramatically over the last decade. Global observations are now available for a wide range of species including aerosols, O3, NO2, CO and SO2. Previous studies show strong correlations between MODIS-derived Aerosol Optical Depth (AOD) and surface PM (particulate matter) measurements, which are an important indicator of air quality. However, this relationship is not so significant from region to region. The reason is that, although the basic transfer theory in the atmosphere is well understood, its application in urban environments is less than adequate when the domain controlling parameters and their coupling mechanisms are not well defined. Satellite instruments do not measure the aerosol chemical composition needed to discriminate anthropogenic from natural aerosol components. But the ability of new satellite instruments to distinguish fine (submicron) from coarse (supermicron) aerosols over the oceans serves as a signature of the anthropogenic component and can be used to estimate the fraction of anthropogenic aerosols.
In this paper, we extended this method to a continental area, the city of Wuhan. By collecting the MOD04 products over the city of Wuhan in 2005, we extracted the aerosol optical thickness (AOT) at 550nm (τ550nm) and the fine mode fraction (FMF), which were used to calculate the aerosol anthropogenic component (τant). The ground-based air quality indexes, such as PM10, NO2 and SO2, were also collected synchronously. The pre-processing of the MOD04 data was performed using programs developed in the IDL language. Statistical analysis was conducted in MATLAB (version 7.8) software to establish linear regression models of the relationships between τant and the air-quality indexes. Correspondingly, the relationships between τ550nm and the air-quality indexes were also explored. It showed that the former relationships were much stronger than the latter, and that the main factor affecting air quality in the city of Wuhan changed from season to season.",2011,0, 5591,H.264/AVC rate control with enhanced rate-quantisation model and bit allocation,"Rate control algorithms aim to achieve perceivable video quality for real-time video communication applications under some real-world constraints, such as bandwidth, delay, buffer sizes etc. This study presents an adaptive and efficient rate control framework for low-delay H.264/AVC video communication. At the basic unit level, to realise quantisation decision improvement, a novel rate-quantisation (R-Q) model is proposed for P frames. This P frame R-Q model is established through the ρ domain rate control theory to compute quantisation according to target bit assignment. In addition, an I frame R-Q model is presented to measure frame level coding activity for the quantisation decision without performing computationally intensive intra-prediction, which helps alleviate the frame skipping problem. When compared with Joint Video Team (JVT)-G012, a rate control scheme adopted by the JVT H.264/AVC reference software JM12.3, the proposed rate control framework shows good performance in terms of improved average luminance peak signal noise ratio (PSNR), reduced number of skipped frames and smaller mismatch between target and actual bit rates.",2011,0, 5592,A cost-effective Software Development and Validation environment and approach for LEON based satellite & payload subsystems,"This paper describes a configurable Software Development and Validation Facility (SDVF), designed to support the lifecycle of spacecraft equipment containing on board software (OBSW), in our case LEON-based units, with a cost-effective approach. The environment fulfils three main purposes: 1. Software Development and Verification Facility (SDVF) for OBSW 2. Simulation for performance and design feasibility assessment 3. Special Check-Out Equipment (SCOE) for hardware integration and testing With a modular design, it can be modified into different configurations, from a full-virtual environment containing a processor emulator and a set of simulation models, to a processor in the loop (PIL) or complete hardware in the loop (HIL) configuration. As the use case is not limited to the SDVF only, we called the environment ""EGSE-Lite"". The key design characteristics of the system are: 1) Easy configurability to use virtual models or real hardware versions of system components or a mix of them: A central ""Packet Switch"" interconnects the modules and allows an easy switch from one configuration to another via a configuration file. This is possible as the modules, either virtual or hardware via their interface driver, share a common interface protocol. 2) Simple coupling of interfaces among the equipment modules, based on TCP/IP sockets. The solution allows distributing the modules on different host machines if needed.
3) Use of ESA's SCOS-2000 software as Central Check out System (CCS): The integration of SCOS allows easy system level testing of the software, to edit and send commands, verify telemetry results on displays, archive telemetry for offline analysis and enable automated regression testing. It also enforces compatibility of the equipment under development with the final system EGSE or mission control system, if based on SCOS2000. 4) Packet sniffing and recording capability at different levels with capability to inject corrupted packets for robustness validation of the system. 5) Use of Eclipse and the LEON Integrated Development Environment (LIDE) available from Gaisler Research, providing a development environment with a GDB interface for debugging. 6) Possibility to use either a virtual LEON emulator, such as Gaisler Research TSIM, or a hybrid hardware/software emulator, or the final LEON processor board. Use of COTS I/O cards such as a PCI-SpaceWire interface card and other standard interfaces to connect to the hardware equipment. The facility has been successfully deployed to CESR, an institute in France for astrophysics and payload development and integration, providing emulation of a LEON3 processor, high fidelity models for X-ray detectors and a NAND based fault tolerant mass memory unit, each replaceable with its hardware counterpart (the processor board was based on Aeroflex LEON3 UT699). The environment is currently being integrated with a LEON2 hybrid hardware/software emulator developed by ASTRIUM under ESA contract (LeonSvf): The objective is to perform a characterization study of the emulator card using computationally intensive flight representative software developed for CESR. The EGSE-Lite takes full advantage of COTS products (such as Gaisler TSIM) and ESA software products (such as SCOS 2000) to provide a lightweight, scalable and cost-efficient solution for integration and validation of satellite subsystems and payload equipment containing embedded software.",2011,0, 5593,Software component quality-finite mixture component model using Weibull and other mathematical distributions,"Software component quality has a major influence on software development project performances such as lead-time, time to market and cost. It also affects the other projects within the organization, the people assigned to the projects and the organization in general. In this study a finite mixture of several mathematical distributions is used to describe the fault occurrence in the system based on individual software component contribution. Several examples are selected to demonstrate model fitting and comparison between the models. Four case studies are presented and evaluated for modeling software quality in very large development projects within the AXE platform, with BICC as a call control protocol, in the Ericsson Nikola Tesla R&D.",2011,0, 5594,Identification method research of invalid questionnaire based on partial least squares regression,"Researchers often apply the survey questionnaire as a measurement tool to verify hypothetical propositions and structural models in the humanities and social sciences; therefore, the data quality of the survey questionnaire directly affects the scientific validity of research propositions and models. The data quality of the survey questionnaire can be divided into three important components in the research procedure.
Firstly, the quality of the survey questionnaire is the foundation for theoretical research; secondly, the reliability of the survey data plays an important role in practical application; thirdly, the scientific validity of the analyzed data is a guarantee for empirical research. These components are inseparable and consistent; the paper classifies the data using quantitative and qualitative data identification as the classification criterion. Thus, the identification method for invalid questionnaires is divided into two steps: qualitative identification and quantitative identification. The identification method for invalid questionnaires based on partial least squares (PLS) regression applies the basic theory of PLS and the SIMCA-P software to form ellipse or ellipsoid graphs, which display specific points outside the graphs. In order to remove the data of invalid questionnaires, the specific points outside the graphs are deleted. The paper uses empirical data to verify the feasibility and scientific validity of the method in a Customer Satisfaction Index Model. Practice shows the value of the method in theoretical research and practical application.",2011,0, 5595,Design of Three Phase Network Parameter Monitoring System based on 71M6513,"In this paper, the design process of a Three Phase Network Parameter Monitoring System is analyzed and the overall research scheme and design idea of the system are expounded. This paper mainly studies how to improve the accuracy of the input voltage and current. The front-end voltage step-down circuit, current step-down circuit and remote data transmission circuit are designed. The programs for data acquisition and calculation are developed; real-time detection of power network parameters, energy calculation and remote data transmission are achieved. Based on the .Net platform, the host computer software is designed to receive the data and store them in the database; the mean values of power and energy parameters are calculated by day, week and month and are available to the user for inquiry. Many parameters such as three-phase voltage, three-phase current, frequency, active power, reactive power and voltage harmonics can be detected by the system, which will provide a reliable basis for power quality analysis.",2011,0, 5596,Reasoning about Faults in Aspect-Oriented Programs: A Metrics-Based Evaluation,"Aspect-oriented programming (AOP) aims at facilitating program comprehension and maintenance in the presence of crosscutting concerns. Aspect code is often introduced and extended as software projects evolve. Unfortunately, we still lack a good understanding of how faults are introduced in evolving aspect-oriented programs. More importantly, there is little knowledge of whether existing metrics are related to typical fault introduction processes in evolving aspect-oriented code. This paper presents an exploratory study focused on the analysis of how faults are introduced during maintenance tasks involving aspects. The results indicate a recurring set of fault patterns in this context, which can better inform the design of future metrics for AOP. We also pinpoint AOP-specific fault categories which are difficult to detect with popular metrics for fault-proneness, such as coupling and code churn.",2011,0, 5597,Adding Process Metrics to Enhance Modification Complexity Prediction,"Software estimation is used in various contexts including cost, maintainability or defect prediction.
To make the estimate, different models are usually applied based on attributes of the development process and the product itself. However, often only one type of attribute is used, such as historical process data or product metrics, and their combination is rarely employed. In this report, we present a project in which we started to develop a framework for such complex measurement of software projects, which can be used to build combined models for different estimations related to software maintenance and comprehension. First, we performed an experiment to predict modification complexity (cost of a unity change) based on a combination of process and product metrics. We observed promising results that confirm the hypothesis that a combined model performs significantly better than any of the individual measurements.",2011,0, 5598,Design Defects Detection and Correction by Example,"Detecting and fixing defects makes programs easier for developers to understand. We propose an automated approach for the detection and correction of various types of design defects in source code. Our approach automatically finds detection rules, thus relieving the designer from doing so manually. Rules are defined as combinations of metrics/thresholds that better conform to known instances of design defects (defect examples). The correction solutions, a combination of refactoring operations, should minimize, as much as possible, the number of defects detected using the detection rules. In our setting, we use genetic programming for rule extraction. For the correction step, we use a genetic algorithm. We evaluate our approach by finding and fixing potential defects in four open-source systems. For all these systems, we found, on average, more than 80% of known defects, a better result when compared to a state-of-the-art approach, where the detection rules are manually or semi-automatically specified. The proposed corrections fix, on average, more than 78% of detected defects.",2011,0, 5599,An Empirical Study of the Impacts of Clones in Software Maintenance,"The impact of clones on software maintenance is the subject of a long-lived debate on whether clones are beneficial or not. Some researchers argue that clones lead to additional changes during the maintenance phase and thus increase the overall maintenance effort. Moreover, they note that inconsistent changes to clones may introduce faults during evolution. On the other hand, other researchers argue that cloned code exhibits more stability than non-cloned code. Studies resulting in such contradictory outcomes may be a consequence of using different methodologies, using different clone detection tools, defining different impact assessment metrics, and evaluating different subject systems. In order to understand the conflicting results from the studies, we plan to conduct a comprehensive empirical study using a common framework incorporating nine existing methods that yielded mostly contradictory findings. Our research strategy involves implementing each of these methods using four clone detection tools and evaluating the methods on more than fifteen subject systems of different languages and of a diverse nature.
We believe that our study will help eliminate tool and study biases to resolve conflicts regarding the impacts of clones on software maintenance.",2011,0, 5600,Towards accurate and agile link quality estimation in wireless sensor networks,"Link quality estimation has been an important research topic in the wireless sensor networking community and researchers have developed a large number of different methods to estimate link quality. The commonly used CC2420 radio provides simple signal quality indicators. These are agile in that they react fast to changing link quality but they are inaccurate under complicated channel conditions. More sophisticated link quality estimators combine these simple metrics with packet reception statistics collected by the network stack. These approaches compensate the hardware-based metrics to a limited degree but they compromise agility and incur extra overhead. In this paper, we take a novel approach and develop a number of link quality metrics using a software defined radio. We evaluate our metrics under several channel conditions. The results show that they have good accuracy and can handle complicated channel conditions if combined properly.",2011,0, 5601,An integrated apparatus to measure Mallampati score for the characterization of Obstructive Sleep Apnea (OSA),"Obstructive Sleep Apnea (OSA) affects as many as 1 in every 5 adults and has the potential to cause serious long-term health complications such as, cardiovascular disease, stroke, hypertension and the consequent reduced quality of life. Studies have shown that the probability of having OSA increases with a higher BMI irrespective of gender and that there is a definite link concerning the race of the patient and having OSA. This paper describes the design of an integrated apparatus to collect Mallampati scores with little human intervention and perform automatic processing of various parameters. The system permits life-cycle studies on patients with OSA and other sleep disorders.",2011,0, 5602,NMSIS - SCEN,"The network laboratories management and maintenance processes consist of several tasks which may interfere with the normal network operation. The NMSIS - Network Management System with Imaging Support is a management tool that enables the remote installation of software images, without compromising the network performance. This article describes a new NMSIS module, the SCEN. By integrating the Simple Event Correlator (SEC) and extending the alarms correlation, SCEN enables the detection and isolation of faults in a network. With SCEN, the NMSIS proved effectiveness in alarms processing, fault identification and clearing, in scenarios where link, systems and services failures occur.",2011,0, 5603,Building the pillars for the definition of a data quality model to be applied to the artifacts used in the Planning Process of a Software Development Project,"The success of a software development project is mainly dependent on the quality of the used artifacts along the project; this quality is reliant on the contents of the artifacts, as well as the level of quality of the data values corresponding to the metadata that describe the artifacts. In order to assess both kind of qualities, it should be therefore taken into account the artifacts' structure and metadata. 
This paper proposes a DQ model that can be used as a reference by project managers to assess and, if necessary, improve the quality of the data values corresponding to the metadata describing the artifacts used in the process of planning a software development project. For our research, we have identified the corresponding artifacts from those described as part of the Planning Process defined in international standard ISO / IEC 12207:2008. We have aligned the identified artifacts with those proposed by PMBOK, in order to better depict their structure; and finally, we build our data quality model upon the DQ dimensions proposed by Strong, D. M., Y. W. Lee and Wang, R. in “Data Quality in Context”, Comm. of the ACM, 1997, 40 (5): 103-110. With all of these elements, we intend to optimize the performance of the software development process by improving the project management process.",2011,0, 5604,The design of software process management and evaluation system in small and medium software enterprises,"Software process improvement is the necessary choice through which small software enterprises improve their software quality and productivity. Aiming at the practical circumstances of such enterprises, the key technologies of software process management and estimation were studied in this paper and a process control platform was built based on these technologies. This is beneficial for improving the overall level of software development and promoting the steady and healthy development of the software industry.",2011,0, 5605,Design of a highly adaptive auto-test software platform based on common data interface,"In military and industrial sectors, automatic test systems remarkably reduce the maintenance cost of certain equipment. As depicted in this paper, a highly adaptive automatic test software architecture is presented with multiple common data interfaces, which not only enables rapid prototyping of automatic test procedures, but also facilitates information sharing and data exchange in the whole lifecycle.",2011,0, 5606,Linking a physical arc model with a black box arc model and verification,"The arc behavior in the current zero region is critical in the case of very steep rising TRV, such as after clearing a short-line fault. Therefore, intensive and abundant short-line fault tests (L90) of a 245 kV SF6 circuit breaker were performed at the KEMA High Power Laboratory. For the purpose of a comparative analysis, three different sets of data were obtained during the tests: 1) High-resolution measurements of near current-zero arc current and voltage were carried out. The current zero measurement system (CZM) works as a standalone system in addition to the standard laboratory data acquisition system. The arc conductance shortly before current zero and the arc voltage extinction peak give a clear indication of the interrupting capability of the breaker under test. 2) From the measured traces of every individual test, arc parameters (3 time constants and 3 cooling-power constants) were extracted for the composite black box arc model, which has been developed by the KEMA High Power Laboratory and is based on more than 1000 high-resolution measurements during tests of commercial high-voltage circuit breakers. Its aim is to simulate the interruption phenomenon in SF6 gas, evaluate the performance of HV SF6 circuit breakers in testing and enable the prediction of the performance under conditions other than those tested.
3) After each test, using specially developed computer software, based on a simplified physical enthalpy flow arc model, the values of the arcing contact distance, gas mass flow through the nozzle throat and pressure inside the compression cylinder were calculated. The values of these characteristic quantities at the current zero are relevant indicators for successful interruption. In the comparative analysis, mathematical relations and statistical correlations between the evaluated parameters of the composite black box arc model and the characteristic output quantities are established and discussed. The link has been verified by MATLAB simulation of every individual test. This approach enables acceptable prediction of interruption success in a similar circuit and with a similar interrupter without SLF tests and CZM.",2011,0, 5607,An Evaluation of QoE in Cloud Gaming Based on Subjective Tests,"Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment that emulates this new type of service, define tests for users to assess the QoE, and derive Key Influence Factors (KFI) and influences of content and perception from our results.",2011,0, 5608,Continuous software reliability models,"For large-scale software systems with a huge amount of code, it may often be useful to regard the fault-counting processes observed in the testing phase as continuous-state stochastic processes. In this paper we introduce two types of continuous-state software reliability models: diffusion processes with time-dependent drift and non-stationary gamma wear processes, and compare them with the usual non-homogeneous Poisson processes. We expect that our models can provide better goodness-of-fit performance than the existing software reliability models in many cases and provide convenient assessment tools to estimate software reliability.",2011,0, 5609,Research on key techniques of simulation and detection for fire control system on the basis of database,"The fire control system is the core component of the x-type artillery weapon system. Because of the position distribution, work control process and signal transfer relationships of the units in the fire control system, traditional detection devices are limited when testing it. This paper describes how to simulate the work environment of the fire control system, on the basis of which the equipment units are detected. The management of sending and receiving serial port information is realized by building a datagram management model and a communication protocol management model adopting database techniques in the simulation system.
Given that the purpose is to detect the units, it is not strictly necessary to realize ballistic curve solving and manipulation & aiming solving according to the equipment principle. Also, database techniques are employed to build up a database of ballistic curve solving and manipulation & aiming solving, in line with which these functions are completed in the simulation system. Then the performance testing is realized by querying the database and comparing the solved results with the standard results stored in it.",2011,0, 5610,Arabic calligraphy recognition based on binarization methods and degraded images,"Optical Font Recognition is one of the main challenges at present. The available methods of optical font recognition deal with recent documents and font types. However, they are limited in dealing with historical and degraded documents. Moreover, they have neglected languages that do not belong to Asian or Latin scripts. For those types of documents, we propose a new framework of optical font recognition for Arabic calligraphy. We enhance a binarization method based on previous works. By doing so, we achieve better quality images at the preprocessing stage. Then we generate text blocks before passing them to the post-processing stages. Then, we extract the features based on edge direction matrices. In the classification stage, we apply a backpropagation neural network to identify the font type of the calligraphy. We observe that our proposed method achieves better performance in both preprocessing and post-processing.",2011,0, 5611,Suffix tree-based approach to detecting duplications in sequence diagrams,"Models are core artefacts in software development and maintenance. Consequently, the quality of models, especially maintainability and extensibility, becomes a big concern for most non-trivial applications. For various reasons, software models usually contain duplications. These duplications should be detected and removed because they may reduce the maintainability, extensibility and reusability of models. As an initial attempt to address the issue, the author proposes an approach in this study to detecting duplications in sequence diagrams. With special preprocessing, the author converts 2-dimensional (2-D) sequence diagrams into a 1-D array. Then the author constructs a suffix tree for the array. With the suffix tree, duplications are detected and reported. To ensure that every duplication detected with the suffix tree can be extracted as a separate reusable sequence diagram, the author revises the traditional construction algorithm of suffix trees by proposing a special algorithm to detect the longest common prefixes of suffixes. The author also probes approaches to removing duplications. The proposed approach has been implemented in DuplicationDetector. With the implementation, the author evaluated the proposed approach on six industrial applications. Evaluation results suggest that the approach is effective in detecting duplications in sequence diagrams. The main contribution of the study is an approach to detecting duplications in sequence diagrams, a prototype implementation and an initial evaluation.",2011,0, 5612,Transmission control policy design for decentralized detection in tree topology sensor networks,"A Wireless Sensor Network (WSN) deployed for detection applications has the distinguishing feature that sensors cooperate to perform the detection task.
Therefore, the decoupled design approach that is typically used to design communication networks, where each network layer is designed independently, does not lead to the desired optimal detection performance. Cross-layer design has been recently explored for the design of MAC protocols for parallel topology (single hop) networks, but little work has been done on the integration of communication and information fusion for tree networks. In this work, we design the optimal Transmission Control Policy (TCP) that coordinates the communication between sensor nodes connected in a tree configuration, in order to optimize the detection performance. We integrate the Quality of Information (QoI), Channel State Information (CSI), and Residual Energy Information (REI) for each sensor into the system model. We formulate a constrained nonlinear optimization problem to find the optimal TCP design variables. We solve the optimization problem using a hierarchical approach where smaller local optimization problems are solved by each parent node to find the optimal TCP design variables for its child nodes. We compare our design with the max throughput and decoupled design approaches.",2011,0, 5613,Table of contents,"The following topics are dealt with: hybrid semi-blind digital image watermarking technique; face recognition; database schemas; intelligent agent based job search system; retinal image segmentation; intelligent information retrieval lifecycle architecture; illumination-reflectance; unified power quality conditioner; universal scalability law; network performance evaluation; super high voltage transmission line electric field; signal processing methods; cost reduction; pavement pothole detection; SOCKx; application layer network switching framework; automated pavement distress inspection; transmission simulator development; keyword spotting experiment; multipath routing; environmental sensor network data packaging; predictive multichannel feedback active noise control; 2-channel adaptive speech enhancement; permission-based security model; dielectric electroactive polymer energy harvesting system forward path design; sEMG based real-time embedded force control strategy; tree-type wireless acquisition network; aligned aluminum-doped zinc oxide nanorod arrays; conditioning circuit development; thin film transistors fabrication; robust license plate localization; collaborative learning; frequency domain symbol synchronization; agent based TDMA schedule; malicious messages identification; customer survey data processing; trace element detection; fuzzy rule extraction; component packaging footprint; wireless network coding scheme; secure data transmission; model-based software-defined radio; clinical decision support system; low power 6-T CNFET SRAM cell; intelligent extended clustering genetic algorithm; low power process monitoring circuit; distributed intrusion detection scheme; time frequency analysis techniques; collaborative product design; capsule image segmentation; reconfigurable architectures; 9T SRAM design; turbo codec performance evaluation; architectural design tool; application commerce; foetal electrocardiogram extraction; route anomaly detection; yellowjacket wasp nest construction; free space laser communication; energy harvesting system; low power skin sensor; discrete cosine transform-based reconfigurable system design; energy aware adaptive clustering; automated electronic pen; and remote avian monitoring system.",2011,0, 5614,From village greens to volcanoes: Using wireless
networks to monitor air quality,"Summary form only given. Air quality mapping is a rapidly growing need as people become aware of the acute and long term risks of poor air quality. Both spatial and temporal mapping is required to understand, predict and improve air quality. The collision of several technologies has led to an opportunity to create affordable wireless networks to monitor air quality: short range radio on a chip; GPS and internet maps; GSM integrated chips; improved batteries and solar power; Python and other internet languages; and low power, low cost gas sensors with ppb resolution. Bringing these technologies together requires collaboration between electronics engineers, mathematicians, software programmers, atmospheric chemists and sensor technology providers. And add in local politicians because wireless networks need permission. We will discuss implementation and results of successful trials and look at what is now underway: Newcastle, Cambridge and London trials (MESSAGE); Cambridge UK real time air quality mapping (2010); Heathrow airport air quality network (2011); volcanic ash mapping; landfill site monitoring networks; indoor air quality studies; and rural air measurements. Each application has its own requirements of power, wireless protocol, air monitoring needs, data analysis and presentation of results.",2011,0, 5615,Quantification of Software Quality Parameters Using Fuzzy Multi Criteria Approach,"Software quality is the measure of appropriateness of the design of the software and how well it adheres to that design. There are some metrics and measurements to determine the software quality. Software quality measurement is possible only by quantifying the characteristics affecting the software quality. For measuring the quality, the parameters or quality factors are considered that vary over a domain of discourse. The quality factors stated in ISO/IEC 9126 model are used in this paper. Due to the unpredictable nature of these factors or attributes fuzzy approach has been used to estimate the software quality.",2011,0, 5616,Research on the definition and model of software testing quality,"Aim of software testing is to find out software defects and evaluate the software quality. In order to explain the software testing can really elevate the software quality, it is necessary to assess the quality of software testing itself. This paper discusses the definition of software testing quality, and further builds the framework model of software testing quality (STQFM) and the metrics model of software testing quality (SFTMM). The availability of the models is verified by example application.",2011,0, 5617,A dynamic software binary fault injection system for real-time embedded software,"Dynamic fault injection is an efficient technique for reliability evaluation. In this paper, a new fault injection system called DDSFIS (debug-based dynamic software fault injection system) is presented. DDSFIS is designed to be adaptable to real-time embedded systems. Locations of injections and faults are detected and injected automatically without recompiling. The tool is highly portable between different platforms since it relies on the GNU tool chains. The effectiveness and performance of DDSFIS is validated by experiments.",2011,0, 5618,Nonlinear Unsharp Masking for Mammogram Enhancement,"This paper introduces a new unsharp masking (UM) scheme, called nonlinear UM (NLUM), for mammogram enhancement. 
The NLUM offers users the flexibility 1) to embed different types of filters into the nonlinear filtering operator; 2) to choose different linear or nonlinear operations for the fusion processes that combines the enhanced filtered portion of the mammogram with the original mammogram; and 3) to allow the NLUM parameter selection to be performed manually or by using a quantitative enhancement measure to obtain the optimal enhancement parameters. We also introduce a new enhancement measure approach, called the second-derivative-like measure of enhancement, which is shown to have better performance than other measures in evaluating the visual quality of image enhancement. The comparison and evaluation of enhancement performance demonstrate that the NLUM can improve the disease diagnosis by enhancing the fine details in mammograms with no a priori knowledge of the image contents. The human-visual-system-based image decomposition is used for analysis and visualization of mammogram enhancement.",2011,0, 5619,Performance optimization of error detection based on speculative reconfiguration,"This paper presents an approach to minimize the average program execution time by optimizing the hardware/software implementation of error detection. We leverage the advantages of partial dynamic reconfiguration of FPGAs in order to speculatively place in hardware those error detection components that will provide the highest reduction of execution time. Our optimization algorithm uses frequency information from a counter-based execution profile of the program. Starting from a control flow graph representation, we build the interval structure and the control dependence graph, which we then use to guide our error detection optimization algorithm.",2011,0, 5620,Transmission Control Policy design for decentralized detection in sensor networks,"A Wireless Sensor Network (WSN) deployed for detection applications has the distinguishing feature that sensors cooperate to perform the detection task. Therefore, the decoupled design approach typically used to design communication networks, where each network layer is designed independently, does not lead to the desired optimal detection performance. Recent work on decentralized detection has addressed the design of MAC and routing protocols for detection applications by considering independently the Quality of Information (QoI), Channel State Information (CSI), and Residual Energy Information (REI) for each sensor. However, little attention has been given to integrate the three quality measures (QoI, CSI, REI) in the complete system design. In this work, we pursue a cross-layer approach to design a QoI, CSI, and REI-aware Transmission Control Policy (TCP) that coordinates communication between local sensors and the fusion center, in order to maximize the detection performance. We formulate and solve a constrained nonlinear optimization problem to find the optimal TCP design variables. 
We compare our design with the decoupled approach, where each layer is designed separately, in terms of the delay for detection and WSN lifetime.",2011,0, 5621,BS-SVM multi-classification model in the application of consumer goods,"The quality and safety of consumer products have drawn wide attention from scholars in related domains. Based on this subject and in accordance with the characteristics of the cases, this paper puts forward a hierarchical support vector machine classification algorithm based on the relative separability of the feature space, to address the low classification performance and high misclassification rate of existing algorithms. The weight of the binary search tree is the separability of the samples; the order of the categories is determined by a selected set of training samples used to construct the SVM classifiers, finally forming a multi-valued SVM classifier tree with larger classification intervals. Simulation results show that the method has a faster test speed and relatively good classification accuracy and generalization performance.",2011,0, 5622,Improving rating estimation in recommender using demographic data and expert opinions,"Memory-based collaborative filtering algorithms have been widely adopted in many popular recommender systems; however, the rating data are very sparse, which affects prediction accuracy greatly. To solve this problem, we use expert opinions to improve prediction accuracy. Firstly, we propose a novel similarity measure in order to highlight users' background. Then, combining users' ratings with expert opinions, the prediction achieves the right balance between expert professional opinions and similar users. Finally, since SVD-based collaborative filtering algorithms show good performance in suppressing noise, we use one to smooth the prediction. Experiments on MovieLens have shown that our proposed method clearly improves recommendation quality.",2011,0, 5623,Development modeling of software based on performance process,"In particular, an early integration of performance specifications in the SDP has been recognized during the last few years as an effective approach to improve the overall quality of software. Performance related problems are becoming more and more strategic in software development, especially recently with the advent of Web Services and related business-oriented composition techniques (software as a service, Web 2.0, orchestration, choreography, etc.). The goal of our work is the definition of a software development process that integrates performance evaluation and prediction. The software performance engineering development process (SPEDP) we specify is focused on performance, which plays a key role driving the software development process, thus implementing a performance/QoS-driven (software) development process. More specifically, in this paper our aim is to formally define the SPEDP design process, paying particular attention to the basis, i.e. the first step of SPEDP: the software/system architecture design, modeling and/or representation. We define both the diagrams to use and show how to model the structure of the software architecture, its behavior and performance requirements.
This is the first mandatory step for the automation of the SPEDP into a specific tool.",2011,0, 5624,Translation of application-level terms to resource-level attributes across the Cloud stack layers,"The emergence of new environments such as Cloud computing highlighted new challenges in traditional fields like performance estimation. Most of the current cloud environments follow the Software, Platform, Infrastructure service model in order to map discrete roles / providers according to the offering in each “layerâ€? However, the limited amount of information passed from one layer to the other has raised the level of difficulty in translating user-understandable application terms from the Software layer to resource specific attributes, which can be used to manage resources in the Platform and Infrastructure layers. In this paper, a generic black box approach, based on Artificial Neural Networks is used in order to perform the aforementioned translation. The efficiency of the approach is presented and validated through different application scenarios (namely FFMPEG encoding and real-time interactive e-Learning) that highlight its applicability even in cases where accurate performance estimation is critical, as in cloud environments aiming to facilitate real-time and interactivity.",2011,0, 5625,Improving quality in a distributed IP telephony system by the use of multiplexing techniques,"Nowadays, many enterprises use Voice over Internet Protocol (VoIP) in their telephony systems. Companies with offices in different countries or geographical areas can build a central managed telephony system sharing the lines of their gateways in order to increase admission probability and to save costs on international calls. So it is convenient to introduce a system to ensure a minimum QoS (Quality of Service) for conferences, and one of these solutions is CAC (Call Admission Control). In this work we study the improvements in terms of admission probability and conversation quality (R-factor) which can be obtained when RTP multiplexing techniques are introduced, as in this scenario there will be multiple conferences with the same origin and destination offices. Simulations have been carried out in order to compare simple RTP and TCRTP (Tunneling Multiplexed Compressed RTP). The results show that these parameters can be improved while maintaining an acceptable quality for the conferences if multiplexing is used.",2011,0, 5626,Influence of traffic management solutions on Quality of Experience for prevailing overlay applications,"Different sorts of peer-to-peer (P2P) applications emerge every day and they are becoming more and more popular. The performance of such applications may be measured by means of Quality of Experience (QoE) metrics. In this paper, the factors that influence these metrics are surveyed. Moreover, the impact of economic traffic management solutions (e.g., proposed by IETF) on perceived QoE for the dominant overlay applications is assessed. The possible regulatory issues regarding QoE for P2P applications are also mentioned.",2011,0, 5627,Evaluating the efficiency of data-flow software-based techniques to detect SEEs in microprocessors,"There is a large set of software-based techniques that can be used to detect transient faults. This paper presents a detailed analysis of the efficiency of dataflow software-based techniques to detect SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. 
SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and of combinations of them is tested. Experimental results are used to analyze the overhead of data-flow techniques, allowing us to compare these techniques with respect to time, resources and efficiency in detecting this type of fault. This analysis allows us to implement an efficient fault tolerance method that combines the presented techniques in such a way as to minimize memory area and performance overhead. The conclusions can guide designers in developing more efficient techniques to detect these types of faults.",2011,0, 5628,Using an FPGA-based fault injection technique to evaluate software robustness under SEEs: A case study,"The robustness of microprocessor-based systems under Single Event Effects is a very current concern. A widely adopted solution to make a microprocessor-based system robust consists in modifying the software application by adding redundancy and fault detection capabilities. The efficiency of the selected software-based solution must be assessed. This evaluation process allows the designers to choose the most suitable robustness technique and check if the hardened system achieves the expected dependability levels. Several approaches with this purpose can be found in the literature, but their efficiency is limited in terms of the number of faults that can be injected, as well as the level of accuracy of the fault injection process. In this paper, we propose FPGA-based fault injection techniques to evaluate software robustness methods under Single Event Upset (SEU) as well as Single Event Transient (SET). Experimental results illustrate the benefits of using the proposed fault injection method, which is able to evaluate a large number of faults of both types of events.",2011,0, 5629,On-line BIST for performance failure prediction under aging effects in automotive safety-critical applications,"Electronic design of high-performance digital systems in nano-scale CMOS technologies under Process, power supply Voltage, Temperature and Aging (PVTA) variations is a challenging process. Such variations induce abnormal timing delays leading to system errors, harmful in safety-critical applications. Performance Failure Prediction (PFP), instead of error detection, becomes necessary, particularly in the presence of aging effects. In this paper, an on-line BIST methodology for PFP in a standard cell design flow is proposed. The methodology is based on abnormal delay detection associated with critical signal paths. PVTA-aware aging sensors are designed. Multilevel simulation is used. Functional and structural test pattern generation is performed, targeting the detection of critical path delay faults. A sensor insertion technique is proposed, together with an upgraded version of a proprietary software tool, DyDA. Finally, a novel strategy for gate-level Aging fault injection is proposed, using the concept of an Aging de-rating factor. Results are presented for a Serial Parallel Interface (SPI) controller, designed with commercial UMC 130nm CMOS technology and the Faraday™ cell library. Only seven sensors are required to monitor unsafe performance operation due to Negative Bias Thermal Instability (NBTI)-induced aging.",2011,0, 5630,Improved generalized predictive control in polymerization process,"The polymerization process is the basic stage of PAN-based carbon fiber production, and its temperature control directly affects the quality and yield of the final products.
However, the polymerization process releases a lot of heat rapidly and it has a serious time-delay characteristic. These factors make it very complex to control the polymerization process. The paper firstly analyzes the polymerization process model, gives its CARIMA parametric equation, provides the original data excited by the generalized binary noise (GBN) signal, and identifies the model using the recursive least squares algorithm with fading memory. Secondly, the paper introduces an improved generalized predictive control (GPC) method, which has stronger fault tolerance and robustness with little process overshoot. Finally, a system is realized in a cascade framework based on the control layer and the monitoring layer. The former realizes the model identification and the recursive computation in the improved generalized predictive control and sends the results to the latter, which realizes PID control with a dead zone for the mixed water temperature using the results of the main regulator. Practice shows that the polymerization temperature cascade system runs well and achieves effective control.",2011,0, 5631,Reasonable coal pillar size design in Xiaoming mine,"The pillar size has a great influence not only on the stability of the surrounding rock, but also on the recovery rate of coal resources, which directly affects the economic benefits of coal mining. According to the geological conditions of the Xiaoming mine of the Tiefa coal mining group and the use of the coal pillar, field measurement, numerical simulation and theoretical calculation methods are adopted to determine the small coal pillar width and a reasonable coal pillar size. Rock stratum detection is used to find fissures, faults and broken areas; the coal pillar stress and displacement distributions for different pillar widths (3, 5, 8, 10, 15 and 20 m) are analyzed with the FLAC2D numerical simulation software, and a reasonable size is finally determined. It provides a guarantee of security for the exploitation of the mine.",2011,0, 5632,Design and realization of lake water quality detection system based on wireless sensor networks,"With the increasing pollution of the lake water environment and frequent outbreaks of blue-green algae, it is very important to establish a distributed real-time monitoring network covering all typical locations of the lake. In this way, the pollution of the lake water environment can be effectively controlled. Nowadays, the stability of data reporting and how to maximize the use of limited power are the biggest problems in the design of real-time monitoring networks. In this paper, according to the characteristics of lake monitoring, the design of a lake water quality detection system based on wireless sensor networks is presented. Key technologies and issues are discussed in the paper, covering lake monitoring sensor network topology design, network protocol design and node hardware and software design. The application of the lake water quality detection system in Taihu Lake is also discussed.",2011,0, 5633,Performance Evaluation of FACS-MP CAC System Priority Algorithm for Wireless Cellular Networks,"Call Admission Control (CAC) is one of the resource management functions, which regulates network access to ensure QoS provisioning. However, the decision for CAC is a very challenging issue due to user mobility, limited radio spectrum, and multimedia traffic characteristics. In our previous work, we implemented a fuzzy based CAC system called FACS-MP, which considers many priorities.
In this paper, we evaluate by simulations the performance of the priority algorithm which is part of the FACS-MP system. From the simulation results, we conclude that FACS-MP makes a good differentiation of different services based on different priorities.",2011,0, 5634,An Automatic Fundus Image Analysis System for Clinical Diagnosis of Glaucoma,"Glaucoma is a serious ocular disease and leads to blindness if it is not detected and treated properly. The diagnostic criteria for glaucoma include intraocular pressure measurement, optic nerve head evaluation, retinal nerve fiber layer and visual field defect. The observation of the optic nerve head, cup to disc ratio and neural rim configuration is important for early detection of glaucoma in clinical practice. However, the broad range of cup to disc ratios makes it difficult to identify early changes of the optic nerve head, and different ethnic groups possess various features in optic nerve head structures. Hence, it is still important to develop various detection techniques to assist clinicians to diagnose glaucoma at early stages. In this study, we developed an automatic detection system which contains two major phases: the first phase performs a series of digital fundus retinal image analysis modules including vessel detection, vessel inpainting, cup to disc ratio calculation, and neuro-retinal rim evaluation for the ISNT rule; the second phase determines the abnormal status of retinal blood vessels from different aspects. In this study, the novel idea of integrating image inpainting and active contour model techniques successfully assisted the correct identification of cup and disk regions. Several clinical fundus retinal images containing normal and glaucoma images were applied to the proposed system for demonstration.",2011,0, 5635,Mining unstructured log files for recurrent fault diagnosis,"Enterprise software systems are large and complex with limited support for automated root-cause analysis. Avoiding system downtime and loss of revenue dictates a fast and efficient root-cause analysis process. Operator practice and academic research have shown that about 80% of failures in such systems have recurrent causes; therefore, significant efficiency gains can be achieved by automating their identification. In this paper, we present a novel approach to modelling features of log files. This model offers a compact representation of log data that can be efficiently extracted from large amounts of monitoring data. We also use decision-tree classifiers to learn and classify symptoms of recurrent faults. This representation enables automated fault matching and, in addition, enables human investigators to understand manifestations of failure easily. Our model does not require any access to application source code, a specification of log messages, or deep application knowledge. We evaluate our proposal using fault-injection experiments against other proposals in the field. First, we show that the features needed for symptom definition can be extracted more efficiently than in related work. Second, we show that these features enable an accurate classification of recurrent faults using only standard machine learning techniques. This enables us to identify accurately up to 78% of the faults in our evaluation data set.",2011,0, 5636,Evaluation of Experiences from Applying the PREDIQT Method in an Industrial Case Study,"We have developed a method called PREDIQT for model-based prediction of impacts of architectural design changes on system quality.
A recent case study indicated feasibility of the PREDIQT method when applied on a real-life industrial system. This paper reports on the experiences from applying the PREDIQT method in a second and more recent case study - on an industrial ICT system from another domain and with a number of different system characteristics, compared with the previous case study. The analysis is performed in a fully realistic setting. The system analyzed is a critical and complex expert system used for management and support of numerous working processes. The system is subject to frequent changes of varying type and extent. The objective of the case study has been to perform an additional and more structured evaluation of the PREDIQT method and assess its performance with respect to a set of success criteria. The evaluation argues for feasibility and usefulness of the PREDIQT-based analysis. Moreover, the study has provided useful insights into the weaknesses of the method and suggested directions for future research and improvements.",2011,0, 5637,Visual Recognition Using Density Adaptive Clustering,"Visual codebook based texture analysis and image recognition is popular for its robustness to affine transformation and illumination variation. It is based on the affine invariable descriptors of local patches extracted by region detector, and then represents the image by histogram of the codebook constructed by the feature vector quantization. The most commonly used vector quantization method is k-means. But due to the limitations of predefined number of clusters and local minimum update rule, we show that k-means would fail to code the most discriminable descriptors. Another defect of k-means is that the computational complexity is extremely high. In this paper, we proposed a nonparametric vector quantization method based on mean shift, and use locality-sensitive hashing (LSH) to reduce the cost of the nearest neighborhood query in the mean-shift iterations. The performance of proposed method is demonstrated in several image classification tasks. We also show that the Information Gain or Mutual Information based feature selection based on our codebook further improves the performance.",2011,0, 5638,Statistical approach for finding sensitive positions for condition based monitoring of reciprocating air compressors,"Reciprocating air compressors are one of the most popular and widely used machines in industry today. Timely detection of fault occurring in these machines becomes very critical since it influences the plant performance by virtue of system reliability, operating efficiency and maintenance cost. Monitoring of faults by identifying sensitive positions through sensory output forms vital part of everyday manufacturing. Health monitoring of a reciprocating air compressor using PC based data acquisition system and timely identification of potential faults can prevent failures of the entire system. Various transducers, data acquisition DAQ hardware and relevant software forms the basic components of the health monitoring system. Prior to acquiring the data for health monitoring and consequently the fault diagnosis, it is crucial to locate the sensitive positions on the machine. 
This paper proposes a scheme to determine the sensitive positions on a machine in healthy condition, based on computation of statistical parameters such as Peak value, Standard Deviation, RMS value, Variance and Cross Correlation.",2011,0, 5639,Software energy estimation based on statistical characterization of intermediate compilation code,"Early estimation of embedded software power consumption is a critical issue that can determine the quality and, sometimes, the feasibility of a system. Architecture-specific, cycle-accurate simulators are valuable tools for fine-tuning performance of critical sections of the application but are often too slow for the simulation of entire systems. This paper proposes a fast and statistically accurate methodology to evaluate the energy performance of embedded software and describes the associated toolchain. The methodology is based on a static characterization of the target instruction set to allow estimation on an equivalent, target-independent intermediate code representation.",2011,0, 5640,RVC-based time-predictable faulty caches for safety-critical systems,"Technology and Vcc scaling lead to significant faulty bit rates in caches. Mechanisms based on disabling faulty parts show to be effective for average performance but are unacceptable in safety critical systems where worst-case execution time (WCET) estimations must be safe and tight. The Reliable Victim Cache (RVC) deals with this issue for a large fraction of the cache bits. However, replacement bits are not protected, thus keeping the probability of failure still high. This paper proposes two mechanisms to tolerate faulty bits in replacement bits and keep time-predictability by extending the RVC. Our solutions offer different tradeoffs between cost and complexity. In particular, the Extended RVC (ERVC) has low energy and area overheads while keeping complexity at a minimum. The Reliable Replacement Bits (RRB) solution has even lower overheads at the expense of some more wiring complexity.",2011,0, 5641,Software-based control flow error detection and correction using branch triplication,"Ever Increasing use of commercial off-the-shelf (COTS) processors to reduce cost and time to market in embedded systems has brought significant challenges in error detection and recovery methods employing in such systems. This paper presents a software based control flow error detection and correction technique, so called branch TMR (BTMR), suitable for use in COTS-based embedded systems. In BTMR method, each branch instruction is triplicated and a software interrupt routine is invoked to check the correctness of the branch instruction. During the execution of a program, when a branch instruction is executed, it is compared with the second redundant branch in the interrupt routine. If a mismatch is detected, the third redundant branch instruction is considered as the error-free branch instruction; otherwise, no error has occurred. The main advantage of BTMR over previously proposed control flow checking (CFC) methods is its ability to correct CFEs as well as protection of indirect branch instructions. The BTMR method is evaluated on LEON2 embedded processor. The results show that, error correction coverage is about 96%, while memory overhead and performance overhead of BTMR is about 28% and 10%, respectively.",2011,0, 5642,Application research on temperature WSN nodes in switchgear assemblies based on TinyOS and ZigBee,"Temperature detection can timely discover the potential faults in switchgear assemblies. 
Appling a Wireless Sensor Networks(WSN) in on-line monitoring system is an effective measure to realize Condition Based Maintenance(CBM) for intelligent switchgear assemblies. A design solution of a temperature wireless sensor network node with CC2430 chip based on ZigBee is presented. The thermocouple is used as main sensor unit to detect the temperature-rise of key points in switchgear assemblies in real time, and the digital temperature sensor is adopted as subsidiary sensor unit to detect the environment temperature, so that the temperature of the points at high voltage and high current can be measured effectively and accurately. TinyOS, the embedded operating system adopted widely for WSN node, is researched and transplanted into CC2430, therefore the node software can be programmed and updated remotely. On this basis, the temperature wireless sensor prototypes based on ZigBee and/or TinyOS with CC2430 are designed and developed, and tested via the WSN experimental platform which is built by us. The experimental result shows that the temperature wireless sensor node can meet the requirement of the system function, and has the features such as low power dissipation, small volume, stable performance and long lifespan etc., and can be broadly applied in on-line temperature measuring and monitoring of high-voltage and low-voltage switchgear assemblies.",2011,0, 5643,Towards improved survivability in safety-critical systems,"Performance demand of Critical Real-Time Embedded (CRTE) systems implementing safety-related system features grows at an exponential rate. Only modern semiconductor technologies can satisfy CRTE systems performance needs efficiently. However, those technologies lead to high failure rates, thus lowering survivability of chips to unacceptable levels for CRTE systems. This paper presents SESACS architecture (Surviving Errors in SAfety-Critical Systems), a paradigm shift in the design of CRTE systems. SESACS is a new system design methodology consisting of three main components: (i) a multicore hardware/firmware platform capable of detecting and diagnosing hardware faults of any type with minimal impact on the worst-case execution time (WCET), recovering quickly from errors, and properly reconfiguring the system so that the resulting system exhibits a predictable and analyzable degradation in WCET; (ii) a set of analysis methods and tools to prove the timing correctness of the reconfigured system; and (iii) a white-box methodology and tools to prove the functional safety of the system and compliance with industry standards. This new design paradigm will deliver huge benefits to the embedded systems industry for several decades by enabling the use of more cost-effective multicore hardware platforms built on top of modern semiconductor technologies, thereby enabling higher performance, and reducing weight and power dissipation. This new paradigm will further extend the life of embedded systems, therefore, reducing warranty and early replacement costs.",2011,0, 5644,Optimal placement of power quality monitors in distribution systems using the topological monitor reach area,"Installation of power quality monitors (PQMs) at all buses in a power distribution network is uneconomical and it needs to be minimized. This paper presents a PQM positioning technique which finds the optimal number and location of PQMs in power distribution systems for voltage sag detection. 
The IEEE 34-node and 69 bus test systems were modeled in DIgSILENT software to obtain the topological monitor reach area matrix for various types of faults. Then, an optimization problem is formulated and solved using genetic algorithm to find the minimum number of PQMs in the distribution system which can guarantee the observability of the whole system. Finally, the sag severity index has been used to find the best location to install monitors in the distribution system.",2011,0, 5645,Integrating the S-PQDA software tool in the utility power quality management system,"This paper presents a smart power quality data a analyzer (S-PQDA) or power quality diagnosis software (PQDS) tool that performs power quality (PQ) diagnosis on the PQ disturbance data recorded by an online PQ monitoring system. The software tool enables power utility engineers to perform automatic PQ disturbance detection, classification and diagnosis of the disturbances. The PQDS also assists the power utility engineers in identifying the existence of incipient faults due to partial discharges in the cable compartment. The overall accuracy of the software in performing PQ diagnosis is 96.4%.",2011,0, 5646,Software Testing and Verification in Climate Model Development,"Over the past 30 years, most climate models have grown from relatively simple representations of a few atmospheric processes to complex multidisciplinary systems. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Verification processes for model implementations rely almost exclusively on some combination of detailed analyses of output from full climate simulations and system-level regression tests. Besides being costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects they can detect, isolate, and diagnose. Mitigating these weaknesses of coarse-grained testing with finer-grained unit tests has been perceived as cumbersome and counterproductive. Recent advances in commercial software tools and methodologies have led to a renaissance of systematic fine-grained testing. This opens new possibilities for testing climate-modeling-software methodologies.",2011,0, 5647,A dynamic workflow simulation platform,"In numeric optimization algorithms errors at application level considerably affect the performance of their execution on distributed infrastructures. Hours of execution can be lost only due to bad parameter configurations. Though current grid workflow systems have facilitated the deployment of complex scientific applications on distributed environments, the error handling mechanisms remain mostly those provided by the middleware. In this paper, we propose a collaborative platform for the execution of scientific experiments in which we integrate a new approach for treating application errors, using the dynamicity and exception handling mechanisms of the YAWL workflow management system. Thus, application errors are correctly detected and appropriate handling procedures are triggered in order to save as much as possible of the work already executed.",2011,0, 5648,A dependable system based on adaptive monitoring and replication,"A multi agent system (MAS) has recently gained public attention as a method to solve competition and cooperation in distributed systems. 
However, the vulnerability of a MAS to the propagation of failures prevents it from being applied to large-scale systems. This paper proposes a method to improve the reliability and efficiency of distributed systems. Specifically, the paper deals with the issue of fault tolerance. Distributed systems are characterized by a large number of agents who interact according to complex patterns. The effects of a localized failure may spread across the whole network, depending on the structure of the interdependencies between agents. The method monitors messages between agents to detect undesirable behaviors such as failures. By collecting this information, the method generates global information about the interdependencies between agents and expresses it in a graph. This interdependence graph enables us to detect or predict undesirable behaviors. This paper also shows that the method can optimize the performance of a MAS and adaptively improve its reliability in complicated and dynamic environments by applying the global information acquired from the interdependence graph to a replication system. The advantages of the proposed method are illustrated through simulation experiments based on a virtual auction market.",2011,0, 5649,Software design of rail transit train braking system fault diagnosis based on MATLAB/VB,"As the safety and reliability requirements on the brake systems of urban transit trains grow, the performance of the prediction and fault diagnosis systems for these brakes must be improved at the same time. MATLAB is a high-performance mathematics software package integrating data analysis, numeric processing and graphic display; combining it with the powerful graphical user interface capabilities of VB makes the development of a fault diagnosis system not only faster but also better performing. This paper focuses on the software design of brake system fault diagnosis for urban rail transit trains and attempts some theoretical research toward timely and effective avoidance of traffic accidents caused by brake system failures during train operation.",2011,0, 5650,Model-Driven Design of Performance Requirements,"Obtaining the expected performance of a workflow is much simpler if the requirements for each of its tasks are well defined. However, most of the time, not all tasks have well-defined requirements, and these must be derived by hand. This can be an error-prone and time consuming process for complex workflows. In this work, we present an algorithm which can derive a time limit for each task in a workflow, using the available task and workflow expectations. The algorithm assigns the minimum time required by each task and distributes the slack according to the weights set by the user, while checking that the task and workflow expectations are consistent with each other. The algorithm avoids having to evaluate every path in the workflow by building its results incrementally over each edge. We have implemented the algorithm in a model handling language and tested it against a naive exhaustive algorithm which evaluates all paths. Our incremental algorithm reports equivalent results in much less time than the exhaustive algorithm.",2011,0, 5651,Improving the Modifiability of the Architecture of Business Applications,"In the current rapidly changing business environment, organizations must keep on changing their business applications to maintain their competitive edges. Therefore, the modifiability of a business application is critical to the success of organizations.
Software architecture plays an important role in ensuring a desired modifiability of business applications. However, few approaches exist to automatically assess and improve the modifiability of software architectures. Generally speaking, existing approaches rely on software architects to design software architecture based on their experience and knowledge. In this paper, we build on our prior work on automatic generation of software architectures from business processes and propose a collection of model transformation rules to automatically improve the modifiability of software architectures. We extend a set of existing product metrics to assess the modifiability impact of the proposed model transformation rules and guide the quality improvement process. Eventually, we can generate software architecture with desired modifiability from business processes. We conduct a case study to illustrate the effectiveness of our transformation rules.",2011,0, 5652,A Hierarchical Security Assessment Model for Object-Oriented Programs,"We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writ ability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java byte code.",2011,0, 5653,A Reliability Model for Complex Systems,"A model of software complexity and reliability is developed. It uses an evolutionary process to transition from one software system to the next, while complexity metrics are used to predict the reliability for each system. Our approach is experimental, using data pertinent to the NASA satellite systems application environment. We do not use sophisticated mathematical models that may have little relevance for the application environment. Rather, we tailor our approach to the software characteristics of the software to yield important defect-related predictors of quality. Systems are tested until the software passes defect presence criteria and is released. Testing criteria are based on defect count, defect density, and testing efficiency predictions exceeding specified thresholds. In addition, another type of testing efficiency-a directed graph representing the complexity of the software and defects embedded in the code-is used to evaluate the efficiency of defect detection in NASA satellite system software. 
Complexity metrics were found to be good predictors of defects and testing efficiency in this evolutionary process.",2011,0, 5654,A Methodology of Model-Based Testing for AADL Flow Latency in CPS,"AADL (Architecture Analysis and Design Language) is a kind of model-based real-time CPS (Cyber-Physical System) modeling language, which has been widely used in avionics and space areas. The current challenges have been raised up on how to test CPS model described in AADL dynamically and find design fault at the design phase to iterate and refine the model architecture. This paper mainly tests the flow latency in design model based on PDA (Push-Down Automata). It abstracts the properties of flow latency in CPS model, and translates them into PDA in order to assess the latency in simulation. Meanwhile, this paper presents a case study of pilotless aircraft cruise control system to prove the feasibility of dynamic model-based testing on model performances and achieve the architecture iteration and refining aim.",2011,0, 5655,Automatic Regression Test Selection Based on Activity Diagrams,"Regression testing is the most common method for ensuring the quality of changed software. As UML models are widely used as design blueprints, model-based test case generation techniques can be used in regression test cases generation. However, test cases obtained from these techniques are usually represented as sequences of actions in abstract models, and heavy human efforts are needed to transform them into test cases accepted by software for execution. To reduce this effort, we present an approach to automatically generate executable test cases for regression testing based on activity diagrams. It combines a technique of activity diagrams based regression test cases classification to pick up retestable test cases, and a technique of feedback-directed test cases generation to generate test cases for new behaviors in changed software. We implement the tool MDRTGen to show the performance of the approach. The experiments show the good performance of the approach in improving the efficiency of test cases generation and decreasing the cost in regression testing.",2011,0, 5656,Towards Synthesizing Realistic Workload Traces for Studying the Hadoop Ecosystem,"Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. 
Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.",2011,0, 5657,An Energy-Aware SaaS Stack,"We present a multi-agent system on top of the IaaS layer consisting of a scheduler agent and multiple worker agents. Each job is controlled by an autonomous worker agent, which is equipped with application specific knowledge (e.g., performance functions) allowing it to estimate the type and number of necessary resources. During runtime, the worker agent monitors the job and adapts its resources to ensure the specified quality of service - even in noisy clouds where the job instances are influenced by other jobs. All worker agents interact with the scheduler agent, which takes care of limited resources and does a cost-aware scheduling by assigning jobs to times with low energy costs. The whole architecture is self-optimizing and able to use public or private clouds.",2011,0, 5658,Power-Aware Resource Scheduling in Base Stations,"Base band stations for Long Term Evolution (LTE) communication processing tend to rely on over-provisioned resources to ensure that peak demands can be met. These systems must meet user Quality of Service expectations, but during non-peak workloads, for instance, many of the cores could be placed in low-power modes. One key property of such application-specific systems is that they execute frequent, short-lived tasks. Sophisticated resource management and task scheduling approaches suffer intolerable overhead costs in terms of time and expense, and thus lighter-weight and more efficient strategies are essential to both saving power and meeting performance expectations. To this end, we develop a flexible, non-propietary LTE workload model to drive our resource management studies. Here we describe our experimental infrastructure and present early results that underscore the promise of our approach along with its implications on future hardware/software codesign.",2011,0, 5659,"""You Want to do What?"" Breaking the Rules to Increase Customer Satisfaction",Customer satisfaction. Every customer support organization strives to increase and maintain a higher level of customer satisfaction. This is a story of how one software organization attempted to increase customer satisfaction by increasing predictability and response times to reported defects. Along the way they were able to establish trust with the customer support organization by implementing a process that seemed counterintuitive to its stakeholders.,2011,0, 5660,Reverse Seam Carving,"Seam carving is an effective operator supporting content-aware resizing for both image reduction and expansion. However, repeated seam removing and inserting processes lead to excessively distortion image when imposed on seam insertion then removal operations or the other way around. With considering the relationship between seam removing and inserting processes, we present an ameliorated energy function to minimize aliasing. ""Forward Energy"" is an effective improvement only to image reduction. Moreover, we propose a novel ""Visual Points"" structure which distinguish the ""Forward Energy"" of seam insertion from that of seam removal, and improve seam insertion operations greatly. 
Qualitative and quantitative experiments show that the proposed method can achieve high quality as compared to existed methods.",2011,0, 5661,A pattern-oriented methodology for conceptual modeling evaluation and improvement,"Conceptual models are of prime importance to ensure a high level of quality in designing information systems. It has been witnessed that the majority of information systems (IS) change requests result due to deficient functionalities in the information systems. Therefore, a good analysis and design method should guarantee that conceptual models are correct and complete and easy to understand, as they are the communicating mediator between the users and the development team. Similarly, if models are complex then their extension or the incorporation of missing requirements gets very difficult for the designers. Our approach evaluates the conceptual models on multiple levels of granularity in addition to providing the corrective actions or transformations for improvement. We propose quality patterns to help the non-expert users in evaluating their models with respect to their quality goal. This paper also illustrates our approach by describing an evaluation and improvement process using a case study.",2011,0, 5662,Flexible Distributed Capacity Allocation and Load Redirect Algorithms for Cloud Systems,"In Cloud computing systems, resource management is one of the main issues. Indeed, in any time instant resources have to be allocated to handle effectively workload fluctuations, while providing Quality of Service (QoS) guarantees to the end users. In such systems, workload prediction-based autonomic computing techniques have been developed. In this paper we propose capacity allocation techniques able to coordinate multiple distributed resource controllers working in geographically distributed cloud sites. Furthermore, capacity allocation solutions are integrated with a load redirection mechanism which forwards incoming requests between different domains. The overall goal is to minimize the costs of the allocated virtual machine instances, while guaranteeing QoS constraints expressed as a threshold on the average response time. We compare multiple heuristics which integrate workload prediction and distributed non-linear optimization techniques. Experimental results show how our solutions significantly improve other heuristics proposed in the literature (5-35% on average), without introducing significant QoS violations.",2011,0, 5663,Database-Agnostic Transaction Support for Cloud Infrastructures,"In this paper, we present and empirically evaluate the performance of database-agnostic transaction (DAT) support for the cloud. Our design and implementation of DAT is scalable, fault-tolerant, and requires only that the data store provide atomic, row-level access. Our approach enables applications to employ a single transactional data store API that can be used with a wide range of cloud data store technologies. We implement DAT in AppScale, an open-source implementation of the Google App Engine cloud platform, and use it to evaluate DAT's performance and the performance of a number of popular key-value stores.",2011,0, 5664,SkelCL - A Portable Skeleton Library for High-Level GPU Programming,"While CUDA and OpenCL made general-purpose programming for Graphics Processing Units (GPU) popular, using these programming approaches remains complex and error-prone because they lack high-level abstractions. 
Especially challenging systems with multiple GPUs are not addressed at all by these low-level programming models. We propose SkelCL - a library providing so-called algorithmic skeletons that capture recurring patterns of parallel computation and communication, together with an abstract vector data type and constructs for specifying data distribution. We demonstrate that SkelCL greatly simplifies programming GPU systems. We report the competitive performance results of SkelCL using both a simple Mandelbrot set computation and an industrial-strength medical imaging application. Because the library is implemented using OpenCL, it is portable across GPU hardware of different vendors.",2011,0, 5665,Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing,"Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores, and this situation will only become more dire as we reach exascale computing. Exacerbating this situation, some of these faults will not be detected, manifesting themselves as silent errors that will corrupt memory while applications continue to operate but report incorrect results. This paper introduces RedMPI, an MPI library residing in the profiling layer of any standards-compliant MPI implementation. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring code changes to application source code. By providing redundancy, RedMPI is capable of transparently detecting corrupt messages from MPI processes that become faulted during execution. Furthermore, with triple redundancy RedMPI ""votes'' out MPI messages of a faulted process by replacing corrupted results with corrected results from unfaulted processes. We present an evaluation of RedMPI on an assortment of applications to demonstrate the effectiveness and assess associated overheads. Fault injection experiments establish that RedMPI is not only capable of successfully detecting injected faults, but can also correct these faults while carrying a corrupted application to successful completion without propagating invalid data.",2011,0, 5666,Analyzing Fault-Impact Region of Composite Service for Supporting Fault Handling Process,"A fault situation that occurs in a service needs to be well analyzed and handled in order to ensure the reliability of the composite service. The analysis can be driven by understanding the impact caused by the faulty service on the other services as well as the entire composition. Existing works have given less attention to this issue, in particular, the temporal impact caused by the fault. Thus, we propose an approach to analyzing the temporal impact and generating the impact region. The region can be utilized by the handling mechanism to prioritize the services to be repaired. The approach begins by estimating the updated temporal behavior of the composite service after the fault situation occurs, followed by identifying the potential candidates of the impact region. The concept of temporal negative impact is introduced to support the identification activity. Intuitively, the approach can assist in reducing the number of service changes in handling the fault situation.",2011,0, 5667,Reputation-Driven Web Service Selection Based on Collaboration Network,"Most trustworthy web service selection approaches simply focus on individual reputation and ignore the collaboration reputation between services.
To enhance collaboration trust during web service selection, a reputation model called collaboration reputation is proposed. The reputation model is built on a web service collaboration network (WSCN), which is constructed from the composite service execution log. The WSCN thus aims to maintain a trustworthy collaboration alliance among web services. In the WSCN, the collaboration reputation can be assessed by two metrics: one, called invoking reputation, is computed from recommendations selected from the community structure hidden in the WSCN; the other is assessed by the invoked web service. In addition, a web service selection scheme based on the WSCN is designed.",2011,0, 5668,Measuring robustness of Feature Selection techniques on software engineering datasets,"Feature Selection is a process which identifies irrelevant and redundant features from a high-dimensional dataset (that is, a dataset with many features), and removes these before further analysis is performed. Recently, the robustness (e.g., stability) of feature selection techniques has been studied, to examine the sensitivity of these techniques to changes in their input data. In this study, we investigate the robustness of six commonly used feature selection techniques as the magnitude of change to the datasets and the size of the selected feature subsets are varied. All experiments were conducted on 16 datasets from three real-world software projects. The experimental results demonstrate that Gain Ratio shows the least stability on average while two different versions of ReliefF show the most stability. Results also show that making smaller changes to the datasets has less impact on the stability of feature ranking techniques applied to those datasets.",2011,0, 5669,An image process way of globular transfer in MIG welding,"This paper presents an image processing method for feature extraction in droplet transfer. The materials adopted are a welding wire of 1.2 mm diameter and sheet steel of 3.0 mm thickness. In order to obtain high-quality images of the dynamic droplet, a special filtering lens for image filtering with a CCD was designed, and an image processing algorithm was then developed. On the MATLAB software platform, the algorithm compares median filtering, image sharpening, morphological operations, threshold segmentation, edge detection and so on. Because the original images have a magnification scale coefficient, this paper calculates the coefficient according to characteristic parameters such as the wire diameter. Results show that the image processing method can process the acquired droplet images and extract their critical features effectively.",2011,0, 5670,Research on fault pattern recognition for aircraft fuel system with its performance simulation,"This paper presents application research on fault pattern recognition for an aircraft fuel system based on its performance simulation. Fault pattern recognition methods are used to perform fault detection, fault isolation, fault prognostics and so on in complicated systems, which are basic research contents of prognostics and health management (PHM). Since the hardware of an aircraft fuel system is too expensive for the laboratory, it is better to build simulation software to research fault pattern recognition methods for the aircraft fuel system.
The fault pattern recognition simulation software is able to carry out performance simulation of the aircraft fuel flow parameters, such as pressure variation and flow rate in the fuel system pipe network, under both the normal condition of the fuel system and its malfunction conditions. When a fault pattern of a component such as a radial pump or ball valve occurs, the nodes in the fuel system pipe network in which the maximum pressure variation has occurred are selected, and virtual sensors are placed at these nodes. The fault pattern is then recognized through signal correlation comparison of the virtual pressure sensors under the normal and malfunction conditions. Since fault pattern recognition methods are at a very early stage, the false alarm rate is not low. Nevertheless, the simulation software is able to verify and study new-concept fault pattern recognition methods, which will supply technical support for onboard aircraft PHM systems in the future.",2011,0, 5671,Automatic Library Generation for BLAS3 on GPUs,"High-performance libraries, the performance-critical building blocks for high-level applications, will assume greater importance on modern processors as they become more complex and diverse. However, automatic library generators are still immature, forcing library developers to manually tune libraries to meet their performance objectives. We are developing a new script-controlled compilation framework to help domain experts reduce much of the tedious and error-prone nature of manual tuning, by enabling them to leverage their expertise and reuse past optimization experiences. We focus on demonstrating improved performance and productivity obtained through using our framework to tune BLAS3 routines on three GPU platforms: up to 5.4x speedups over CUBLAS achieved on NVIDIA GeForce 9800, 2.8x on GTX285, and 3.4x on Fermi Tesla C2050. Our results highlight the potential benefits of exploiting domain expertise and the relations between different routines (in terms of their algorithms and data structures).",2011,0, 5672,Mesh-based overlay enhancing live video quality in pull-based P2P systems,"Nowadays, Peer-to-Peer (P2P) live streaming applications have attracted great interest. Despite the fact that numerous systems have been proposed in the past, there are still problems concerning the delay and quality requirements of live video distribution. In this paper, we consider a pull-based P2P live video streaming system where the video is disseminated over an overlay network. We propose a mesh-based overlay construction mechanism that enhances the received video quality while minimizing, as much as possible, the play-out delay. The principal feature provides each newcomer with a set of neighbors holding almost the same video segments and enough available transmission capacity to deal with its requests. A particular algorithm has been designed to estimate a peer's available capacity. Simulation results show the efficiency of our mechanism in heterogeneous systems.",2011,0, 5673,A public cryptosystem from R-LWE,"Recently, Vadim Lyubashevsky et al. built the LWE problem on rings and proposed a public cryptosystem based on R-LWE, which, to a certain extent, overcame the drawback of the large public key in this kind of scheme, but they did not offer detailed parameter selections or performance analysis. In this paper, an improved scheme is proposed in which a ring polynomial vector is shared, making the public key as small as 1/m of that of the original scheme in multi-user environments.
In addition, we introduce a parameter r to control both the private key space size and the decryption error probability, which greatly enhances flexibility and practicality. The correctness, security and efficiency are analyzed in detail, the choice of parameters is studied, and concrete parameters are finally recommended for the new scheme.",2011,0, 5674,A fast fuzzy stator condition monitoring algorithm using FPGA,"This paper presents a new methodology for real-time detection of stator faults in induction motors. It is based on the evaluation of the magnitudes of the three-phase current signals. The severity of a fault is indicated by the designed fuzzy system. The algorithm consists of two stages: feature extraction and classification. All acquired current signals are represented in the IEEE-754 single precision floating point format. In the feature extraction stage, floating-point based features are obtained and then converted to integer format. In the classification stage, all coding is written in VHDL using the integer data type. The proposed method classifies stator-related faults at higher speed and with better accuracy than the conventional offline method. This is the first time that an FPGA-based fuzzy system has been applied to induction motor faults. With the proposed method, the motor condition is clearly identified as either healthy or damaged, allowing one to perform simple real-time detection. Only a light computation load is required to perform the calculations of the algorithm. In addition, the implemented FPGA design allows system reconfiguration to attain specific measurement conditions on different types of motors without hardware modifications.",2011,0, 5675,Application of virtual instrument techonoly in electric courses teaching,"This paper analyzes the problems existing in the practical teaching of electric courses and puts forward a new approach: applying virtual instrument (VI) technology in these courses. After a brief introduction to LabVIEW's VI design capabilities, an example designed in LabVIEW to emulate and detect harmonic signals is described in detail. The example has been used in a power quality course and has obtained good results. As more VI-based examples are designed and used in electric courses, they can promote the combination of theory and practice and improve the teaching level of these courses.",2011,0, 5676,Using Residue Number Systems for improving QoS and error detection & correction in Wireless Sensor Networks,"Wireless Sensor Networks have the potential to significantly enhance our ability to monitor and interact with our physical environment. Realizing fault-tolerant operation is critical to the success of WSNs. The integrity of data has tremendous effects on the performance of any data acquisition system. Noise and other disturbances can often degrade the information or data acquired from these systems. Devising a fault-tolerant mechanism in wireless sensor networks is very important due to the construction and deployment characteristics of these low-powered sensing devices. Moreover, due to the low computation and communication capabilities of the sensor nodes, the fault-tolerant mechanism should have a very low computation overhead.
In this paper, we present a novel method to improve performance and fault detection and correction in Wireless Sensor Networks by using Residue Number Systems.",2011,0, 5677,Localized approach to distributed QoS routing with a bandwidth guarantee,"Localized Quality of Service (QoS) routing has recently been proposed as a promising alternative to the currently deployed global routing schemes. In localized routing schemes, routing decisions are taken solely based on locally collected statistical information rather than global state information. This approach significantly reduces the overheads associated with maintaining global state information at each node, which in turn improves the overall routing performance. In this paper we introduce a Localized Distributed QoS Routing algorithm (LDR), which integrates the localized routing scheme into distributed routing. We compare the performance of our algorithm against another existing localized routing algorithm, namely Credit Based Routing (CBR), and against the contemporary global routing algorithm, the Widest Shortest Path (WSP). With the aid of simulations, we show that the proposed algorithm outperforms the others in terms of the overall network blocking probability.",2011,0, 5678,Performance comparison of Multiple Description Coding and Scalable video coding,"For a video server, providing a good quality of service to highly diversified users is a challenging task because different users have different link conditions and different quality requirements and demands. Multiple Description Coding (MDC) and Scalable video coding (SVC) are two technical methods for quality adaptation that operate over a wide range of quality of service under heterogeneous requirements. Both are techniques for coding a video sequence in such a way that multiple levels of quality can be obtained depending on the parts of the video bit stream that are received. For Scalable video coding, special protection is made for the base layer using Forward Error Protection, while the streams (descriptions) in multiple description coding have been tested to simulate the advantages of diversity systems where each sub-stream has an equal probability of being correctly decoded. In this paper, the performance comparison of the two coding approaches is made by using DCT coefficients in generating the base layer and enhancement layers for SVC and the descriptions for MDC, with respect to their respective achievements in image quality and compression ratio. Simulation results show that MDC outperforms SVC in both cases.",2011,0, 5679,ANN observer for on-line estimation of synchronous generator dynamic parameters,"This paper presents a new method for implementing Artificial Neural Network (ANN) observers for estimating and identifying synchronous generator dynamic parameters, based on the extraction of one statistical feature from the operating data using measurements obtained from time-zone information. The data required for training the neural network observers are obtained through off-line simulations of a synchronous generator operating in a one-machine-infinite-bus environment. Optimal components of the patterns are segregated from many learning patterns based on a new method called ""normalized variance"". Nominal values of parameters are used as a deviance index in the machine model.
Finally, the neural network is tested using online simulated measurements in order to estimate and identify the synchronous generator dynamic parameters.",2011,0, 5680,Statistical prediction modeling for software development process performance,"With the advent of the information age and more intense competition among IT companies, it has become more important to assess the quality of software development processes and products not only by measuring outcomes but also by predicting them. On the basis of analysis and experiments on software development processes and products, a study of process performance modeling has been conducted by statistically analyzing past and current process performance data. In this paper, we present statistical prediction modeling for software development process performance. To predict delivered defects effectively, a simple case study is illustrated, and several suggestions on planning and controlling management derived from this model are analyzed in detail. Finally, conclusions and a discussion of future research considerations are presented.",2011,0, 5681,High-precision detection device of motor speed sensor based image recognition research,"A device is developed in this paper to detect whether a motor speed sensor meets its technical specifications; it is able to display the current actual speed in real time and to perform automatic centering. In order to achieve high-precision automatic centering, image recognition technology is used in the device, and an ARM processor and servo control system are used for detection, recognition, high-precision position control and high-speed operation. The device provides strong protection for speed sensors entering the market, gives a solid technical basis for motor manufacturing and motor efficiency, and offers reliable technical support and quality supervision for industrial measurement and quality inspection departments in China.",2011,0, 5682,A novel method of topic detection and tracking for BBS,"Topic Detection and Tracking (TDT) has been studied for years, but most existing research is oriented to news web pages. Compared to news web pages, texts in Bulletin Board Systems (BBS) are more complicated and filled with user participation. In this paper, we propose a novel method of TDT for BBS, which mainly includes a representative post selection procedure based on post quality ranking and an efficient topic clustering algorithm based on a candidate topic set. Experimental results demonstrate that our method significantly improves the performance of TDT in the BBS environment in both accuracy and time complexity.",2011,0, 5683,Classifying feature description for software defect prediction,"To overcome the limitation of numeric feature description of software modules in software defect prediction, we propose a novel module description technology, which employs classifying features, rather than numerical features, to describe software modules. First, we construct an independent classifier on each software metric. Then the classification results for each feature are used to represent every module. We apply two different feature classifier algorithms (based on the mean criterion and the minimum error rate criterion, respectively) to obtain the classifying feature description of software modules. By using the proposed description technology, the discrimination of each metric is distinctly enlarged.
Also, the classifying feature description is simpler than the numeric description, which accelerates prediction model learning and reduces the storage space required for massive data sets. Experimental results on four NASA data sets (CM1, KC1, KC2 and PC1) demonstrate the effectiveness of the classifying feature description, and our algorithms can significantly improve the performance of software defect prediction.",2011,0, 5684,Design of an interactive 3D medical visualization system,"Recent advances in computer technologies, in terms of both hardware and software, have made the development of an interactive 3D medical visualization system viable. Volume rendering, as an important visualization technique, can truly represent the information within 3D datasets. However, it lacks desirable interactivity. In order to deal with this problem, we introduce an interactive framework capable of medical image visualization based on high-quality volume rendering. The framework implements various interactive functions with volume rendering techniques, such as displaying arbitrary cross sections and picking objects in 3D space. The system is evaluated on 4 consecutive clinical patient datasets from Computed Tomography Angiography (CTA) scans, and its performance is assessed by experienced surgeons.",2011,0, 5685,A novel method based on adaptive median filtering and wavelet transform in noise images,"An image is often corrupted by visible or invisible noise while being collected, acquired, coded and transmitted. Noise severely impairs the quality of the received image and may cause major problems for further image processing. In order to improve the quality of an image, the noise must be removed during preprocessing, and the important signal features should be retained as much as possible. The methodology of image denoising is studied in this paper based on median filtering and wavelet theory, in the setting of additive salt-and-pepper noise or white Gaussian noise. In the first phase, Canny edge detection is applied to the noisy image to obtain its basic outline; in the second phase, an improved adaptive median filter (with an adaptive variational threshold value) is used to remove salt noise; in the third phase, a Coiflet wavelet system is used to remove pepper noise; finally, the processed images are combined to obtain the final image.",2011,0, 5686,The research on the breakup of river ice cover prediction based on artificial neural network model,"According to the urgent need for ice protection and reduction in the Inner Mongolia section, a prediction model for the breakup of the river ice cover is set up using artificial neural network theory. Focusing on BaYanGaoLe, SanHuHeKou and TouDaoGuai, the specific breakup dates of the river ice cover are forecast; the predictive results meet ice prevention requirements, attain the prediction standards, and provide references for command scheduling and decision making in the Inner Mongolia section of the Yellow River.",2011,0, 5687,Set diagnosis on the power station's devices by fuzzy neural network,"The current situation in energy systems consisting of thermal power stations calls for diagnosis of generator fault sets without breakdown maintenance. In ageing thermal power plants, the main driver for retrofitting or upgrading large turbo generators has been poor reliability associated with increased maintenance costs.
With the introduction of greater competition in the power market and the subsequent introduction of new environmental constraints in our energy system, extending the lifetime of installed plants while improving overall performance through upgrades or retrofits of main components is today a valuable option. Fault-set diagnosis during the plant's lifetime and predictive maintenance can be defined as collecting information from machines as they operate to aid in making decisions about their health, repair and possible improvements, in order to reach maximum reliability before any unplanned breakdown. When a turbo-generator fault set occurs, sensors should be placed on its bearings to detect the vibration signal and extract fault symptoms, but the relationships between faults and fault symptoms are too complex to obtain sufficient accuracy for industrial application. In this paper, a new diagnosis method based on a fuzzy neural network is proposed, and a fuzzy neural network system is structured by associating fuzzy set theory with neural network technology.",2011,0, 5688,Depth map optimization based on adaptive chrominance image segmentation,"The development of the Free Viewpoint Video (FVV) technique is one of the most promising multimedia processing fields in recent years. Depth map prediction is the key method in FVV, and there are many methods for estimating the depth map. Among them, the combined temporal and inter-view prediction method is the most practical, as it can save much data processing bandwidth without additional hardware resources or computation time. However, a depth map based on block estimation suffers from blocking artifacts and prediction noise, especially at object boundaries and in large continuous areas. This paper presents an adaptive chrominance image segmentation method to process the depth map prediction noise obtained by the combined temporal and inter-view prediction method. Experimental results show that this method can enhance the depth map quality efficiently with lower computational complexity, and is practical for hardware circuit realization and high-performance real-time processing.",2011,0, 5689,R-largest order statistics for the prediction of bursts and serious deteriorations in network traffic,"Predicting bursts and serious deteriorations in Internet traffic is important. It enables service providers and users to define robust quality of service metrics to be negotiated in service level agreements (SLAs). Traffic exhibits the heavy-tail property, for which extreme value theory is the perfect setting for analysis and modeling. Traditionally, methods from EVT, such as block maxima and peaks over threshold, were applied, each treating a different aspect of the prediction problem. In this work, the r-largest order statistics method is applied to the problem. This method is an improvement over the block maxima method and makes more efficient use of the available data by selecting the r largest values from each block to model. As expected, the quality of estimation increased with the use of this method; however, the fit diagnostics cast some doubt on the applicability of the model, possibly due to the dependence structure in the data.",2011,0, 5690,Project Management Methodologies: Are they sufficient to develop quality software,"This paper considers whether the use of the PRINCE project management methodology is sufficient to achieve quality information systems without the additional use of effort estimation methods and measures that can predict and help control quality during system development.
How to make sure the management of software quality to be effective is a critical issue that software development organizations have to face. Software quality management is a series of activities to direct and control the software quality, includes establishment of the quality policy and quality goals, quality planning, quality control, quality assurance and quality improvement. Professional bodies have paid more attention to software standards. Meanwhile, many countries are participating in significant consolidation and coordination effort.",2011,0, 5691,Automated generation of FRU devices inventory records for xTCA devices,"The Advanced Telecommunications Computing Architecture (ATCA) and Micro Telecommunications Computing Architecture (μTCA) standards, intended for high-performance applications, offer an array of features that are compelling from the industry use perspective, like high reliability (99,999%) or hot-swap support. The standards incorporate the Intelligent Platform Management Interface (IPMI) for the purpose o advanced diagnostics and operation control. This standard imposes support for non-volatile Field Replaceable Unit (FRU) information for specific components of an ATCA/μTCA-based system, which would typically include description of a given component. The Electronic Keying (EK) mechanism is capable of using this information for ensuring more reliable cooperation of the components. The FRU Information for the ATCA/μTCA implementation elements may be of sophisticated structure. This paper focuses on a software tool facilitating the process of assembling this information, the goal of which is to make it more effective and less error-prone.",2011,0, 5692,"Virtual instrument based online monitoring, real-time detecting and automatic diagnosis management system for multi-fiber lines","Along with the popularity of the optical fiber communication, virtual instrument based online monitoring real-time detecting and automatic diagnosis management system on multi-fiber is proposed in this paper. To manage fiber lines and landmarks, simplified landmark map are presented based on the landmark list. Multi-fiber lines are online monitored, which may have the different or same parameters. When the optical power of any monitored line exceeds the set threshold, the subsystem gives a low power alarm, detecting subsystem will be started automatically to detect alarmed line. After data processing and analysis of detection results, then draws the result curve in the virtual instrument panel. User can locate plot of the curve, and zoom in event curve based on the event list. It utilizes detection results, event list, landmark list and simplified landmarks map synthetically, and presents comprehensive diagnosis conclusion of detection based on the map. To avoid fault, the system predicts fault in future through the analysis of a period of the historical detection result. This system has the advantages of excellent stability, powerful analysis, friendly interface, and convenient operation.",2011,0, 5693,Fault diagnosis of automobile engine based on support vector machine,"Support vector machine (SVM) based on classification is applied for fault diagnosis of the automotive engine. The basic idea is to identify the information by using the trained SVM model to classify new fault samples. The data from the engine simulation model by AMESim software are fault features extracted, and these fault characteristic parameters have statistical property and specific physical meaning. 
Principal component analysis (PCA) is used to reduce the dimensions and redundancy of the data, and then these data are normalized as the input of SVM. The proposed method achieves accurate fault classification because SVM has good performance of classification and generalization ability, which is verified by the result of combining MATLAB/SIMULIK and AMESim. And the simulation results indicates that the proposed SVM based fault diagnosis method has achieved the better performance than the Artificial Neural Network, meeting the requirements of real-time diagnosis of the automotive engine.",2011,0, 5694,Operation parameters optimization of butadiene extraction distillation based on neural network,"In this paper, based on the material balance, we used Aspen plus software to make sensitivity analysis of separation performance of the key component and the operational parameters. We quantitatively analyze the influence between the component parameters and the process. At last, we used the neural network to forecast the solvent ratio on diffident C4 feed and calculated the optimization solvent. At the end of the paper, we used the actual production data to test the validity of the model. On the view of reducing the energy consumption and ensuring the product quality, the research result told us that solvent ratio could be reduced nearly one percentage point.",2011,0, 5695,Fuzzy models with GIS for water quality diatom-indicator classification,"The level of certain or set of physico-chemical parameter(s) determinates the optimal living condition for the organism. These thresholds in biology are express using categories of classes. One such category which consists from five classes is water quality (WQ) category based on Saturated Oxygen. Due to natural or human-made pressure on the ecosystem, bottom line as that we have to monitor the levels. Many of the diatoms are very sensitive to certain parameters and for these diatoms ecological indicator reference are known in the literature. But, there are many other unknown ecological references to be identified especially for the newly discovered diatoms. The diatom measurement data is noisy, usually comprises from several dimensions and also has non-linear relationship between the environmental parameters and the diatoms abundance. All these properties of the diatom dataset do not meet the assumptions of conventional statistical procedures. To overcome these problems, this research aims to a fuzzy classification approach to describe this diatom-indicator relationship. Using the approach, each diatom will be classified and together with the Geographic Information System (GIS) software, will be represented on map based on Saturated Oxygen parameter. The results from the model are verified with the known ecological reference found in the literature. Even more important, the proposed method has added some new ecological references that do not exist. This fuzzy approach can be modified for defining new WQ category classes based not only for Saturated Oxygen parameters, but also for metals and other physico-chemical parameters.",2011,0, 5696,H.264/AVC rate control scheme with frame coding complexity optimized selection,"Rate control regulates the output bit rate of a video encoder to obtain optimum visual quality within the available network bandwidth and maintain buffer fullness within a specified tolerance range. The quadratic rate-distortion model is widely adopted in H.264/AVC rate control. 
However the quadratic model is not precise in terms of the estimation and computation of frame coding complexity. To achieve excellent rate control performance, the paper proposes a novel rate control scheme based on the optimized selection for frame coding complexity. Simulation results indicate that, compared with JM13.2, the new rate control algorithm gets more accurate rate regulation, provides robust buffer control, and efficiently reduces frame skipping. Furthermore, the PSNR gains for QCIF sequences are high up to 0.99dB under all inter-frame encoding formats.",2011,0, 5697,Effects of hydraulic pressure loading paths on the forming of automobile panels by hydro-mechanical deep drawing based on numerical simulation,"As a part of automobile panel, outer door has a few characteristics, such as complex shape, big structure size and high quality forming needs. As a result, outer door needs multiple process and high costs by conventional drawing. In order to get rid of crack and forming deficiency in the stamping process, the numerical simulations of hydro-mechanical deep drawing (HDD) process was carried out. By using large nonlinear dynamic explicit analytical software ETA/Dynaform5.6, the effects of chamber pressure variations on the formability, and the effects of different loading paths on the wall-thickness distribution of the part were analyzed. The results indicate that under the reasonable chamber pressure loading path, the HDD can effectively control crack and forming deficiency, improve the thickness distribution uniformity, increase the die fittingness and stiffness. Complicated shapes car panel can be formed by one process with high quality.",2011,0, 5698,An activity-based genetic algorithm approach to multiprocessor scheduling,"In parallel and distributed computing, development of an efficient static task scheduling algorithm for directed acyclic graph (DAG) applications is an important problem. The static task scheduling problem is NP-complete in its general form. The complexity of the problem increase when task scheduling is to be done in a heterogeneous environment, where the processors in the network may not be identical and take different amounts of time to execute the same task. This paper presents an activity-based genetic task scheduling algorithm for the tasks run on the network of heterogeneous systems and represented by Directed Acyclic Graphs (DAGs). First, a list scheduling algorithm is incorporated in the generation of the initial population of a GA to represent feasible operation sequences and diminish coding space when compared to permutation representation. Second, the algorithm assigns an activity to each task which is assigned on the processor, and then the quality of the solution will be improved by adding the activity and the random probability in the crossover and mutation operator. The performance of the algorithm is illustrated by comparing with the existing effectively scheduling algorithms.",2011,0, 5699,Improving system health monitoring with better error processing,"To help identify unexpected software events and impending hardware failures, developers typically incorporate error-checking code in their software to detect and report them. Unfortunately, implementing checks with reporting capabilities that give the most useful results comes at a price. 
Such capabilities should report the exact nature of impending failures and additionally limit reporting to only the first occurrence of an error to prevent flooding the error log with the same message. They must report when an existing error or fault is replaced by another error of a different nature or value. They must recognize what makes occasional faults allowable and they must reset themselves upon recovery from a reported failure so the checking process can begin anew. They must also report recovery from previously reported failures that appear to have healed themselves. Since the price associated with providing all these features is limited by budget and schedule, system reliability and health monitoring often suffer. However, there are practical techniques that can simplify the effort associated with incorporating such error detection and reporting. When done properly, they can greatly improve system reliability and health monitoring by finding potentially hidden problems during development and can also greatly improve system maintainability by providing concise running descriptions of problems when things go wrong particularly when minor errors might otherwise go unnoticed. In addition, preventative maintenance can be greatly aided by applying error detection techniques to performance monitoring in the absence of errors. Many of the techniques described in this paper take advantage of simple classes to do bookkeeping tasks such as updating and tracking statistical analysis of errors and error reporting. The paper highlights several of these classes and gives examples from actual applications.",2011,0, 5700,A novel and fast moving window based technique for transformer differential protection,"In this paper, a new technique is presented to discriminate between magnetizing inrush currents and internal fault currents in power transformers. The proposed technique uses two moving windows which both of them estimate magnitude of differential current of power transformer by different methods. The first moving window method is based on full-cycle Fourier algorithm (long window) and the second one is originated from least error square method that has five sample point input data (short window). The proposed technique presents a new criterion for discrimination between inrush currents and internal fault currents using the difference between output values of the two moving windows. Obtained results demonstrate precise operation of the proposed algorithm for different conditions such as CT saturation condition. PSCAD/EMTDC software is used for evaluating performance of the proposed algorithm.",2011,0, 5701,Voltage sag and swell compensation with DVR based on asymmetrical cascade multicell converter,"This paper deals with a dynamic voltage restorer (DVR) as a solution to compensate the voltage sags and swells and to protect sensitive loads. In order to apply the DVR in the distribution systems with voltage in range of kilovolts, series converter as one of the important components of DVR should be implemented based on the multilevel converters which have the capability to handle voltage in the range of kilovolts and power of several megawatts. So, in this paper a configuration of DVR based on asymmetrical cascade multicell converter is proposed. The main property of this asymmetrical CM converter is increase in the number of output voltage levels with reduced number of switches. 
Also, the pre-sag compensation strategy and the proposed voltage sag/swell detection and DVR reference voltage determination methods based on the synchronous reference frame (SRF) are adopted as the control system. The proposed DVR is simulated using PSCAD/EMTDC software and simulation results are presented to validate its effectiveness.",2011,0, 5702,Numerical simulation of springback base on formability index for auto panel surface,"In order to overcome the shortcomings of traditional measurement of auto body panels, we propose a method based on a formability index. According to the above-mentioned programs and the springback defect diagnosis, we developed a CAE module for springback defect analysis in the VC++ environment, and solved problems that traditional sheet metal forming CAE software cannot accurately predict. A forming process of a U-beam is then simulated by applying the proposed method, which shows that the method is accurate; we then introduce some methods to optimize the adjustment amount of metal flow and the stamping dies for springback. Some suggestions are given by investigating the adjustment amount and the modification of the stamping die.",2011,0, 5703,Keynote talk 2: How to build an industrial R&D center in Vietnam: A case study,"Summary form only given. To be competitive in today's market, the IT industry faces many challenges in the development and maintenance of enterprise information systems. Engineering these large-scale systems efficiently requires making decisions about a number of issues. In addition, customers' expectations imply continuous software delivery in predictable quality. The operation of such systems demands transparency of the software in regard to lifecycle, change and incident management as well as cost efficiency. Addressing these challenges, we learned how to benefit from traditional industries. Although the IT business gladly calls itself an industry, the industrialization of software engineering in most cases moves on a rather modest level. Industrialization means not only to build a solution or product on top of managed and well-defined processes, but also to have access to structured information about the current conditions of manufacturing at any time. Comparable to the test series and assembly lines of the automobile industry, each individual component and each step from the beginning of manufacturing up to the final product should be equipped with measuring points for quality and progress. Going one step further, the product itself, after it has left the factory, should be able to continuously provide analytic data for diagnostic reasons. Information is automatically collected and builds the basic essentials for process control, optimization and continuous improvement of the software engineering process. This presentation shows, by means of a practical experience report, how AdNovum managed to build its software engineering based on a well-balanced system of processes, continuous measurement and control, as well as a healthy portion of pragmatism. We implemented an efficient and predictable software delivery pipeline",2011,0, 5704,HPLC-fingerprint study on medicinal Folium Fici Microcarpae of Guangxi,"The HPLC fingerprint of Folium Fici Microcarpae from Guangxi was studied in this paper. By reverse-phase high performance liquid chromatography, the herbal leaves of Ficus microcarpa taken from 10 sites in Guangxi province were analyzed.
The analysis was conducted under the following conditions: an Ultimate ODS (5 μm, 4.6 mm × 250 mm) chromatographic column and acetonitrile-0.05% phosphoric acid solution were selected for gradient elution; detection wavelength was at 280 nm; flow rate at 0.5 mL/min; and analysis period of 120 min. Twelve chromatographic peaks were determined through similarity analysis under the chosen chromatographic conditions to identify the characteristic fingerprint peaks of Folium Fici Microcarpae. The method used to establish the fingerprints is deemed of good stability and reproducibility. Not only is it applicable for quality control of Folium Fici Microcarpae, but also an effective means for fundamental study of valid substances.",2011,0, 5705,A Fast and Effective Control Scheme for the Dynamic Voltage Restorer,"A novel control scheme for the dynamic voltage restorer (DVR) is proposed to achieve fast response and effective sag compensation capabilities. The proposed method controls the magnitude and phase angle of the injected voltage for each phase separately. Fast least error squares digital filters are used to estimate the magnitude and phase of the measured voltages. The utilized least error squares estimated filters considerably reduce the effects of noise, harmonics, and disturbances on the estimated phasor parameters. This enables the DVR to detect and compensate voltage sags accurately, under linear and nonlinear load conditions. The proposed control system does not need any phase-locked loops. It also effectively limits the magnitudes of the modulating signals to prevent overmodulation. Besides, separately controlling the injected voltage in each phase enables the DVR to regulate the negative- and zero-sequence components of the load voltage as well as the positive-sequence component. Results of the simulation studies in the PSCAD/EMTDC software environment indicate that the proposed control scheme 1) compensates balanced and unbalanced voltage sags in a very short time period, without phase jump and 2) performs satisfactorily under linear and nonlinear load conditions.",2011,0, 5706,Governance and Cost Reduction through Multi-tier Preventive Performance Tests in a Large-Scale Product Line Development,"Experience has shown that maintaining software system performance in a complex product line development is a constant challenge, already achieved performance is often degraded over time because proper quality gates are rarely defined or implemented. The established practice of performance verification tests on an integrated software baseline is essential to ensure final quality of the delivered products, but is late if performance degradations already crept in. Maintenance of performance in software baselines requires an additional preventive approach. The faulty software changes that degrade performance can be identified (performance quality gates) before these changes can flow into the baseline and subsequently get rejected. This ensures that the software baseline maintains a consistent performance leading to more predictable schedules and development costs. For a complex software family involving parallel and dependent sub-projects of domain platforms and end user applications, these performance quality gates need to be established at multiple levels.",2011,0, 5707,Realtime diagnostic prognostic solution for life cycle management of thermomechanical system,Data based approach and methodology for real time diagnosis and prognosis solutions for thermomechanical systems is discussed. 
Coated turbine blade operation is emulated to sensor online temperature data as the real-time inputs for the software code developed. An algorithm is presented first and extended sampling based statistical hypothesis tests are used for anomaly detection tests. Paired t-test and rank sum hypotheses test are found to be appropriate for different data set combinations. Matlab statistical package is used for the software code. The algorithm and methodology work well with laboratory temperature data and seeded faults. Few limitations of the developed system code are identified.,2011,0, 5708,Unit test case design metrics in test driven development,"Testing is a validation process that determines the conformance of the software's implementation to its specification. It is an important phase in unit test case design and is even more important in object-oriented systems. We want to develop test case designing criteria that give confidence in unit testing of object-oriented system. The main aim testing can be viewed as a means of assessing the existing quality of the software to probe the software for defect and fix them. We also want to develop and executing our test case automatically since this decreases the effort (and cost) of software development cycle (maintenance), and provide re-usability in Test Driven Development framework. We believe such approach is necessary for reaching the levels of confidence required in unit testing. The main goal of this paper is to assist the developers/testers to improve the quality of the ATCUT by accurately design the test case for unit testing of object-oriented software based on the test results. A blend of unit testing assisted by the domain knowledge of the test case designer is used in this paper to improve the design of test case. This paper outlines a solution strategy for deriving Automated Test Case for Unit Testing (ATCUT) metrics from object-oriented metrics via TDD concept.",2011,0, 5709,Software reliability growth models based on Marshall-Olkin generalized exponential families of distributions,"Software has been becoming an important and inevitable tool in our modern day to day life. It finds numerous applications such as space, telecommunications technology, military, nuclear plants, air traffic and medical monitoring control. To meet the continuing demand for high quality software, an enormous multitude of Software reliability Growth models have been proposed and adopted in recent years. A new family of distributions has been introduced by incorporating an additional parameter and applied to yield a new two parameter extension of exponential distribution [1]. Parikh et al. [2] call such a family of distributions as Marshall and Olkin Generalized Exponential (MOGE) distributions and studied its inferential problems. In this paper we propose NHPP software reliability Growth models based on MOGE and validate them through different metrics using real data sets.",2011,0, 5710,An improved model of software reliability growth under time-dependent learning effects,"Over the last two decades, various software reliability growth models (SRGM) have been proposed, and there has been a gradual but marked shift in the balance between software reliability and software testing cost in recent years. 
Chiu and Huang (2008) provided a Software Reliability Growth Model from the Perspective of Learning Effects, which is able to reasonably describe the S-shaped and exponential-shaped types of behaviors simultaneously, and offers better performance when fitting different data with consideration of the learning effects. However, this earlier model assumes that the learning effects are constant. In contrast, this paper discusses a software reliability growth model with time-dependent learning effects.",2011,0, 5711,A System for Nuclear Fuel Inspection Based on Ultrasonic Pulse-Echo Technique,"Nuclear Pressurized Water Reactor (PWR) technology has been widely used for electric energy generation. The follow-up of the plant operation has pointed out the most important items to optimize the safety and operational conditions. The identification of nuclear fuel failures is in this context. The adoption of this operational policy is due to recognition of the detrimental impact that fuel failures have on operating cost, plant availability, and radiation exposure. In this scenario, the defect detection in rods, before fuel reloading, has become an important issue. This paper describes a prototype of an ultrasonic pulse-echo system designed to inspect failed rods (with water inside) from PWR. This system combines development of hardware (ultrasonic transducer, mechanical scanner and pulser-receiver instrumentation) as well as of software (data acquisition control, signal processing and data classification). The ultrasonic system operates at center frequency of 25 MHz and failed rod detection is based on the envelope amplitude decay of successive echoes reverberating inside the clad wall. The echoes are classified by three different methods. Two of them (Linear Fisher Discriminant and Neural Network) have presented 93% of probability to identify failed rods, which is above the current accepted level of 90%. These results suggest that a combination of a reliable data acquisition system with powerful classification methods can improve the overall performance of the ultrasonic method for failed rod detection.",2011,0, 5712,Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities,"Recently, a number of tools for automated code scanning came in the limelight. Due to the significant costs associated with incorporating such a tool in the software lifecycle, it is important to know what defects are detected and how accurate and efficient the analysis is. We focus specifically on popular static analysis tools for C code defects. Existing benchmarks include the actual defects in open source programs, but they lack systematic coverage of possible code defects and the coding complexities in which they arise. We introduce a test suite implementing the discussed requirements for frequent defects selected from public catalogues. Four open source and two commercial tools are compared in terms of their effectiveness and efficiency of their detection capability. A wide range of C constructs is taken into account and appropriate metrics are computed, which show how the tools balance inherent analysis tradeoffs and efficiency. The results are useful for identifying the appropriate tool, in terms of cost-effectiveness, while the proposed methodology and test suite may be reused.",2011,0, 5713,Predicting Timing Performance of Advanced Mechatronics Control Systems,"Embedded control is a key product technology differentiator for many high-tech industries, including ASML. 
The strong increase in complexity of embedded control systems, combined with the occurrence of late changes in control requirements, results in many timing performance problems showing up only during the integration phase. The fallout of this is extremely costly design iterations, severely threatening the time-to-market and time-to-quality constraints. This paper reports on the industrial application at ASML of the Y-chart method to attack this problem. Through the largely automated construction of executable models of a wafer scanner's mechatronics control application and platform, ASML was able to obtain high-level overview early on in the development process. The system wide insight in timing bottlenecks gained this way resulted in more than a dozen improvement proposals yielding significant performance gains. These insights also led to a new development roadmap of the mechatronics control execution platform.",2011,0, 5714,On the Consensus-Based Application of Fault Localization Techniques,"A vast number of software fault localization techniques have been proposed recently with the growing realization that manual debugging is time-consuming, tedious and error-prone, and fault localization is one of the most expensive debugging activities. While some of these techniques perform better than one another on a large number of data sets, they do not do so on all data sets and therefore, the actual quality of fault localization can vary considerably by using just one technique. This paper proposes the use of a consensus-based strategy that combines the results of multiple fault localization techniques, to consistently provide high quality performance, irrespective of data set. Empirical evidence based on case studies conducted on three sets of programs (the seven programs of the Siemens suite, and the gzip and make programs) and three different fault localization techniques suggests that the consensus-based strategy holds merit and generally provides close to the best, if not the best, results. Additionally the consensus-based strategy makes use of techniques that all operate on the same set of input data, minimizing the overhead. It is also simple to include or exclude techniques from consensus, making it an easily extensible, or alternatively, tractable strategy.",2011,0, 5715,Ontology-Based Reliability Evaluation for Web Service,"Reliability has become a major quality metric for Web service. However, current reliability evaluation approaches lack a formal semantic representation and the support of incomplete or uncertain information. We propose a Web service reliability ontology (WSRO) serving as a basis to characterize the knowledge of Web service. And based on WSRO, a mapping to the probability graphical model is constructed. The Web service reliability evaluation results are obtained by the causality reasoning. Some evaluation results reveal that our approach is applicable and effective.",2011,0, 5716,Software Reliability Prediction for Open Source Software Adoption Systems Based on Early Lifecycle Measurements,"Various OSS(Open Source Software)s are being modified and adopted into software products with their own quality level. However, it is difficult to measure the quality of an OSS before use and to select the proper one. These difficulties come from OSS features such as a lack of bug information, unknown development schedules, and variable documentations. 
Conventional software reliability models are not adequate to assess the reliability of a software system in which an OSS is being adopted as a new add-on feature because the OSS can be modified while Commercial Off-The-Shelf (COTS) software cannot. This paper provides an approach to assessing the software reliability of OSS adopted software system in the early stage of a software life cycle. We identify the software factors that affect the reliability of software system using the COCOMOII modeling methodology and define the module usage as a module coupling measure. We build the fault count models using the multivariate linear regression and performed the model evaluation. Early software reliability assessment in OSS adoption helps to make an effective development and testing strategies for improving the reliability of the whole system.",2011,0, 5717,Evaluating an Interactive-Predictive Paradigm on Handwriting Transcription: A Case Study and Lessons Learned,"Transcribing handwritten text is a laborious task which currently is carried out manually. As the accuracy of automatic handwritten text recognizers improves, post-editing the output of these recognizers could be foreseen as a possible alternative. Alas, the state-of-the-art technology is not suitable to perform this kind of work, since current approaches are not accurate enough and the process is usually both inefficient and uncomfortable for the user. As alternative, an interactive-predictive paradigm has gained recently an increasing popularity, mainly due to promising empirical results that estimate considerable reductions of user effort. In order to assess whether these empirical results can lead indeed to actual benefits, we developed a working prototype and conducted a field study remotely. Thirteen regular computer users tested two different transcription engines through the above-mentioned prototype. We observed that the interactive-predictive version allowed to transcribe better (less errors and fewer iterations to achieve a high-quality output) in comparison to the manual engine. Additionally, participants ranked higher such an interactive-predictive system in a usability questionnaire. We describe the evaluation methodology and discuss our preliminary results. While acknowledging the known limitations of our experimentation, we conclude that the interactive-predictive paradigm is an efficient approach for transcribing handwritten text.",2011,0, 5718,Towards quantitative software reliability assessment in incremental development processes,"The iterative and incremental development is becoming a major development process model in industry, and allows us for a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method in incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental development and testing, and estimate the associated software reliability measures. 
In a numerical example with a real incremental developmental project data, it is shown that the estimate of software reliability with a specific model can take a realistic value, and that the reliability growth phenomenon can be observed even in the incremental development scheme.",2011,0, 5719,The impact of fault models on software robustness evaluations,"Following the design and in-lab testing of software, the evaluation of its resilience to actual operational perturbations in the field is a key validation need. Software-implemented fault injection (SWIFI) is a widely used approach for evaluating the robustness of software components. Recent research [24, 18] indicates that the selection of the applied fault model has considerable influence on the results of SWIFI-based evaluations, thereby raising the question how to select appropriate fault models (i.e. that provide justified robustness evidence). This paper proposes several metrics for comparatively evaluating fault models's abilities to reveal robustness vulnerabilities. It demonstrates their application in the context of OS device drivers by investigating the influence (and relative utility) of four commonly used fault models, i.e. bit flips (in function parameters and in binaries), data type dependent parameter corruptions, and parameter fuzzing. We assess the efficiency of these models at detecting robustness vulnerabilities during the SWIFI evaluation of a real embedded operating system kernel and discuss application guidelines for our metrics alongside.",2011,0, 5720,Assessing programming language impact on development and maintenance: a study on c and c++,"Billions of dollars are spent every year for building and maintaining software. To reduce these costs we must identify the key factors that lead to better software and more productive development. One such key factor, and the focus of our paper, is the choice of programming language. Existing studies that analyze the impact of choice of programming language suffer from several deficiencies with respect to methodology and the applications they consider. For example, they consider applications built by different teams in different languages, hence fail to control for developer competence, or they consider small-sized, infrequently-used, short-lived projects. We propose a novel methodology which controls for development process and developer competence, and quantifies how the choice of programming language impacts software quality and developer productivity. We conduct a study and statistical analysis on a set of long-lived, widely-used, open source projects - Firefox, Blender, VLC, and MySQL. The key novelties of our study are: (1) we only consider projects which have considerable portions of development in two languages, C and C++, and (2) a majority of developers in these projects contribute to both C and C++ code bases. We found that using C++ instead of C results in improved software quality and reduced maintenance effort, and that code bases are shifting from C to C++. Our methodology lays a solid foundation for future studies on comparative advantages of particular programming languages.",2011,0, 5721,Socio-technical developer networks: should we trust our measurements?,"Software development teams must be properly structured to provide effectiv collaboration to produce quality software. 
Over the last several years, social network analysis (SNA) has emerged as a popular method for studying the collaboration and organization of people working in large software development teams. Researchers have been modeling networks of developers based on socio-technical connections found in software development artifacts. Using these developer networks, researchers have proposed several SNA metrics that can predict software quality factors and describe the team structure. But do SNA metrics measure what they purport to measure? The objective of this research is to investigate if SNA metrics represent socio-technical relationships by examining if developer networks can be corroborated with developer perceptions. To measure developer perceptions, we developed an online survey that is personalized to each developer of a development team based on that developer's SNA metrics. Developers answered questions about other members of the team, such as identifying their collaborators and the project experts. A total of 124 developers responded to our survey from three popular open source projects: the Linux kernel, the PHP programming language, and the Wireshark network protocol analyzer. Our results indicate that connections in the developer network are statistically associated with the collaborators whom the developers named. Our results substantiate that SNA metrics represent socio-technical relationships in open source development projects, while also clarifying how the developer network can be interpreted by researchers and practitioners.",2011,0, 5722,Dealing with noise in defect prediction,"Many software defect prediction models have been built using historical defect data obtained by mining software repositories (MSR). Recent studies have discovered that data so collected contain noises because current defect collection practices are based on optional bug fix keywords or bug report links in change logs. Automatically collected defect data based on the change logs could include noises. This paper proposes approaches to deal with the noise in defect data. First, we measure the impact of noise on defect prediction models and provide guidelines for acceptable noise level. We measure noise resistant ability of two well-known defect prediction algorithms and find that in general, for large defect datasets, adding FP (false positive) or FN (false negative) noises alone does not lead to substantial performance differences. However, the prediction performance decreases significantly when the dataset contains 20%-35% of both FP and FN noises. Second, we propose a noise detection and elimination algorithm to address this problem. Our empirical study shows that our algorithm can identify noisy instances with reasonable accuracy. In addition, after eliminating the noises using our algorithm, defect prediction accuracy is improved.",2011,0, 5723,Bringing domain-specific languages to digital forensics,"Digital forensics investigations often consist of analyzing large quantities of data. The software tools used for analyzing such data are constantly evolving to cope with a multiplicity of versions and variants of data formats. This process of customization is time consuming and error prone. To improve this situation we present DERRIC, a domain-specific language (DSL) for declaratively specifying data structures. This way, the specification of structure is separated from data processing. The resulting architecture encourages customization and facilitates reuse. 
It enables faster development through a division of labour between investigators and software engineers. We have performed an initial evaluation of DERRIC by constructing a data recovery tool. This so-called carver has been automatically derived from a declarative description of the structure of JPEG files. We compare it to existing carvers, and show it to be in the same league both with respect to recovered evidence, and runtime performance.",2011,0, 5724,An industrial case study on quality impact prediction for evolving service-oriented software,"Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.",2011,0, 5725,Topic-based defect prediction: NIER track,"Defects are unavoidable in software development and fixing them is costly and resource-intensive. To build defect prediction models, researchers have investigated a number of factors related to the defect-proneness of source code, such as code complexity, change complexity, or socio-technical factors. In this paper, we propose a new approach that emphasizes on technical concerns/functionality of a system. In our approach, a software system is viewed as a collection of software artifacts that describe different technical concerns/-aspects. Those concerns are assumed to have different levels of defect-proneness, thus, cause different levels of defectproneness to the relevant software artifacts. We use topic modeling to measure the concerns in source code, and use them as the input for machine learning-based defect prediction models. Preliminary result on Eclipse JDT shows that the topic-based metrics have high correlation to the number of bugs (defect-proneness), and our topic-based defect prediction has better predictive performance than existing state-of-the-art approaches.",2011,0, 5726,Pragmatic prioritization of software quality assurance efforts,"A plethora of recent work leverages historical data to help practitioners better prioritize their software quality assurance efforts. However, the adoption of this prior work in practice remains low. In our work, we identify a set of challenges that need to be addressed to make previous work on quality assurance prioritization more pragmatic. 
We outline four guidelines that address these challenges to make prior work on software quality assurance more pragmatic: 1) Focused Granularity (i.e., small prioritization units), 2) Timely Feedback (i.e., results can be acted on in a timely fashion), 3) Estimate Effort (i.e., estimate the time it will take to complete tasks), and 4) Evaluate Generality (i.e., evaluate findings across multiple projects and multiple domains). We present two approaches, at the code and change level, that demonstrate how prior approaches can be more pragmatic.",2011,0, 5727,Using software evolution history to facilitate development and maintenance,"Much research in software engineering have been focused on improving software quality and automating the maintenance process to reduce software costs and mitigating complications associated with the evolution process. Despite all these efforts, there are still high cost and effort associated with software bugs and software maintenance, software still continues to be unreliable, and software bugs can wreak havoc on software producers and consumers alike. My dissertation aims to advance the state-of-art in software evolution research by designing tools that can measure and predict software quality and to create integrated frameworks that helps in improving software maintenance and research that involves mining software repositories.",2011,0, 5728,Test blueprint: an effective visual support for test coverage,"Test coverage is about assessing the relevance of unit tests against the tested application. It is widely acknowledged that a software with a ""good"" test coverage is more robust against unanticipated execution, thus lowering the maintenance cost. However, insuring a coverage of a good quality is challenging, especially since most of the available test coverage tools do not discriminate software components that require a ""strong"" coverage from the components that require less attention from the unit tests. HAPAO is an innovative test covepage tool, implemented in the Pharo Smalltalk programming language. It employs an effective and intuitive graphical representation to visually assess the quality of the coverage. A combination of appropriate metrics and relations visually shapes methods and classes, which indicates to the programmer whether more effort on testing is required. This paper presents the essence of HAPAO using a real world case study.",2011,0, 5729,Fifth international workshop on software clones: (IWSC 2011),"Software clones are identical or similar pieces of code, design or other artifacts. Clones are known to be closely related to various issues in software engineering, such as software quality, complexity, architecture, refactoring, evolution, licensing, plagiarism, and so on. Various characteristics of software systems can be uncovered through clone analysis, and system restructuring can be performed by merging clones. The goals of this workshop are to bring together researchers and practitioners from around the world to evaluate the current state of research and applications, discuss common problems, discover new opportunities for collaboration, exchange ideas, envision new areas of research and applications, and explore synergies with similarity analysis in other areas and disciplines.",2011,0, 5730,zEnergy: An open source project for power quality assessment and monitoring,"In this paper, a new open source project focused on power quality assessment and monitoring for low voltage power systems is presented. 
Power quality (PQ) is a crucial matter for proper and reliable operation of industrial or home electrical appliances. In order to improve PQ techniques, efforts are made to develop smart sensors that can report near real-time data. Proprietary software and hardware on dedicated computers or servers processes these data and shows relevant information through tables or graphics. In this situation, interoperability, compatibility and scalability are not possible because of the lack of open protocols. In our work, we introduce zEnergy, an open source platform for computing, storing and managing all of the information generated from smart sensors. We apply the most up-to-date algorithms developed for power quality, event detection, and harmonic analysis or power metering. zEnergy makes use of cutting-edge web technologies such as HTML5, CSS3 and Javascript to provide user-friendly interaction and powerful capabilities for the analysis, measurement and monitoring of power systems. All software used in our work is open source, running on Linux.",2011,0, 5731,Modular Fault Injector for Multiple Fault Dependability and Security Evaluations,"The increasing level of integration and decreasing size of circuit elements leads to greater probabilities of operational faults. More sensible electronic devices are also more prone to external influences by energizing radiation. Additionally not only natural causes of faults are a concern of today's chip designers. Especially smart cards are exposed to complex attacks through which an adversary tries to extract knowledge from a secured system by putting it into an undefined state. These problems make it increasingly necessary to test a new design for its fault robustness. Several previous publications propose the usage of single bit injection platforms, but the limited impact of these campaigns might not be the right choice to provide a wide fault attack coverage. This paper first introduces a new in-system fault injection strategy for automatic test pattern injection. Secondly, an approach is presented that provides an abstraction of the internal fault injection structures to a more generic high level view. Through this abstraction it is possible to support the task separation of design and test-engineers and to enable the emulation of physical attacks on circuit level. The controller's generalized interface provides the ability to use the developed controller on different systems using the same bus system. The high level of abstraction is combinable with the advantage of high performance autonomous emulations on high end FPGA-platforms.",2011,0, 5732,Compatibility Study of Compile-Time Optimizations for Power and Reliability,"Historically compiler optimizations have been used mainly for improving embedded systems performance. However, for a wide range of today's power restricted, battery operated embedded devices, power consumption becomes a crucial problem that is addressed by modern compilers. Biomedical implants are one good example of such embedded systems. In addition to power, such devices need to also satisfy high reliability levels. Therefore, performance, power and reliability optimizations should all be considered while designing and programming implantable systems. Various software optimizations, e.g., during compilation, can provide the necessary means to achieve this goal. Additionally the system can be configured to trade-off between the above three factors based on the specific application requirements. 
In this paper we categorize previous works on compiler optimizations for low power and fault tolerance. Our study considers differences in instruction count and memory overhead, fault coverage and hardware modifications. Finally, the compatibility of different methods from both optimization classes is assessed. Five compatible pairs that can be combined with few or no limitations have been identified.",2011,0, 5733,Measurement system of α β surface radioactive contamination based on Virtual Training,"A Virtual Training System for α β surface radioactive contamination measurement is constructed based on Virtual Reality. Firstly, 3-D geometric models of the radioactive environment, the surface radioactive device, the α β detector and the surface contamination instrument are constructed, and the general radiation law based on a surface radioactive source is presented. These models are driven in a general program platform using human-computer interaction technology. Furthermore, there are several functions for database sharing, fault diagnosis, examination evaluation and on-time help which realize virtual training of the whole system. The program shows that this training system avoids the “nuclear fear” psychology and improves the staff's training effect.",2011,0, 5734,Design of nuclear measuring instrument fault diagnosis system based on circuit characteristic test,"The circuits of nuclear measuring instruments are very complicated, and the testing and fault inspection of the nuclear measuring instrument system is difficult. To solve this problem, a fault diagnosis system was designed by combining circuit characteristic test technology with virtual instrument, virtual testing, test data analysis and data management technologies. The method based on the circuit characteristic test, which compares the characteristic curves of the tested circuit with those of the corresponding normal circuit, is applied to the fault diagnosis of nuclear measuring instruments. The fault diagnosis system consists of a hardware part and a software part. The hardware part includes a computer, a data acquisition module, an arbitrary waveform generator (AWG) module, an interface circuit module, an electric relay module, etc.; the software is made up of a computer management software module, a control software module, a testing functional software module and so on. With the fault diagnosis system, the circuit characteristics of any circuit module of the nuclear measuring instruments can be tested, the faulted parts of the nuclear measuring instruments can be diagnosed and located quickly, and the diagnosis results are shown on the display.",2011,0, 5735,The study on stability and reliability of the nuclear detector,"Because of the measurement environment and its own performance defects, the poor stability of the nuclear radiation detector used in portable nuclear instruments leads to spectrum drift and a decrease in energy resolution, which reduces the detection efficiency. This results in lower measurement precision and accuracy of the system. Generally, stabilization methods based on hardware and software have been applied, but it is difficult to solve the problem of spectrum drift caused by multiple factors. To solve the nonlinear problems caused by interference sources, the multi-sensor data fusion technique is adopted to design an intelligent nuclear detector with a self-compensation technique. It shows that the results of the project can improve the stability and reliability of fieldwork measurement in nuclear instrument systems.
Except that it will improve the adaptive ability of the nuclear instruments to measuring environment.",2011,0, 5736,Optimal Model-Based Policies for Component Migration of Mobile Cloud Services,"Two recent trends are major motivators for service component migration: the upcoming use of cloud-based services and the increasing number of mobile users accessing Internet-based services via wireless networks. While cloud-based services target the vision of Software as a Service, where services are ubiquitously available, mobile use leads to varying connectivity properties. In spite of temporary weak connections and even disconnections, services should remain operational. This paper investigates service component migration between the mobile client and the infrastructure-based cloud as a means to avoid service failures and improve service performance. Hereby, migration decisions are controlled by policies. To investigate component migration performance, an analytical Markov model is introduced. The proposed model uses a two-phased approach to compute the probability to finish within a deadline for a given reconfiguration policy. The model itself can be used to determine the optimal policy and to quantify the gain that is obtained via reconfiguration. Numerical results from the analytic model show the benefit of reconfigurations and the impact of different reconfigurations applied to three service types, as immediate reconfigurations are in many cases not optimal, a threshold on time before reconfiguration can take place is introduced to control reconfiguration.",2011,0, 5737,Real-time ampacity and ground clearance software for integration into smart grid technology,"Output from a real-time sag, tension and ampacity program was compared with measurements collected on an outdoor test span. The test site included a laser range-finder, load cells and weather station. A fiber optic distributed temperature sensing system was routed along the conductor and thermocouples were attached to the conductor's surface. Nearly 40 million data points were statistically compared with the computer output. The program provided results with a 95% confidence interval for conductor temperatures within ±10°C and sags within ±0.3m for a conductor temperature of 75°C. Test data were also used to determine the accuracy of the IEEE Standard 738 models. The computer program and the Standard 738 transient model gave comparable temperatures for temperatures up to 160°C. Measured temperatures were used to estimate the radial and axial temperature gradients in the ACSR conductor. The effect of insulators and instrumentation attached to the conductor on the local conductor temperature was determined. The real-time rating program is an alternative to installing instrumentation on the conductor for measuring tension, sag or temperature. The program avoids the problems of installing and maintaining expensive instrumentation on the conductor, and it will provide accurate information on the conductor's temperature and ground clearance in real-time.",2011,0, 5738,The Study of OFDM ICI Cancellation Schemes in 2.4 GHz Frequency Band Using Software Defined Radio,"In Orthogonal Frequency Division Multiplexing (OFDM), frequency offset is a common problem that causes inter-carrier-interference (ICI) that degrades the quality of the transmitted signal. Many theoretical studies of the different ICI-cancellation schemes have been reported earlier by many authors. 
The need for experimental verification of the theoretically predicted results in the 2.4 GHz frequency band is important. One of the most widely used systems is Wi-Fi (IEEE 802.11b), which makes use of this frequency band for short range wireless communication with throughput as high as 11 Mbps. In this work, several new ICI cancellation schemes have been tested in the 2.4 GHz band using an open source Software Defined Radio (SDR), namely GNU Radio. The GNU Radio system used in the experiment had two Universal Software Radio Peripheral (USRP N210) modules connected to a computer. Both the USRP units had one daughterboard (XCVR2450) each for transmission and reception of radio signals. The input data to the USRP was prepared in compliance with the IEEE-802.11b specification. The experimental results were compared with the theoretical results of the new Inter-Carrier Interference (ICI) cancellation schemes. The comparison of the results revealed that the new schemes are suitable for high performance transmission. The results of this paper open up new opportunities of using OFDM in the heavily congested 2.4 GHz and 5 GHz bands (WiFi5: IEEE 802.11a) for error free data transmission. The schemes can also be used at other frequencies where channels are heavily congested.",2011,0, 5739,An Online Fault Management Method for Live Media Applications,"It is an important task to provide QoS-insured live media applications by dealing with faults effectively. The paper suggests a classification method to perform online fault management in the Live Media Service Grid (LMSN), which adopts online fault management and can achieve low-overhead online learning, adaptively adjust measurement sampling rates based on the states of monitored components, and maintain a limited size of historical training data to achieve efficient real-time fault control. The method can achieve more efficient fault management than conventional reactive and proactive approaches.",2011,0, 5740,A Novel Multipath Routing Protocol for MANETs,"This paper proposes a novel multipath routing protocol for MANETs. The proposed protocol is a variant of the single path AODV routing protocol. The proposed multipath routing protocol establishes node-disjoint paths that have the lowest delays based on the interaction of many factors from different layers. Other delay-aware MANET routing protocols do not consider the projected contribution of the source node that is requesting a path into the total network load. The implication is that the end-to-end delay obtained through the RREQ is not accurate any more. In contrast to its predecessors, the proposed protocol takes into consideration the projected contribution of the source node in the computation of end-to-end delay. To obtain an accurate estimate of path delay, the proposed multipath routing protocol employs cross-layer communications across three layers (PHY, MAC and Routing) to achieve link and channel awareness, and creates an update packet to keep the up-to-date status of the paths in terms of lowest delay. The performance of the proposed protocol was investigated and compared against the single path AODV and multipath AOMDV protocols through simulation using OPNET.
Results have shown that our multipath routing protocol outperforms both protocols in terms of average throughput, end-to-end delay and packets dropped.",2011,0, 5741,A New Face Detection Method with GA-BP Neural Network,"In this paper, the BP neural network improved by the genetic algorithm (GA) is applied to the problem of human face detection. The GA is used to optimize the initial weights of the BP neural network to make full use of the global optimization of the GA and the accurate local searching of the BP algorithm. Matlab Software and its neural network toolbox are used to simulate and compute. The experiment results show that the GA-BP neural network has a good performance for face detection. Furthermore, compared with the conventional BP algorithm, the GA-BP learning algorithm has more rapid convergence and better assessment accuracy of detection quality.",2011,0, 5742,A self-healing architecture for web services based on failure prediction and a multi agent system,"Failures during web service execution may depend on a wide variety of causes. One of those is loss of Quality of Service (QoS). Failures during web service execution impose heavy costs on service-oriented architecture (SOA). In this paper, we seek to achieve a self-healing architecture to reduce failures in web services. We believe that failure prediction prevents the occurrence of failures and enhances the performance of SOA. The proposed architecture consists of three agents: Monitoring, Diagnosis and Repair. The Monitoring agent measures quality parameters at the communication level and predicts future values of the quality parameters by Time Series Forecasting (TSF) with the help of a Neural Network (NN). The Diagnosis agent analyzes current and future QoS parameter values to diagnose web service failures. Based on its algorithm, the Diagnosis agent detects failures and faults in web service executions. The Repair agent manages repair actions by using a Selection agent.",2011,0, 5743,TMM Appraisal Assistant Tool,"Software testing is an important component of the software development life cycle, which leads to a high-quality software product. Therefore, the software industry has focused on improving the testing process for better performance. The Testing Maturity Model (TMM) is one choice to apply for improving the testing process. It guides organizations with a framework for software testing. The TMM Assessment Model (TMM-AM) is a test process assessment model following the TMM. The TMM-AM consists of processes to assess the test capability of organizations. Currently, each organization has various limitations such as cost, effort, time, and know-how. The assessment process lacks tools to support it. How to help them to improve their testing process is our key point. This paper proposes a supporting tool based on the TMM-AM with which each organization can assess its testing process by itself. The tool can identify the test maturity level of an organization and suggest procedures to reach its goal.",2011,0, 5744,Advances in noise radar design,"Applicability of Noise Radar Technology for the design of advanced radar sensors has been shown. Pulse-coherent mode and high quality SAR imaging is realized with the help of Noise Radar. Software defined Noise Radar and stepped-delay concept were also implemented.
Noise Radar Technology addresses the increasing performance demands placed on both military and civilian radar systems by enhancing their reliable operation in congested, unfriendly or hostile environments without performance degradations caused by electromagnetic interference. We have shown that part of these requirements may already be met today and we hope that our further research will show its new capabilities in the design of advanced radar.",2011,0, 5745,Next-generation massively parallel short-read mapping on FPGAs,"The mapping of DNA sequences to huge genome databases is an essential analysis task in modern molecular biology. Having linearized reference genomes available, the alignment of short DNA reads obtained from the sequencing of an individual genome against such a database provides a powerful diagnostic and analysis tool. In essence, this task amounts to a simple string search tolerating a certain number of mismatches to account for the diversity of individuals. The complexity of this process arises from the sheer size of the reference genome. It is further amplified by current next-generation sequencing technologies, which produce a huge number of increasingly short reads. These short reads hurt established alignment heuristics like BLAST severely. This paper proposes an FPGA-based custom computation, which performs the alignment of short DNA reads in a timely manner by the use of tremendous concurrency for reasonable costs. The special measures to achieve an extremely efficient and compact mapping of the computation to a Xilinx FPGA architecture are described. The presented approach also surpasses all software heuristics in the quality of its results. It guarantees to find all alignment locations of a read in the database while also allowing a freely adjustable character mismatch threshold. In contrast, advanced fast alignment heuristics like Bowtie and Maq can only tolerate small mismatch maximums with a quick deterioration of the probability to detect existing valid alignments. The performance comparison with these widely used software tools also demonstrates that the proposed FPGA computation achieves its guaranteed exact results in very competitive time.",2011,0, 5746,Design of a high performance FPGA based fault injector for real-time safety-critical systems,"Fault injection methods have long been used to assess fault tolerance and safety. However, many conventional fault injection methods face significant shortcomings, which hinder their ability to execute fault injections on target real-time safety-critical systems. We demonstrate a novel fault injection system implemented on a commercial Field-Programmable Gate Array board. The fault injector is unobtrusive to the target system as it utilizes only standardized On-Chip-Debugger (OCD) interfaces present on most current processors. This effort resulted in faults being injected orders of magnitude faster than by utilizing a commercial OCD debugger, while incorporating novel features such as concurrent injection of faults into distinct target processors. The effectiveness of this high performance fault injector was successfully demonstrated on a tightly synchronized commercial real-time safety-critical system used in nuclear power applications.",2011,0, 5747,Smarter Architecture & Engineering: Game changer for requirements management: A position paper,"This paper addresses the challenges of large commercial and government organizations that struggle with requirements understanding, increasing delivery quality and estimating costs.
The paper discusses Smarter Architecture & Engineering (SmarterAE) as an approach that applies scrutiny, metrics and analysis to architecture, requirements management and engineering as is historically given to development, testing and operations. SmarterAE facilitates business agility by capturing the essence of the enterprise business into an actionable business architecture. It treats IT architecture, requirements and engineering as key business processes and accelerates acceptance among users, business stakeholders, development, testing and operations. The approach is based on Best Practices and encourages asset reuse and management.",2011,0, 5748,Networked fault detection of nonlinear systems,"This paper addresses the Fault Detection (FD) problem of a class of nonlinear systems which are monitored via communication networks. A sufficient condition is derived which guarantees exponential mean-square stability of the proposed nonlinear NFD systems in the presence of packet drop, quantization error and unwanted exogenous inputs such as disturbance and noise. A Linear Matrix Inequality (LMI) is obtained for the design of the fault detection filter parameters. Finally, the effectiveness of the proposed NFD technique is extensively assessed by using an experimental testbed that has been built for performance evaluation of such systems with the use of IEEE 802.15.4 Wireless Sensor Networks (WSNs) technology. An algorithm is presented to handle floating point calculus when connecting the WSNs to engineering design software such as Matlab.",2011,0, 5749,Software/Hardware Framework for Generating Parallel Long-Period Random Numbers Using the WELL Method,"The Well Equidistributed Long-period Linear (WELL) algorithm is proven to have better characteristics than the Mersenne Twister (MT), one of the most widely used long-period pseudo-random number generators (PRNGs). In this paper, we propose a hardware architecture for efficient implementation of WELL. Our design achieves a throughput of 1 sample-per-cycle and runs as fast as 449.4 MHz on a Xilinx XC6VLX240T FPGA. This performance is 7.6-fold faster than a dedicated software implementation, and is comparable to an MT hardware generator built on the same device. It takes up 633 LUTs, 537 Flip-Flops and 4 BRAMs, which is only 0.5% of the device. Furthermore, we design a software/hardware framework that is capable of dividing the WELL stream into an arbitrary number of independent parallel sub-streams. With support from software, this framework can obtain speedup roughly proportional to the number of parallel cores. The quality of the random numbers generated by our design is verified by the standard statistical test suites Diehard and TestU01. We also apply our framework to a Monte-Carlo simulation for estimating π. Experimental results verify the correctness of our framework as well as the better characteristics of the WELL algorithm.",2011,0, 5750,Detecting and diagnosing application misbehaviors in ‘on-demand’ virtual computing infrastructures,"Numerous automated anomaly detection and application performance modeling and management tools are available to detect and diagnose faulty application behavior. However, these tools have limited utility in ‘on-demand’ virtual computing infrastructures because of the increased tendencies for the applications in virtual machines to migrate across un-comparable hosts in virtualized environments and the unusually long latency associated with the training phase.
The relocation of the application subsequent to the training phase renders the already collected data meaningless and the tools need to re-initiate the learning process on the new host afresh. Further, data on several metrics need to be correlated and analyzed in real time to infer application behavior. The multivariate nature of this problem makes detection and diagnosis of faults in real time all the more challenging as any suggested approach must be scalable. In this paper, we provide an overview of a system architecture for detecting and diagnosing anomalous application behaviors even as applications migrate from one host to another and discuss a scalable approach based on Hotelling's T2 statistic and MYT decomposition. We show that unlike existing methods, the computations in the proposed fault detection and diagnosis method is parallelizable and hence scalable.",2011,0, 5751,Calibration and validation of software performance models for pedestrian detection systems,"In recent years, road vehicles have seen a tremendous increase on driver assistance systems like lane departure warning, traffic sign recognition, or pedestrian detection. The development of efficient and cost-effective electronic control units that meet the necessary real-time performance for these systems is a complex challenge. Often, Electronic System-Level design tackles the challenge by simulation-based performance evaluation, although, the quality of system-level performance simulation approaches is not yet evaluated in detail. In this paper, we present the calibration and validation of a system-level performance simulation model. For evaluation, an automotive pedestrian detection algorithm is studied. Especially the varying number of pedestrians has a significant impact to the system performance and makes the prediction of execution time difficult. As test cases we used typical sequences and corner cases recorded by an experimental car. Our evaluation results indicate that prediction of execution times with an average error of 3.1% and a maximum error of 7.9% can be achieved. Thereby, simulated and measured execution times of a software implementation are compared.",2011,0, 5752,System-Level Online Power Estimation Using an On-Chip Bus Performance Monitoring Unit,"Quality power estimation is a basis of efficient power management of electronic systems. Indirect power measurement, such as power estimation using a CPU performance monitoring unit (PMU), is widely used for its low cost and area overheads. However, the existing CPU PMUs only monitor the core and cache activities, which result in a significant accuracy limitation in the system-wide power estimation including off-chip memory devices. In this paper, we propose an on-chip bus (OCB) PMU that directly captures on-chip and off-chip component activities by snooping the OCB. The OCB PMU stores the activity information in separate counters, and online software converts counter values into actual power values with simple first-order linear power models. We also introduce an optimization algorithm that minimizes the energy model to reduce the number of counters in the OCB PMU. 
We compare the accuracy of the power estimation using the proposed OCB PMU with real hardware measurement and cycle-accurate system-level power estimation, and demonstrate high estimation accuracy compared with CPU PMU-based estimation method.",2011,0, 5753,A Framework to Manage Knowledge from Defect Resolution Process,"This paper presents a framework for the management, the processing and the reuse, of information relative to defects. This framework is based on the fact that each defect triggers a resolution process in which information about the detected incident (i.e. the problem) and about the applied protocol to resolve it (i.e. the solution) is collected. These different types of information are the cornerstone of the optimization of corrective and preventive processes for new defects. Experimentations show that our prototype provides a very satisfactory quality of results with good performances.",2011,0, 5754,A Discrete Event Simulation Model to Evaluate Changes to a Software Project Delivery Process,"Process simulation modeling has been successfully applied to software development processes to support management decisions on issues related to process management and process improvement. In this work, we develop a discrete event simulation model to study an IT department's project delivery process and estimate the effect on its performance after applying a change to it. We focus on three important performance indicators of project management processes: cost, time and quality. The model uses statistical data analysis techniques and concepts of earned value management to simulate the performance of process activities, model the linkages between them, as well as capture the uncertainty that exists in real-life software processes. Design of experiments is used to organize the experiments and evaluate the results. Finally, two use case scenarios are presented to demonstrate how the simulation model can support decision making during a process improvement initiative. The project delivery process of a major mobile telecommunications provider was used as a template to design the simulation, calibrate parameters, and test the validity of the model.",2011,0, 5755,Virtual Machine Provisioning Based on Analytical Performance and QoS in Cloud Computing Environments,"Cloud computing is the latest computing paradigm that delivers IT resources as services in which users are free from the burden of worrying about the low-level implementation or system administration details. However, there are significant problems that exist with regard to efficient provisioning and delivery of applications using Cloud-based IT resources. These barriers concern various levels such as workload modeling, virtualization, performance modeling, deployment, and monitoring of applications on virtualized IT resources. If these problems can be solved, then applications can operate more efficiently, with reduced financial and environmental costs, reduced under-utilization of resources, and better performance at times of peak load. In this paper, we present a provisioning technique that automatically adapts to workload changes related to applications for facilitating the adaptive management of system and offering end-users guaranteed Quality of Services (QoS) in large, autonomous, and highly dynamic environments. We model the behavior and performance of applications and Cloud-based IT resources to adaptively serve end-user requests. 
To improve the efficiency of the system, we use analytical performance (queueing network system model) and workload information to supply intelligent input about system requirements to an application provisioner with limited information about the physical infrastructure. Our simulation-based experimental results using production workload models indicate that the proposed provisioning technique detects changes in workload intensity (arrival pattern, resource demands) that occur over time and allocates multiple virtualized IT resources accordingly to achieve application QoS targets.",2011,0, 5756,Safe software processing by concurrent execution in a real-time operating system,"The requirements for safety-related software systems increase rapidly. To detect arbitrary hardware faults, there are applicable coding mechanisms that add redundancy to the software. In this way it is possible to replace conventional multi-channel hardware and so reduce costs. Arithmetic codes are one possibility of coded processing and are used in this approach. A further approach to increase fault tolerance is the multiple execution of certain critical parts of software. This kind of time redundancy is easily realized by parallel processing in an operating system. Faults in the program flow can be monitored. No special compilers that insert additional generated code into the existing program are required. The usage of multi-core processors would further increase the performance of such multi-channel software systems. In this paper we present the approach of program flow monitoring combined with coded processing, which is encapsulated in a library of coded data types. The program flow monitoring is indirectly realized by means of an operating system.",2011,0, 5757,On human analyst performance in assisted requirements tracing: Statistical analysis,"Assisted requirements tracing is a process in which a human analyst validates candidate traces produced by an automated requirements tracing method or tool. The assisted requirements tracing process splits the difference between the commonly applied time-consuming, tedious, and error-prone manual tracing and the automated requirements tracing procedures that are a focal point of academic studies. In fact, in software assurance scenarios, assisted requirements tracing is the only way in which tracing can be at least partially automated. In this paper, we present the results of an extensive 12 month study of assisted tracing, conducted using three different tracing processes at two different sites. We describe the information collected about each study participant and their work on the tracing task, and apply statistical analysis to study which factors have the largest effect on the quality of the final trace.",2011,0, 5758,Fast 3-D fingertip reconstruction using a single two-view structured light acquisition,"Current contactless fingertip recognition systems based on three-dimensional finger models mostly use multiple views (N > 2) or structured light illumination with multiple patterns projected over a period of time. In this paper, we present a novel methodology able to obtain a fast and accurate three-dimensional reconstruction of the fingertip by using a single two-view acquisition and a static projected pattern. The acquisition setup is less constrained than the ones proposed in the literature and requires only that the finger is placed according to the depth of focus of the cameras, and in the overlapping fields of view.
The obtained pairs of images are processed in order to extract the information related to the fingertip and the projected pattern. The projected pattern permits the extraction of a set of reference points in the two images, which are then matched by using a correlation approach. The information related to a previous calibration of the cameras is then used in order to estimate the finger model, and one input image is wrapped on the resulting three-dimensional model, obtaining a three-dimensional pattern with a limited distortion of the ridges. In order to obtain data that can be treated by traditional algorithms, the obtained three-dimensional models are then unwrapped into bidimensional images. The quality of the unwrapped images is evaluated by using software designed for contact-based fingerprint images. The obtained results show that the methodology is feasible and a realistic three-dimensional reconstruction can be achieved with few constraints. These results also show that the fingertip models computed by using our approach can be processed by both specific three-dimensional matching algorithms and traditional matching approaches. We also compared the results with the ones obtained without using structured light techniques, showing that the use of a projector achieves a faster and more accurate fingertip reconstruction.",2011,0, 5759,TrendTV: An architecture for automatic change of TV channels based on social networks with multiple device support,"In this work, we present the TrendTV architecture, a set of software layers that links various TV show viewers, producing a personalized recommendation system. In this architecture it is possible to indicate the quality of a TV show in real time through the interaction between viewers using a social network that can be accessed in several different ways. Associated with a personalized Electronic Programming Guide, this social network allows the viewer to perform filtering on a particular subject from the indication made by other viewers through interactivity over the Web, interactive TV or a mobile device. The result is a dynamic database containing the classification of several TV programs built from that architecture, and an application that changes automatically to the best channel at the moment.",2011,0, 5760,On performance of combining methods for three-node half-duplex cooperative diversity network,"We analyze the performance of an ad-hoc network with a base station, a mobile and a third station acting as a relay. Three combining methods for the Amplify-and-Forward (AF) protocol and the Decode-and-Forward (DF) protocol are compared. Simulations indicate that the Amplify-and-Forward (AF) protocol beats the Decode-and-Forward (DF) protocol under all three combining methods. To combine the incoming signals, the channel quality should be estimated as well as possible; more estimation accuracy requires more resources. A very simple combining method can approximately obtain the performance of the optimal combining methods. At the same time, all three combining methods for both diversity protocols can achieve the maximum diversity order.",2011,0, 5761,Automatic measurement of electrical parameters of signal relays,"The manufacturing process of Metal to Carbon relays used in railway signaling systems for configuring various circuits of signals / points / track circuits etc. consists of seven phases from raw material to finished goods.
To ensure in-process quality, the electrical parameters are measured manually after each stage. Manual measurement process is tedious, error prone and involves lot of time, effort and manpower. Besides, it is susceptible to manipulation and may lead to inferior quality products being passed, either due to deliberation or due to malefic intentions. Due to erroneous measurement of electrical parameters, the functional reliability of relays is adversely affected. To enhance the trustworthiness of measurement of electrical parameters & to make the process faster, an automated measurement system having proprietary application software and a testing jig attachment has been developed. When the relay is fixed on the testing jig, the software scans all the relay contacts and measures all the electrical parameters viz. operating voltage / current, contact resistance, release voltage / current, coil resistance etc. The results are displayed on the computer screen and stored in a database file.",2011,0, 5762,A three-dimensional reconstruction system based on panoramic vision and two-dimensional ranging laser,"With respect to the limited field of view in the visual environment sensory perceptual system of the traditional three-dimensional reconstruction system, as well as such defects as high price, big size, cumbersome shape and low rate of imaging, a three-dimensional reconstruction system is designed based on panoramic vision and a two-dimensional ranging laser. By adopting the linear laser technology, a non-contact hardware system of laser scanning and measuring system is constructed based on panoramic vision, so as to achieve real-time scanning of objects; by adopting the corner detection technology to abstract image features, and to perform image matching and camera calibration, a three-dimensional model of objects is constructed. System software tests developed in the programming environment of Visual C++ show that this system could realize an excellent three-dimensional reconstruction based on panoramic vision and two-dimensional ranging laser.",2011,0, 5763,Modelling 3D camera movement for vibration characterisation and multiple object identification with application to lighting assessment,"Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system, is the need to be able to manipulate and process the information captured in these images. One such application, is the use of cameras to monitor the quality of airport landing lighting at aerodromes where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate any developed software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft including random deviations from the expected path. 
A morphological method has been developed to localise and track the luminaires under different operating conditions.",2011,0, 5764,Large-Scale Simulator for Global Data Infrastructure Optimization,"IT infrastructures in global corporations are appropriately compared with nervous systems, in which body parts (interconnected datacenters) exchange signals (request responses) in order to coordinate actions (data visualization and manipulation). A priori inoffensive perturbations in the operation of the system or the elements composing the infrastructure can lead to catastrophic consequences. Downtime disables the capability of clients reaching the latest versions of the data and/or propagating their individual contributions to other clients, potentially costing millions of dollars to the organization affected. The imperative need of guaranteeing the proper functioning of the system not only forces to pay particular attention to network outages, hot-objects or application defects, but also slows down the deployment of new capabilities, features and equipment upgrades. Under these circumstances, decision cycles for these modifications can be extremely conservative, and be prolonged for years, involving multiple authorities across departments of the organization. Frequently, the solutions adopted are years behind state-of-the art technologies or phased out compared to leading research on the IT infrastructure field. In this paper, the utilization of a large-scale data infrastructure simulator is proposed, in order to evaluate the impact of "" what if"" scenarios on the performance, availability and reliability of the system. The goal is to provide data center operators a tool that allows understanding and predicting the consequences of the deployment of new network topologies, hardware configurations or software applications in a global data infrastructure, without affecting the service. The simulator was constructed using a multi-layered approach, providing a granularity down to the individual server component and client action, and was validated against a downscaled version of the data infrastructure of a Fortune 500 company.",2011,0, 5765,The Fading Boundary between Development Time and Run Time,"Summary form only given. Modern software applications are often embedded in highly dynamic contexts. Changes may occur in the requirements, in the behavior of the environment in which the application is embedded, in the usage profiles that characterize interactive aspects. Changes are difficult to predict and anticipate, and are out of control of the application. Their occurrence, however, may be disruptive, and therefore the software must also change accordingly. In many cases, changes to the software cannot be handled off-line, but require the software to self react by adapting its behavior dynamically, in order to continue to ensure the required quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required in this setting without compromising dependability of the applications. To achieve dependability, a software engineering paradigm shift is needed. The traditional focus on quality, verification, models, and model transformations must extend from development time to run time. Not only software development environments (SDEs) are important for the software engineer to develop better software. Feature-full Software Run-time Environments (SREs) are also key. 
SREs must be populated by a wealth of functionalities that support on-line monitoring of the environment, inferring significant changes through machine learning methods, keeping models alive and updating them accordingly, reasoning on models about requirements satisfaction after changes occur, and triggering model-driven self-adaptive reactions, if necessary. In essence, self adaptation must be grounded on the firm foundations provided by formal methods and tools in a seamless SDE SRE setting. The talk discusses these concepts by focusing on non-functional requirements - reliability and performance - that can be expressed in quantitative probabilistic requirements. In particular, it shows how probabilistic model checking can help reasoning about requirements satisfaction and how it can be made run-time efficient. The talk reports on some results of research developed within the SMScom project, funded by the European Commission, Programme IDEAS-ERC, Project 227977 (http://www.erc-smscom.org/).",2011,0, 5766,High Performance Dense Linear System Solver with Soft Error Resilience,"As the scale of modern high end computing systems continues to grow rapidly, system failure has become an issue that requires a better solution than the commonly used scheme of checkpoint and restart (C/R). While hard errors have been studied extensively over the years, soft errors are still under-studied especially for modern HPC systems, and in some scientific applications C/R is not applicable for soft errors at all due to error propagation and lack of error awareness. In this work, we propose an algorithm based fault tolerance (ABFT) approach for a high performance dense linear system solver with soft error resilience. By adapting a mathematical model that treats a soft error during LU factorization as a rank-one perturbation, the solution of Ax=b can be recovered with the Sherman-Morrison formula. Our contribution includes extending the error model from Gaussian elimination and pairwise pivoting to LU with partial pivoting, and we provide a practical numerical bound for error detection and a scalable checkpointing algorithm to protect the left factor that is needed for recovering x from soft errors. Experimental results on cluster systems with ScaLAPACK show that the fault tolerance functionality adds little overhead to the linear system solving and scales well on such systems.",2011,0, 5767,Reliable software for unreliable hardware: Embedded code generation aiming at reliability,"A compilation technique for reliability-aware software transformations is presented. An instruction-level reliability estimation technique quantifies the effects of hardware-level faults at the instruction level while considering spatial and temporal vulnerabilities. It bridges the gap between hardware - where faults occur according to our fault model - and software (the abstraction level where we aim to increase reliability). For a given tolerable performance overhead, an optimization algorithm compiles application software with respect to a tradeoff between performance and reliability. Compared to performance-optimized compilation, our method incurs 60%-80% fewer application failures, averaged over various fault injection scenarios and fault rates.",2011,0, 5768,Feature Selection of Imbalanced Gene Expression Microarray Data,"Gene expression data is a very complex data set characterised by abundant numbers of features but with a low number of observations. However, only a small number of these features are relevant to an outcome of interest.
With this kind of data set, feature selection becomes a real prerequisite. This paper proposes a methodology for feature selection for imbalanced leukaemia gene expression data based on the random forest algorithm. It presents the importance of feature selection in terms of reducing the number of features, enhancing the quality of machine learning and providing better understanding for biologists in diagnosis and prediction. Algorithms are presented to show the methodology and strategy for feature selection, taking care to avoid overfitting. Moreover, experiments are done using imbalanced leukaemia gene expression data and a special measurement is used to evaluate the quality of feature selection and the performance of classification.",2011,0, 5769,Diagnostic system for on-line detection of rotor faults in induction motor drives,"The paper presents an on-line condition monitoring and diagnostic system for induction motor drives. It enables detection of many different faults, which may arise during the lifetime of the motor, although special attention was devoted to identifying broken rotor bars at an early stage of the fault propagation. The method is based on the analysis of the stator current frequency spectrum, which can be measured without disturbing normal motor operation; therefore it is completely non-invasive and easy to implement in industrial environments. The presented diagnostic system is being applied on 17 high-voltage motors (power range 1000 - 6400 kW) in two thermal power plants to increase the operational reliability of induction motors and to reduce the costs of their maintenance. Concepts of the hardware and software configurations are explained in detail, as well as some of the numerous monitoring results during the last five-year period. The paper also discusses two examples of a timely and efficient detection of rotor electric asymmetry due to cracked end-ring segments of induction motors, which exhibited no obvious problems during operation.",2011,0, 5770,A simple fault detection of induction motor by using parity equations,"In this paper, a fault detection technique using parity equations applied to an induction motor is presented. The nonlinear model of the A.C. motor is matched with the linear model of the D.C. motor in the synchronous reference frame in order to generate a relatively large change in the residual obtained with the parity equations in the presence of a fault, which allows for simple and reliable fault detection. The good performance of the fault detection system is validated by using simulation software for power electronics and motor control applications (PSIM).",2011,0, 5771,From Boolean to quantitative synthesis,"Motivated by improvements in constraint-solving technology and by the increase of routinely available computational power, partial-program synthesis is emerging as an effective approach for increasing programmer productivity. The goal of the approach is to allow the programmer to specify a part of her intent imperatively (that is, give a partial program) and a part of her intent declaratively, by specifying which conditions need to be achieved or maintained. The task of the synthesizer is to construct a program that satisfies the specification. As an example, consider a partial program where threads access shared data without using any synchronization mechanism, and a declarative specification that excludes data races and deadlocks. The task of the synthesizer is then to place locks into the program code in order for the program to meet the specification.
In this paper, we argue that quantitative objectives are needed in partial-program synthesis in order to produce higher-quality programs, while enabling simpler specifications. Returning to the example, the synthesizer could construct a naive solution that uses one global lock for shared data. This can be prevented either by constraining the solution space further (which is error-prone and partly defeats the point of synthesis), or by optimizing a quantitative objective that models performance. Other quantitative notions useful in synthesis include fault tolerance, robustness, resource (memory, power) consumption, and information flow.",2011,0, 5772,On the Interplay between Structural and Logical Dependencies in Open-Source Software,"Structural dependencies have long been explored in the context of software quality. More recently, software evolution researchers have investigated logical dependencies between artifacts to assess failure-proneness, detect design issues, infer code decay, and predict likely changes. However, the interplay between these two kinds of dependencies is still obscure. By mining 150 thousand commits from the Apache Software Foundation repository and employing object-oriented metrics reference values, we concluded that 91% of all established logical dependencies involve non-structurally related artifacts. Furthermore, we found some evidence that structural dependencies do not lead to logical dependencies in most situations. These results suggest that dependency management methods and tools should rely on both kinds of dependencies, since they represent different dimensions of software evolvability.",2011,0, 5773,What You See is What You Asked for: An Effort-Based Transformation of Code Analysis Tasks into Interactive Visualization Scenarios,"We propose an approach that derives interactive visualization scenarios from descriptions of code analysis tasks. The scenario derivation is treated as an optimization process. In this context, we evaluate different possibilities of using a given visualization tool to perform the analysis task, and select the scenario that requires the least effort from the analyst. Our approach was applied successfully to various analysis tasks such as design defect detection and feature location.",2011,0, 5774,Are the Clients of Flawed Classes (Also) Defect Prone?,"Design flaws are those characteristics of design entities (e.g., methods, classes) which make them harder to maintain. Existing studies show that classes revealing particular design flaws are more change and defect prone than other classes. Since various collaborations are found among the instances of classes, classes are not isolated within the source code of object-oriented systems. In this paper we investigate if classes using classes that reveal design flaws are more defect prone than classes which do not use such classes. We detect four design flaws in three releases of Eclipse and investigate the relation between classes that use/do not use flawed classes and defects. The results show that classes that use flawed classes are defect prone and that this does not depend on the number of the used flawed classes. These findings show a new type of correlation between design flaws and defects, bringing evidence related to an increased likelihood of exhibiting defects for classes that use classes revealing design flaws.
Based on the provided evidence, practitioners are advised once again about the negative impact design flaws have at a source code level.",2011,0, 5775,An MRF Model for Binarization of Natural Scene Text,"Inspired by the success of MRF models for solving object segmentation problems, we formulate the binarization problem in this framework. We represent the pixels in a document image as random variables in an MRF, and introduce a new energy (or cost) function on these variables. Each variable takes a foreground or background label, and the quality of the binarization (or labelling) is determined by the value of the energy function. We minimize the energy function, i.e. find the optimal binarization, using an iterative graph cut scheme. Our model is robust to variations in foreground and background colours as we use a Gaussian Mixture Model in the energy function. In addition, our algorithm is efficient to compute, and adapts to a variety of document images. We show results on word images from the challenging ICDAR 2003 dataset, and compare our performance with previously reported methods. Our approach shows significant improvement in pixel level accuracy as well as OCR accuracy.",2011,0, 5776,Distortion Measurement for Automatic Document Verification,"Document forgery detection is important as techniques to generate forgeries are becoming widely available and easy to use even for untrained persons. In this work, two types of forgeries are considered: forgeries generated by re-engineering a document and forgeries that are generated using scanning and printing a genuine document. An unsupervised approach is presented to automatically detect forged documents of these types by detecting the geometric distortions introduced during the forgery process. Using the matching quality between all pairs of documents, outlier detection is performed on the summed matching quality to identify the tampered document. Quantitative evaluation is done on two public data sets, reporting a true positive rate from to 0.7 to 1.0.",2011,0, 5777,Automatic band detection of 1D-gel images,"Many gel electrophoresis image segmentation methods have been proposed. However, most of the proposed methods cannot provide a satisfying performance since a gel electrophoresis image is a manually captured image often with various noise and poor quality. This paper presents a 1D-gel image segmentation method (1DGISM) to cut the bands from a one-dimension gel electrophoresis image (1D-gel image). The experimental results showed that 1DGISM can give an impressive segmentation performance even if the band boundaries are indistinct.",2011,0, 5778,GPRS-based data real-time transmission system of water-quality monitoring,"In order to meet the current water resource management automation and remote monitoring needs, a GPRS-based real-time management system is designed,which includes terminal subsystem,master station subsystem and communication service subsystem. The terminal subsystem is developed based on STM32F103X microprocessor and GPRS network. The GPRS module of MC55 is used to realize the remote wireless data transmission between terminal and GPRS network, and on this basis to achieve a wide range of water resource management monitoring. The Software of the management system is based on Windows platform, the operational parameters and real-time water-quality data are acquired and processed for statistics, analysis, curves and classification under the SQL2Server + VC++ environment. 
The designed system meets the requirements of strong real-time performance, networkization and intelligentization.",2011,0, 5779,Autonomic Configuration Adaptation Based on Simulation-Generated State-Transition Models,"Configuration management is a complex task, even for experienced system administrators, which makes self-managing systems a particularly desirable solution. This paper describes a novel contribution to self-managing systems, including an autonomic configuration self-optimization methodology. Our solution involves a systematic simulation method that develops a state-transition model of the behavior of a service-oriented system in terms of its configuration and performance. At run time, the system's behavior is monitored and classified in one of the model states. If this state may lead to futures that violate service level agreements, the system configuration is changed toward a safer future state. Similarly, a satisfactory state that is over-provisioned may be transitioned to a more economical satisfactory state. Aside from the typical benefits of self-optimization, our approach includes an intuitive, explainable decision model, the ability to predict the future with some accuracy avoiding trial-and-error, offline training, and the ability to improve the model at run-time. We demonstrate this methodology in an experiment where Amazon EC2 instances are added and removed to handle changing request volumes to a real service-oriented application. We show that a knowledge base generated entirely in simulation can be used to make accurate changes to a real-world application.",2011,0, 5780,The Relevance of Assumptions and Context Factors for the Integration of Inspections and Testing,"Integrating inspection processes with testing processes promises to deliver several benefits, including reduced effort for quality assurance or higher defect detection rates. Systematic integration of these processes requires knowledge regarding the relationships between these processes, especially regarding the relationship between inspection defects and test defects. Such knowledge is typically context-dependent and needs to be gained analytically or empirically. If such kind of knowledge is not available, assumptions need to be made for a specific context. This article describes the relevance of assumptions and context factors for integrating inspection and testing processes and provides mechanisms for deriving assumptions in a systematic manner.",2011,0, 5781,Empirical Evaluation of Mixed-Project Defect Prediction Models,"Defect prediction research mostly focus on optimizing the performance of models that are constructed for isolated projects. On the other hand, recent studies try to utilize data across projects for building defect prediction models. We combine both approaches and investigate the effects of using mixed (i.e. within and cross) project data on defect prediction performance, which has not been addressed in previous studies. We conduct experiments to analyze models learned from mixed project data using ten proprietary projects from two different organizations. We observe that code metric based mixed project models yield only minor improvements in the prediction performance for a limited number of cases that are difficult to characterize. 
Based on existing studies and our results, we conclude that using cross project data for defect prediction is still an open challenge that should only be considered in environments where there is no local data collection activity, and using data from other projects in addition to a project's own data does not pay off in terms of performance.",2011,0, 5782,Read rate profile monitoring for defect detection in RFID Systems,"RFID technologies are sometimes used into critical domains where the on-line detection of RFID system defects is a must. Moreover, as RFID systems are based on low cost tags which are often used into harsh environment, they do not always ensure robust functioning. This article proposes a new on-line monitoring approach allowing the detection of system defects to enhance system reliability and availability. This approach is based on the characterization of a statistical system parameter - the tag read rate profile - to perform the on-line detection of faulty RFID components. This monitoring approach is compared to classical monitoring approaches for a real case study and through several fault simulations. Results show that the proposed read rate profile monitoring is more efficient than the existing approaches. In addition, results show that this approach should be combined with an existing approach to maximize the fault detection.",2011,0, 5783,E-Quality: A graph based object oriented software quality visualization tool,"Recently, with increasing maintenance costs, studies on software quality are becoming increasingly important and widespread because high quality software means more easily maintainable software. Measurement plays a key role in quality improvement activities and metrics are the quantitative measurement of software design quality. In this paper, we introduce a graph based object-oriented software quality visualization tool called ""E-Quality"". E-Quality automatically extracts quality metrics and class relations from Java source code and visualizes them on a graph-based interactive visual environment. This visual environment effectively simplifies comprehension and refactoring of complex software systems. Our approach assists developers in understanding of software quality attributes by level categorization and intuitive visualization techniques. Experimental results show that the tool can be used to detect software design flaws and refactoring opportunities.",2011,0, 5784,A visual analysis and design tool for planning software reengineerings,"Reengineering complex software systems represents a non-trivial process. As a fundamental technique in software engineering, reengineering includes (a) reverse engineering the as-is system design, (b) identifying a set of transformations to the design, and (c) applying these transformations. While methods a) and c) are widely supported by existing tools, identifying possible transformations to improve architectural quality is not well supported and, therefore, becomes increasingly complex in aged and large software systems. In this paper we present a novel visual analysis and design tool to support software architects during reengineering tasks in identifying a given software's design and in visually planning quality-improving changes to its design. The tool eases estimating effort and change impact of a planned reengineering. A prototype implementation shows the proposed technique's feasibility. 
Three case studies conducted on industrial software systems demonstrate usage and scalability of our approach.",2011,0, 5785,Computing indicators of creativity,"Currently, the most common measurement of creativity is based on tests of divergence. These creativity tests include divergent thinking, divergent feeling, etc. In most cases the evaluation criteria is a subjective appraisal by a trained ""rater"" to assess the amount of divergence from the ""norm"" a particular submitted solution has to a presented or discovered task. The larger the divergence from the standard, the more creative the solution is. Although the quality and quantity of the solutions to the task must be considered, divergence from the accepted ""norm"" is a significant indicator of creativity. Using the current model for showing creative divergence, a method for evaluating the divergence of programming solutions as compared to the standard tutorial solution, in order to indicate creativity should be in line with current creativity research. Instead of subjective ""rater evaluations"" a method of calculating numerical divergence from programming solutions was devised. This method was employed on three separate class conditions and yielded three separate divergence patterns, indicating that the divergence calculation appears to demonstrate, not only that creativity can be shown to exist in programming solutions, but that the calculation is sensitive enough to differentiate between different class learning conditions of the same teacher. So based on the idea that creativity can be shown through divergence in thinking and feeling, it stands to reason that creativity in programming could be revealed through a similar divergence to a standard norm through calculating the divergence to that norm. Consequently, this divergence calculation method shows promising indicators to inform the measurement of creativity within programming and possibly other scientific areas.",2011,0, 5786,Research of artificial neural network's component of software quality evaluation and prediction method,This paper represents the artificial neural network's method of design results evaluation and software quality characteristics prediction (NMEP) and the analysis of results of realized artificial neural network (ANN) training and functioning.,2011,0, 5787,An alternative optical method for acoustic resonance detection in HID lamps,Acoustic resonances are observed in high-pressure discharge lamps operated with ac input modulated power frequencies in the kHz range. This paper describes an optical acoustic resonance detection method for HID lamps using computer controlled webcams and image processing software. Quality indexes are proposed to evaluate lamp performance and experimental results are presented.,2011,0, 5788,A metric suite for early estimation of software testing effort using requirement engineering document and its validation,"Software testing is one of the most important and critical activity of software development life cycle which ensures software quality and directly influences the development cost and success of the software. This paper empirically proposes a test metric for the estimation of the software testing effort using IEEE-Software Requirement Specification (SRS) document in order to avoid budget overshoot, schedule escalation etc., at very early stage of software development. Further the effort required to develop or test the software will also depend on the complexity of the proposed software. 
Therefore the proposed test metric computes the requirement based complexity for yet to be developed software on the basis of SRS document. Later the proposed measure is also compared with other proposals for test effort estimation like code based, cognitive value based and use case based measures. The result obtained validates that the proposed test metric is a comprehensive one and compares well with various other prominent measures. The computation of proposed test effort estimation involves least overhead as compared to others.",2011,0, 5789,Impact of attribute selection on defect proneness prediction in OO software,"Defect proneness prediction of software modules always attracts the developers because it can reduce the testing efforts as well as software development time. In the current context, with the piling up of constraints like requirement ambiguity and complex development process, developing fault free reliable software is a daunting task. To deliver reliable software, software engineers are required to execute exhaustive test cases which become tedious and costly for software enterprises. To ameliorate the testing process one can use a defect prediction model so that testers can focus their efforts on defect prone modules. Building a defect prediction model becomes very complex task when the number of attributes is very large and the attributes are correlated. It is not easy even for a simple classifier to cope with this problem. Therefore, while developing a defect proneness prediction model, one should always be careful about feature selection. This research analyzes the impact of attribute selection on Naive Bayes (NB) based prediction model. Our results are based on Eclipse and KC1 bug database. On the basis of experimental results, we show that careful combination of attribute selection and machine learning apparently useful and, on the Eclipse data set, yield reasonable good performance with 88% probability of detection and 49% false alarm rate.",2011,0, 5790,On Protecting Cryptographic Applications Against Fault Attacks Using Residue Codes,"We propose a new class of error detection codes, quadratic dual residue codes, to protect cryptographic computations running on general-purpose processor cores against fault attacks. The assumed adversary model is a powerful one, whereby the attacker can inject errors anywhere in the data path of a general-purpose microprocessor by bit flipping. We demonstrate that quadratic dual residue codes provide a much better protection under this powerful adversary model compared to similar codes previously proposed for the same purpose in the literature. The adopted strategy aims to protect the single-precision arithmetic operations, such as addition and multiplication, which usually dominate the execution time of many public key cryptography algorithms in general-purpose microprocessors. Two so called robust units for addition and multiplication operations, which provide a protection against faults attacks, are designed and tightly integrated into the data path of a simple, embedded re-configurable processor. We report the implementation results that compare the proposed error detection codes favorably with previous proposals of similar type in the literature. In addition, we present performance evaluations of the software implementations of Montgomery multiplication algorithm using the robust execution units.
Implementation results clearly show that it is feasible to implement robust arithmetic units with relatively low overhead even for a simple embedded processor.",2011,0, 5791,Autonomic Resource Management with Support Vector Machines,"The use of virtualization technology makes data centers more dynamic and easier to administrate. Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, big virtualized data centers become stochastic environments and the simplification on the user side leads to many challenges for the provider. He has to find cost-efficient configurations and has to deal with dynamic environments to ensure service guarantees. In this paper, we introduce a software solution that reduces the degree of human intervention to manage cloud services. We present a multi-agent system located in the Software as a Service (SaaS) layer. Agents allocate resources, configure applications, check the feasibility of requests, and generate cost estimates. The agents learn behavior models of the services via Support Vector Machines (SVMs) and share their experiences via a global knowledge base. We evaluate our approach on real cloud systems with three different applications, a brokerage system, a high-performance computing software, and a web server.",2011,0, 5792,Piggy-Backing Link Quality Measurements to IEEE 802.15.4 Acknowledgements,"In this paper we present an approach to piggyback link quality measurements to IEEE 802.15.4 acknowledgement frames by generating acknowledgements in software instead of relying on hardware support. We show that the software-generated ACKs can be sent meeting the timing constraints of IEEE 802.15.4. This allows for a standard conforming, energy neutral dissemination of link quality related information in IEEE 802.15.4 networks. This information is available at no cost when transmitting data and can be used as input for various schemes for adaptive transmission power control and to assess the current channel quality.",2011,0, 5793,Analyzing Performance of Lease-Based Schemes under Failures,"Leases have proved to be an effective concurrency control technique for distributed systems that are prone to failures. However, many benefits of leases are only realized when leases are granted for approximately the time of expected use. Correct assessment of lease duration has proven difficult for all but the simplest of resource allocation problems. In this paper, we present a model that captures a number of lease styles and semantics used in practice. We consider a few performance characteristics for lease-based systems and analytically derive how they are affected by lease duration. We confirm our analytical findings by running a set of experiments with the OO7 benchmark suite using a variety of workloads and fault loads.",2011,0, 5794,Measurement methods for QoS in VoIP review,"In the last years there is a merging trend towards unified communication systems over IP protocols from desktop to handheld devices. However this trend brings forth the limited QoS control existing in these types of networks. The lack of cost-effective QoS strategies is felt negatively directly by the end user in terms of both communication quality and increasing costs.
Therefore, this paper analyses which are the main methods available for measuring the QoS in VoIP networks for both audio and video calls and how neural networks can be used to predict the quality as perceived from the end user perspective.",2011,0, 5795,Real time acquisition of ESPI specklegram and automated analysis for NDE,"Electronic Speckle Pattern Interferometry (ESPI) is a fast developing whole field optical technique used for the measurement of optical phase changes under object deformations and has evolved as a powerful on-line nondestructive evaluation (NDE) tool. In ESPI the speckle interferometry data of the object under different loading conditions are stored as digital images and then converted into interference fringes by digital subtraction or by addition method. These interference patterns correspond to surface displacements, which in turn relates to the strain variation in the specimen. Defects in the specimen produce a strain concentration on loading and generate a fringe anomaly in the interferogram. These anomalies in the interferogram are analyzed for the characterisation of defects in the material. This technique is used as an effective tool in Non-Destructive Evaluation (NDE). In this paper a method for on-line analysis of ESPI fringe patterns for the qualitative evaluation in the materials is presented. Continuous acquisition and simultaneous digital processing of specklegrams has been developed for generating the fringe patterns. The image acquisition and processing system used for the present study is an algorithm developed by using Imaging Graphs software. Image processing techniques are introduced for contrast enhancement, noise reduction and fringe sharpening for the on-line detection of different types of defects in the material.",2011,0, 5796,The design of a software fault prone application using evolutionary algorithm,"Most of the current project management software's are utilizing resources on developing areas in software projects. This is considerably essential in view of the meaningful impact towards time and cost-effective development. One of the major areas is the fault proneness prediction, which is used to find out the impact areas by using several approaches, techniques and applications. Software fault proneness application is an application based on computer aided approach to predict the probability that the software contains faults. The application will uses object oriented metrics and count metrics values from open source software as input values to the genetic algorithm for generation of the rules to classify the software modules in the categories of Faulty and Non Faulty modules. At the end of the process, the result will be visualized using genetic algorithm applet, bar and pie chart. This paper will discussed the detail design of software fault proneness application by using genetic algorithm based on the object oriented approach and will be presented using the Unified Modeling Language (UML). The aim of the proposed design is to develop an automated tool for software development group to discover the most likely software modules to be high problematic in the future.",2011,0, 5797,LTE CoS/QoS Harmonization Emulator,"There are many challenges in the QoS controls and traffic handling mechanisms in the deployment of Long Term Evolution (LTE). Simulations are crucial for performance evaluation of end-to-end QoS (Quality of Service) management across LTE network elements over heterogeneous networks. 
The LTE CoS/QoS Harmonization Emulator presented in this paper will allow the user to determine whether the QoS design strategy specifically and logically fits in a variety of LTE network topologies. The Emulator is capable of evaluating effectiveness of CoS/QoS mapping, performance of QoS controls (classification/policing/mark/queuing&dropping/shaping) and traffic handling mechanisms that supports specific LTE applications. This module is portable and can be integrated with the performance management (PM) system as a part of business/operations impact analysis to support testing during the system development and quality assurance phases. The Emulator is offered for free under the terms of an academic, non-commercial use license.",2011,0, 5798,Low-Complexity Encoding Method for H.264/AVC Based on Visual Perception,"H.264/AVC standard achieved excellent encoding performance at the cost of increased computational complexity and falling encoding speed. In order to overcome poor real-time encoding performance of H.264/AVC, aiming at computing redundancy, based on the integration of visual selective attention mechanism and low complexity encoding of information analysis and visual perception, making use of the distribution of motion vector and the relationship between the mode decision probability and the human visual attention a low-complexity H.264/AVC video coding scheme is implemented in this paper. The simulation results show that the approach encoding effectively resolves the conflict between coding accuracy and speed, saving about 80% coding time on average, which effectively maintains good video quality and overall improves the encoding performance of H.264/AVC.",2011,0, 5799,"iProblems - An Integrated Instrument for Reporting Design Flaws, Vulnerabilities and Defects","In the current demonstration we present a new instrument that provides for each existing class in an analyzed system information related to the problems the class reveals. We associate different types of problems to a class: design flaws, vulnerabilities and defects. In order to validate its usefulness, we perform some experiments on a suite of object-oriented systems and some results are briefly presented in the last part of the demo.",2011,0, 5800,Approximate Code Search in Program Histories,"Very often a defect must be corrected not only in the current version of a program at one particular place but in many places and many other versions -- possibly even in different development branches. Consequently, we need a technique to efficiently locate all approximate matches of an arbitrary defective code fragment in the program's history as they may need to be fixed as well. This paper presents an approximate whole-program code search in multiple releases and branches. We evaluate this technique for real-world defects of various large and realistic programs having multiple releases and branches. We report runtime measurements and recall using varying levels of allowable differences of the approximate search.",2011,0, 5801,Impact of Installation Counts on Perceived Quality: A Case Study on Debian,"Software defects are generally used to indicate software quality. However, due to the nature of software, we are often only able to know about the defects found and reported, either following the testing process or after being deployed. In software research studies, it is assumed that a higher amount of defect reports represents a higher amount of defects in the software system.
In this paper, we argue that widely deployed programs have more reported defects, regardless of their actual number of defects. To address this question, we perform a case study on the Debian GNU/Linux distribution, a well-known free / open source software collection. We compare the defects reported for all the software packages in Debian with their popularity. We find that the number of reported defects for a Debian package is limited by its popularity. This finding has implications on defect prediction studies, showing that they need to consider the impact of popularity on perceived quality, otherwise they might be risking bias.",2011,0, 5802,An Empirical Validation of the Benefits of Adhering to the Law of Demeter,"The Law of Demeter formulates the rule-of-thumb that modules in object-oriented program code should ""only talk to their immediate friends"". While it is said to foster information hiding for object-oriented software, solid empirical evidence confirming the positive effects of following the Law of Demeter is still lacking. In this paper, we conduct an empirical study to confirm that violating the Law of Demeter has a negative impact on software quality, in particular that it leads to more bugs. We implement an Eclipse plugin to calculate the amount of violations of both the strong and the weak form of the law in five Eclipse sub-projects. Then we discover the correlation between violations of the law and the bug-proneness and perform a logistic regression analysis of three sub-projects. We also combine the violations with other OO metrics to build up a model for predicting the bug-proneness for a given class. Empirical results show that violations of the Law of Demeter indeed highly correlate with the number of bugs and are early predictor of the software quality. Based on this evidence, we conclude that obeying the Law of Demeter is a straight-forward approach for developers to reduce the number of bugs in their software.",2011,0, 5803,Assessing Software Quality by Program Clustering and Defect Prediction,"Many empirical studies have shown that defect prediction models built on product metrics can be used to assess the quality of software modules. So far, most methods proposed in this direction predict defects by class or file. In this paper, we propose a novel software defect prediction method based on functional clusters of programs to improve the performance, especially the effort-aware performance, of defect prediction. In the method, we use proper-grained and problem-oriented program clusters as the basic units of defect prediction. To evaluate the effectiveness of the method, we conducted an experimental study on Eclipse 3.0. We found that, comparing with class-based models, cluster-based prediction models can significantly improve the recall (from 31.6% to 99.2%) and precision (from 73.8% to 91.6%) of defect prediction. According to the effort-aware evaluation, the effort needed to review code to find half of the total defects can be reduced by 6% if using cluster-based prediction models.",2011,0, 5804,Modularization Metrics: Assessing Package Organization in Legacy Large Object-Oriented Software,"There exist many large object-oriented software systems consisting of several thousands of classes that are organized into several hundreds of packages. In such software systems, classes cannot be considered as units for software modularization. 
In such context, packages are not simply classes containers, but they also play the role of modules: a package should focus to provide well identified services to the rest of the software system. Therefore, understanding and assessing package organization is primordial for software maintenance tasks. Although there exist a lot of works proposing metrics for the quality of a single class and/or the quality of inter-class relationships, there exist few works dealing with some aspects for the quality of package organization and relationship. We believe that additional investigations are required for assessing package modularity aspects. The goal of this paper is to provide a complementary set of metrics that assess some modularity principles for packages in large legacy object-oriented software: Information-Hiding, Changeability and Reusability principles. Our metrics are defined with respect to object-oriented dependencies that are caused by inheritance and method call. We validate our metrics theoretically through a careful study of the mathematical properties of each metric.",2011,0, 5805,ImpactScale: Quantifying change impact to predict faults in large software systems,"In software maintenance, both product metrics and process metrics are required to predict faults effectively. However, process metrics cannot be always collected in practical situations. To enable accurate fault prediction without process metrics, we define a new metric, ImpactScale. ImpactScale is the quantified value of change impact, and the change propagation model for ImpactScale is characterized by probabilistic propagation and relation-sensitive propagation. To evaluate ImpactScale, we predicted faults in two large enterprise systems using the effort-aware models and Poisson regression. The results showed that adding ImpactScale to existing product metrics increased the number of detected faults at 10% effort (LOC) by over 50%. ImpactScale also improved the predicting model using existing product metrics and dependency network measures.",2011,0, 5806,Identifying distributed features in SOA by mining dynamic call trees,"Distributed nature of web service computing imposes new challenges on software maintenance community for localizing different software features and maintaining proper quality of service as the services change over time. In this paper, we propose a new approach for identifying the implementation of web service features in a service oriented architecture (SOA) by mining dynamic call trees that are collected from distributed execution traces. The proposed approach addresses the complexities of SOA-based systems that arise from: features whose locations may change due to changing of input parameters; execution traces that are scattered throughout different service provider platforms; and trace files that contain interleaving of execution traces related to different concurrent service users. In this approach, we execute different groups of feature-specific scenarios and mine the resulting dynamic call trees to spot paths in the code of a service feature, which correspond to a specific user input and system state. This allows us to focus on a the implementation of a specific feature in a distributed SOA-based system for different maintenance tasks such as bug localization, structure evaluation, and performance analysis. We define a set of metrics to assess structural properties of a SOA-based system. 
The effectiveness and applicability of our approach is demonstrated through a case study consisting of two service-oriented banking systems.",2011,0, 5807,Structural conformance checking with design tests: An evaluation of usability and scalability,"Verifying whether a software meets its functional requirements plays an important role in software development. However, this activity is necessary, but not sufficient to assure software quality. It is also important to check whether the code meets its design specification. Although there exists substantial tool support to assure that a software does what it is supposed to do, verifying whether it conforms to its design remains as an almost completely manual activity. In a previous work, we proposed design tests - test-like programs that automatically check implementations against design rules. Design test is an application of the concept of test to design conformance checking. To support design tests for Java projects, we developed DesignWizard, an API that allows developers to write and execute design tests using the popular JUnit testing framework. In this work, we present a study on the usability and scalability of DesignWizard to support structural conformance checking through design tests. We conducted a qualitative usability evaluation of DesignWizard using the Think Aloud Protocol for APIs. In the experiment, we challenged eleven developers to compose design tests for an open-source software project. We observed that the API meets most developers' expectations and that they had no difficulties to code design rules as design tests. To assess its scalability, we evaluated DesignWizard's use of CPU time and memory consumption. The study indicates that both are linear functions of the size of software under verification.",2011,0, 5808,A probabilistic software quality model,"In order to take the right decisions in estimating the costs and risks of a software change, it is crucial for the developers and managers to be aware of the quality attributes of their software. Maintainability is an important characteristic defined in the ISO/IEC 9126 standard, owing to its direct impact on development costs. Although the standard provides definitions for the quality characteristics, it does not define how they should be computed. Not being tangible notions, these characteristics are hardly expected to be representable by a single number. Existing quality models do not deal with ambiguity coming from subjective interpretations of characteristics, which depend on experience, knowledge, and even intuition of experts. This research aims at providing a probabilistic approach for computing high-level quality characteristics, which integrate expert knowledge, and deal with ambiguity at the same time. The presented method copes with ""goodness"" functions, which are continuous generalizations of threshold based approaches, i.e. instead of giving a number for the measure of goodness, it provides a continuous function. Two different systems were evaluated using this approach, and the results were compared to the opinions of experts involved in the development. The results show that the quality model values change in accordance with the maintenance activities, and they are in a good correlation with the experts' expectations.",2011,0, 5809,Predicting post-release defects using pre-release field testing results,"Field testing is commonly used to detect faults after the in-house (e.g., alpha) testing of an application is completed.
In the field testing, the application is instrumented and used under normal conditions. The occurrences of failures are reported. Developers can analyze and fix the reported failures before the application is released to the market. In the current practice, the Mean Time Between Failures (MTBF) and the Average usage Time (AVT) are metrics that are frequently used to gauge the reliability of the application. However, MTBF and AVT cannot capture the whole pattern of failure occurrences in the field testing of an application. In this paper, we propose three metrics that capture three additional patterns of failure occurrences: the average length of usage time before the occurrence of the first failure, the spread of failures to the majority of users, and the daily rates of failures. In our case study, we use data derived from the pre-release field testing of 18 versions of a large enterprise software for mobile applications to predict the number of post-release defects for up to two years in advance. We demonstrate that the three metrics complement the traditional MTBF and AVT metrics. The proposed metrics can predict the number of post-release defects in a shorter time frame than MTBF and AVT.",2011,0, 5810,Using source code metrics to predict change-prone Java interfaces,"Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to which extent the existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we evaluate the metrics to calculate models for predicting change-prone Java interfaces. Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone.",2011,0, 5811,A clustering approach to improving test case prioritization: An industrial case study,"Regression testing is an important activity for controlling the quality of a software product, but it accounts for a large proportion of the costs of software. We believe that an understanding of the underlying relationships in data about software systems, including data correlations and patterns, could provide information that would help improve regression testing techniques. We conjecture that if test cases have common properties, then test cases within the same group may have similar fault detection ability. As an initial approach to investigating the relationships in massive data in software repositories, in this paper, we consider a clustering approach to help improve test case prioritization. We implemented new prioritization techniques that incorporate a clustering approach and utilize code coverage, code complexity, and history data on real faults. 
To assess our approach, we have designed and conducted empirical studies using an industrial software product, Microsoft Dynamics Ax, which contains real faults. Our results show that test case prioritization that utilizes a clustering approach can improve the effectiveness of test case prioritization techniques.",2011,0, 5812,Code Hot Spot: A tool for extraction and analysis of code change history,"Commercial software development teams have limited time available to focus on improvements to their software. These teams need a way to quickly identify areas of the source code that would benefit from improvement, as well as quantifiable data to defend the selected improvements to management. Past research has shown that mining configuration management systems for change information can be useful in determining faulty areas of the code. We present a tool named Code Hot Spot, which mines change records out of Microsoft's TFS configuration management system and creates a report of hot spots. Hot spots are contiguous areas of the code that have higher values of metrics that are indicators of faulty code. We present a study where we use this tool to study projects at ABB to determine areas that need improvement. The resulting data have been used to prioritize areas for additional code reviews and unit testing, as well as identifying change prone areas in need of refactoring.",2011,0, 5813,Source code comprehension strategies and metrics to predict comprehension effort in software maintenance and evolution tasks - an empirical study with industry practitioners,"The goal of this research was to assess the consistency of source code comprehension strategies and comprehension effort estimation metrics, such as LOC, across different types of modification tasks in software maintenance and evolution. We conducted an empirical study with software development practitioners using source code from a small paint application written in Java, along with four semantics-preserving modification tasks (refactoring, defect correction) and four semantics-modifying modification tasks (enhancive and modification). Each task has a change specification and corresponding source code patch. The subjects were asked to comprehend the original source code and then judge whether each patch meets the corresponding change specification in the modification task. The subjects recorded the time to comprehend and described the comprehension strategies used and their reason for the patch judgments. The 24 subjects used similar comprehension strategies. The results show that the comprehension strategies and effort estimation metrics are not consistent across different types of modification tasks. The recorded descriptions indicate the subjects scanned through the original source code and the patches when trying to comprehend patches in the semantics-modifying tasks while the subjects only read the source code of the patches in semantics-preserving tasks. An important metric for estimating comprehension efforts of the semantics-modifying tasks is the Code Clone Subtracted from LOC(CCSLOC), while that of semantics-preserving tasks is the number of referred variables.",2011,0, 5814,The misuse of the NASA metrics data program data sets for automated software defect prediction,"Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. 
Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.",2011,0, 5815,Further thoughts on precision,"Background: There has been much discussion amongst automated software defect prediction researchers regarding use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well documented examples of how dependent class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.",2011,0, 5816,Predicting software defects: A cost-sensitive approach,"Find software defects is a complex and slow task which consumes most of the development budgets. In order to try reducing the cost of test activities, many researches have used machine learning to predict whether a module is defect-prone or not. Defect detection is a cost-sensitive task whereby a misclassification is more costly than a correct classification. Yet, most of the researches do not consider classification costs in the prediction models. This paper introduces an empirical method based in a COCOMO (COnstructive COst MOdel) that aims to assess the cost of each classifier decision. This method creates a cost matrix that is used in conjunction with a threshold-moving approach in a ROC (Receiver Operating Characteristic) curve to select the best operating point regarding cost. Public data sets from NASA (National Aeronautics and Space Administration) IV&V (Independent Verification & Validation) Facility Metrics Data Program (MDP) are used to train the classifiers and to provide some development effort information. The experiments are carried out through a methodology that complies with validation and reproducibility requirements. 
The experimental results have shown that the proposed method is efficient and allows the interpretation of the classifier performance in terms of tangible cost values.",2011,0, 5817,Inline magnet inspection using fast high resolution MagCam magnetic field mapping and analysis,"Driven by a clear industrial need for advanced inspection equipment for high-end permanent magnets for drives, sensors, medical applications, consumer electronics and other applications, we present a new magnetic measurement technology, called a `magnetic field camera', a powerful and unique measurement platform for fast and accurate live inspection of both uniaxial and multipole permanent magnets. It is based on high resolution - high speed 2D mapping of the magnetic field distribution, using a patented semiconductor chip with an integrated 2D array of over 16000 microscopic Hall sensors. The measured magnetic field maps are analyzed by powerful data analysis software which uses advanced algorithms to extract unprecedented magnet information from the MagCam measurements. All these quantitative magnet properties can be tested against user-defined quality tolerances for immediate and automated pass/fail or classification analyses, all in real time. The system can be remotely operated using standard industrial communication protocols for integration in production lines. Its applications include inspection of incoming magnets, automated inline magnet quality control and research and development of magnets and magnetic systems and more.",2011,0, 5818,KOMPSAT-5 SAR data processing: Design drivers and key performance,"A general overview of KOMPSAT-5 SAR Processors chain and Software Infrastructure is provided, focusing on design drivers and key performance, mainly in terms of computation time and image quality parameters. An overview of the SAR Standard Products generated in the frame of KOMPSAT-5 mission is also depicted along with a sketch of the whole SAR Processors chain architecture. The End to End (E2E) test approach relies upon the generation of SAR Payload simulated RAW data combined with final image quality parameter estimation.",2011,0, 5819,The Method of Medical Named Entity Recognition Based on Semantic Model and Improved SVM-KNN Algorithm,"In the medical field, a lot of unstructured information which is expressed by natural language exists in medical literature, technical documentation and medical records. IE (Information Extraction) as one of the most important research directions in natural language process aims to help humans extract concerned information automatically. NER (Named Entity Recognition) is one of the subsystems of IE and has direct influence on the quality of IE. Nowadays NER of medical field has not reached ideal precision largely due to the knowledge complexity in medical field. It is hard to describe the medical entity definitely in feature selection and current selected features are not rich and lack of semantic information. Besides that, different medical entities which have the similar characters more easily cause classification algorithm making wrong judgment. Combination multi classification algorithms such as SVM-KNN can overcome its own disadvantages and get higher performance. But current SVM-KNN classification algorithm may provide wrong categories results due to setting inappropriate K value and unbalanced examples distribution. In this paper, we introduce two-level modelling tool to help specialists to build semantic models and select features from them. 
We design and implement medical named entity recognition analysis engine based on UIMA framework and adopt improved SVM-KNN algorithm called EK-SVM-KNN (Extending K SVM-KNN) in classification. We collect experiment data from SLE(Systemic Lupus Erythematosus) clinical information system in Renji Hospital. We adopt 50 Pathology reports as training data and provide 1000 Pathology as test data. Experiment shows medical NER based on semantic model and improved SVM-KNN algorithm can enhance the quality of NER and we get the precision, recall rate and F value up to 86%.",2011,0, 5820,Quality-Driven Hierarchical Clustering Algorithm for Service Intelligence Computation,"Clustering is an important technique for intelligence computation such as trust, recommendation, reputation, and requirement elicitation. With the user centric nature of service and the user's lack of prior knowledge on the distribution of the raw data, one challenge is on how to associate user quality requirements on the clustering results with the algorithmic output properties (e.g. number of clusters to be targeted). In this paper, we focus on the hierarchical clustering process and propose two quality-driven hierarchical clustering algorithms, HBH (homogeneity-based hierarchical) and HDH (homogeneity-driven hierarchical) clustering algorithms, with minimum acceptable homogeneity and relative population for each cluster output as their input criteria. Furthermore, we also give a HDH-approximation algorithm in order to address the time performance issue. Experimental study on data sets with different density distribution and dispersion levels shows that the HDH gives the best quality result and HDH-approximation can significantly improve the execution time.",2011,0, 5821,Towards identifying OS-level anomalies to detect application software failures,"The next generation of critical systems, namely complex Critical Infrastructures (LCCIs), require efficient runtime management, reconfiguration strategies, and the ability to take decisions on the basis of current and past behavior of the system. Anomaly-based detection, leveraging information gathered at Operating System (OS) level (e.g., number of system call errors, signals, and holding semaphores in the time unit), seems to be a promising approach to reveal online application faults. Recently an experimental campaign to evaluate the performance of two anomaly detection algorithms was performed on a case study from the Air Traffic Management (ATM) domain, deployed under the popular OS used in the production environment, i.e., Red Hat 5 EL. In this paper we investigate the impact of the OS and the monitored resources on the quality of the detection, by executing experiments on Windows Server 2008. Experimental results allow identifying which of the two operating systems provides monitoring facilities best suited to implement the anomaly detection algorithms that we have considered. Moreover numerical sensitivity analysis of the detector parameters is carried out to understand the impact of their setting on the performance.",2011,0, 5822,A Reliable and Self-Aware Clock for reference time failure detection in internal synchronization environment,"A reliable source of time is a fundamental requirement for safety critical applications. For this class of applications, synchronization protocols like the IEEE1588 standard, can be adopted. 
This type of synchronization protocols was designed to achieve very precise clock synchronization, but they may not be sufficient to ensure safety of the whole system. For example an instability of the local oscillator of a reference node could influence the quality of synchronization of all nodes. In the recent years a new software clock, the Reliable and Self-Aware Clock (R&SAClock), which is designed to estimate the quality of synchronization through statistical analysis, was developed and tested. This paper describe and evaluate the design of a protocol for timing failure detection for internal synchronization based on a revised version of the R&SAClock software suitably modified to cross exploit the information on the quality of synchronization among all the nodes of the system.",2011,0, 5823,Remote diagnostics and performance analysis for a wireless sensor network,"Wireless Sensor Networks (WSNs) comprise embedded sensor nodes that operate autonomously in a multi-hop topology. The challenges are unreliable wireless communications, harsh environment, and limited energy and computation resources. To ensure the desired level of service, it is essential to diagnose performance issues e.g. due to low quality links or energy depletion. This paper presents remote diagnostics and performance analysis that comprise self-diagnostics on embedded sensor nodes, the remote collection of diagnostics, and the design of a diagnostics analysis tool. Unlike the related proposals, our approach allows correcting detected problems by identifying the reasons for misbehavior. The diagnostics is verified with a practical WSN implementation. It has only a small overhead, less than 18 B/min per node in the implementation, allowing the use in bandwidth and energy constrained WSNs.",2011,0, 5824,Link quality aware code dissemination in wireless sensor networks,"Wireless reprogramming is a crucial technique for software deployment in wireless sensor networks (WSNs). Code dissemination is a basic building block to enable wireless reprogramming. We present ECD, an Efficient Code Dissemination protocol leveraging 1-hop link quality information. Compared to prior works, ECD has three salient features. First, it supports dynamically configurable packet sizes. By increasing the packet size for high PHY rate radios, it significantly improves the transmission efficiency. Second, it employs an accurate sender selection algorithm to mitigate transmission collisions and transmissions over poor links. Third, it employs a simple impact-based backoff timer design to shorten the time spent in coordinating multiple eligible senders so that the largest impact sender is most likely to transmit. We implement ECD based on TinyOS and evaluate its performance extensively. Testbed experiments show that ECD outperforms state-of-the-art protocols, Deluge and MNP, in terms of completion time and data traffic. (e.g., about 20% less traffic and 20-30% shorter completion time compared to Deluge).",2011,0, 5825,QoS-oriented Service Management in clouds for large scale industrial activity recognition,"Motivated by the need of industrial enterprises for supervision services for quality, security and safety guarantee, we have developed an Activity Recognition Framework based on computer vision and machine learning tools, attaining good recognition rates.
However, the deployment of multiple cameras to exploit redundancies, the large training set requirements of our time series classification models, as well as general resource limitations together with the emphasis on real-time performance, pose significant challenges and lead us to consider a decentralized approach. We thus adapt our application to a new and innovative real-time enabled framework for service-based infrastructures, which has developed QoS-oriented Service Management mechanisms in order to allow cloud environments to facilitate real-time and interactivity. Deploying the Activity Recognition Framework in a cloud infrastructure can therefore enable it for large scale industrial environments.",2011,0, 5826,An automated retinal image quality grading algorithm,"This paper introduces an algorithm for the automated assessment of retinal fundus image quality grade. Retinal image quality grading assesses whether the quality of the image is sufficient to allow diagnostic procedures to be applied. Automated quality analysis is an important preprocessing step in algorithmic diagnosis, as it is necessary to ensure that images are sufficiently clear to allow pathologies to be visible. The algorithm is based on standard recommendations for quality analysis by human screeners, examining the clarity of retinal vessels within the macula region. An evaluation against a reference standard data-set is given; it is shown that the algorithm's performance correlates closely with that of clinicians manually grading image quality.",2011,0, 5827,Mosaicing of optical microscope imagery based on visual information,"Tools for high-throughput high-content image analysis can simplify and expedite different stages of biological experiments, by processing and combining different information taken at different time and in different areas of the culture. Among the most important in this field, image mosaicing methods provide the researcher with a global view of the biological sample in a unique image. Current approaches rely on known motorized x-y stage offsets and work in batch mode, thus jeopardizing the interaction between the microscopic system and the researcher during the investigation of the cell culture. In this work we present an approach for mosaicing of optical microscope imagery, based on local image registration and exploiting visual information only. To our knowledge, this is the first approach suitable to work on-line with non-motorized microscopes. To assess our method, the quality of resulting mosaics is quantitatively evaluated through on-purpose image metrics. Experimental results show the importance of model selection issues and confirm the soundness of our approach.",2011,0, 5828,Multiresolution texture models for brain tumor segmentation in MRI,"In this study we discuss different types of texture features such as Fractal Dimension (FD) and Multifractional Brownian Motion (mBm) for estimating random structures and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques including KullBack - Leibler Divergence (KLD) for ranking different texture and intensity features. We then exploit graph cut, self organizing maps (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumors segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate quality and robustness of these selected features for tumor segmentation in MRI for real pediatric patients. 
We also demonstrate a non-patient-specific automated tumor prediction scheme by using improved AdaBoost classification based on these image features.",2011,0, 5829,Telephone-quality pathological speech classification using empirical mode decomposition,"This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, which has been modified to simulate telephone quality speech under different levels of noise, a linear classifier is used with the feature vector thus obtained to obtain a high classification accuracy, thereby demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for signal-to-noise ratio 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.",2011,0, 5830,Intelligent diagnostics of devices with the use of infrared mapping images,"A correct diagnostics together with the early prediction of failure or malfunction of the system are the major issues in modern maintenance. Nowadays, the time-honored diagnostics may be inadequate and lead to omitting failures, resulting in higher costs of repairing damaged equipment. That is why the interest in intelligent diagnostics increases due to the possibility of better interpretation of the component status and early failure prediction. One of the ways of determining the device condition is to measure and analyze the temperature of multiple points on the device. Thermographics may show a beginning of significant wear of a component, and enable a repair or replacing before the failure appears. The paper presents the key aspects of the diagnostics with thermal images, including: technology mapping, mapping algorithms, together with a presentation of available software solutions.",2011,0, 5831,Combating class imbalance problem in semi-supervised defect detection,"Detection of defect-prone software modules is an important topic in software quality research, and widely studied under enough defect data circumstance. An improved semi-supervised learning approach for defect detection involving class imbalanced and limited labeled data problem has been proposed. This approach employs random under-sampling technique to resample the original training set and updating training set in each round for co-train style algorithm. In comparison with conventional machine learning approaches, our method has significant superior performance in the aspect of AUC (area under the receiver operating characteristic) metric. Experimental results also show that with the proposed learning approach, it is possible to design better method to tackle the class imbalanced problem in semi-supervised learning.",2011,0, 5832,Software defect prediction using transfer method,"Traditional machine learning works well within company defect prediction. Unlike these works, we consider the scenario where source and target data are drawn from different companies, recently referred to as cross-company defect prediction. 
In this paper, we proposed a novel algorithm based on transfer method, called Transfer Naive Bayes (TNB). Our solution transferred the information of test data to the weights of the training data. The theoretical analysis and experiment results indicate that our algorithm is able to get more accurate result within less runtime cost than the state of the art algorithm.",2011,0, 5833,On the Effectiveness of Contracts as Test Oracles in the Detection and Diagnosis of Race Conditions and Deadlocks in Concurrent Object-Oriented Software,"The idea behind Design by Contract (DbC) is that a method defines a contract stating the requirements a client needs to fulfill to use it, the precondition, and the properties it ensures after its execution, the post condition. Though there exists ample support for DbC for sequential programs, applying DbC to concurrent programs presents several challenges. We have proposed a solution to these challenges in the context of Java as programming language and the Java Modeling language as specification language. This paper presents our findings when applying our DbC technique on an industrial case study to evaluate the ability of contract-based, runtime assertion checking code at detecting and diagnosing race conditions and deadlocks during system testing. The case study is a highly concurrent industrial system from the telecommunications domain, with actual faults. It is the first work to systematically investigate the impact of contract assertions for the detection of race conditions and deadlocks, along with functional properties, in an industrial system.",2011,0, 5834,Measuring the Efficacy of Code Clone Information in a Bug Localization Task: An Empirical Study,"Much recent research effort has been devoted to designing efficient code clone detection techniques and tools. However, there has been little human-based empirical study of developers as they use the outputs of those tools while performing maintenance tasks. This paper describes a study that investigates the usefulness of code clone information for performing a bug localization task. In this study 43 graduate students were observed while identifying defects in both cloned and non-cloned portions of code. The goal of the study was to understand how those developers used clone information to perform this task. The results of this study showed that participants who first identified a defect then used it to look for clones of the defect were more effective than participants who used the clone information before finding any defects. The results also show a relationship between the perceived efficacy of the clone information and effectiveness in finding defects. Finally, the results show that participants who had industrial experience were more effective in identifying defects than those without industrial experience.",2011,0, 5835,A Qualitative Study of Open Source Software Development: The Open EMR Project,"Open Source software is competing successfully in many areas. The commercial sector is recognizing the benefits offered by Open Source development methods that lead to high quality software. Can these benefits be realized in specialized domains where expertise is rare? This study examined discussion forums of an Open Source project in a particular specialized application domain - electronic medical records - to see how development roles are carried out, and by whom. We found through a qualitative analysis that the core developers in this system include doctors and clinicians who also use the product. 
We also found that the size of the community associated with the project is an order of magnitude smaller than predicted, yet still maintains a high degree of responsiveness to issues raised by users. The implication is that a few experts and a small core of dedicated programmers can achieve success using an Open Source approach in a specialized domain.",2011,0, 5836,Exploring Software Measures to Assess Program Comprehension,"Software measures are often used to assess program comprehension, although their applicability is discussed controversially. Often, their application is based on plausibility arguments, which, however, is not sufficient to decide whether software measures are good predictors for program comprehension. Our goal is to evaluate whether and how software measures and program comprehension correlate. To this end, we carefully designed an experiment. We used four different measures that are often used to judge the quality of source code: complexity, lines of code, concern attributes, and concern operations. We measured how subjects understood two comparable software systems that differ in their implementation, such that one implementation promised considerable benefits in terms of better software measures. We did not observe a difference in program comprehension of our subjects as the software measures suggested it. To explore how software measures and program comprehension could correlate, we used several variants of computing the software measures. This brought them closer to our observed result, however, not as close as to confirm a relationship between software measures and program comprehension. Having failed to establish a relationship, we present our findings as an open issue to the community and initiate a discussion on the role of software measures as comprehensibility predictors.",2011,0, 5837,Survey Reproduction of Defect Reporting in Industrial Software Development,"Context: Defect reporting is an important part of software development in-vivo, but previous work from open source context suggests that defect reports often have insufficient information for defect fixing. Objective: Our goal was to reproduce and partially replicate one of those open source studies in industrial context to see how well the results could be generalized. Method: We surveyed developers from six industrial software development organizations about the defect report information, from three viewpoints: concerning quality, usefulness and automation possibilities of the information. Seventy-four developers out of 142 completed our survey. Results: Our reproduction confirms the results of the prior study in that ""steps to reproduce"" and ""observed behaviour"" are highly important defect information. Our results extend the results of the prior study as we found that ""part of the application"", ""configuration of the application"", and ""operating data"" are also highly important, but they were not surveyed in the prior study. Finally, we classified defect information as ""critical problems"", ""solutions"", ""boosters"", and ""essentials"" based on the survey answers. Conclusion: The quality of defect reports is a problem in the software industry as well as in the open source community. 
Thus, we suggest that a part of the defect reporting should be automated since many of the defect reporters lack technical knowledge or interest to produce high-quality defect reports.",2011,0, 5838,Mining Static Code Metrics for a Robust Prediction of Software Defect-Proneness,"Defect-proneness prediction is affected by multiple aspects including sampling bias, non-metric factors, uncertainty of models etc. These aspects often contribute to prediction uncertainty and result in variance of prediction. This paper proposes two methods of data mining static code metrics to enhance defect-proneness prediction. Given little non-metric or qualitative information extracted from software codes, we first suggest to use a robust unsupervised learning method, shared nearest neighbors (SNN) to extract the similarity patterns of the code metrics. These patterns indicate similar characteristics of the components of the same cluster that may result in introduction of similar defects. Using the similarity patterns with code metrics as predictors, defect-proneness prediction may be improved. The second method uses the Occam's windows and Bayesian model averaging to deal with model uncertainty: first, the datasets are used to train and cross-validate multiple learners and then highly qualified models are selected and integrated into a robust prediction. From a study based on 12 datasets from NASA, we conclude that our proposed solutions can contribute to a better defect-proneness prediction.",2011,0, 5839,Network Versus Code Metrics to Predict Defects: A Replication Study,"Several defect prediction models have been proposed to identify which entities in a software system are likely to have defects before its release. This paper presents a replication of one such study conducted by Zimmermann and Nagappan on Windows Server 2003 where the authors leveraged dependency relationships between software entities captured using social network metrics to predict whether they are likely to have defects. They found that network metrics perform significantly better than source code metrics at predicting defects. In order to corroborate the generality of their findings, we replicate their study on three open source Java projects, viz., JRuby, ArgoUML, and Eclipse. Our results are in agreement with the original study by Zimmermann and Nagappan when using a similar experimental setup as them (random sampling). However, when we evaluated the metrics using setups more suited for industrial use -- forward-release and cross-project prediction -- we found network metrics to offer no vantage over code metrics. Moreover, code metrics may be preferable to network metrics considering the data is easier to collect and we used only 8 code metrics compared to approximately 58 network metrics.",2011,0, 5840,Measuring Architectural Change for Defect Estimation and Localization,"While there are many software metrics measuring the architecture of a system and its quality, few are able to assess architectural change qualitatively. Given the sheer size and complexity of current software systems, modifying the architecture of a system can have severe, unintended consequences. We present a method to measure architectural change by way of structural distance and show its strong relationship to defect incidence. We show the validity and potential of the approach in an exploratory analysis of the history and evolution of the Spring Framework. 
Using other, public datasets, we corroborate the results of our analysis.",2011,0, 5841,Handling Estimation Uncertainty with Bootstrapping: Empirical Evaluation in the Context of Hybrid Prediction Methods,"Reliable predictions are essential for managing software projects with respect to cost and quality. Several studies have shown that hybrid prediction models combining causal models with Monte Carlo simulation are especially successful in addressing the needs and constraints of today's software industry: They deal with limited measurement data and, additionally, make use of expert knowledge. Moreover, instead of providing merely point estimates, they support the handling of estimation uncertainty, e.g., estimating the probability of falling below or exceeding a specific threshold. Although existing methods do well in terms of handling uncertainty of information, we can show that they leave uncertainty coming from imperfect modeling largely unaddressed. One of the consequences is that they probably provide over-confident uncertainty estimates. This paper presents a possible solution by integrating bootstrapping into the existing methods. In order to evaluate whether this solution does not only theoretically improve the estimates but also has a practical impact on the quality of the results, we evaluated the solution in an empirical study using data from more than sixty projects and six estimation models from different domains and application areas. The results indicate that the uncertainty estimates of currently used models are not realistic and can be significantly improved by the proposed solution.",2011,0, 5842,Inferring Skill from Tests of Programming Performance: Combining Time and Quality,"The skills of software developers are important to the success of software projects. Also, when studying the general effect of a tool or method, it is important to control for individual differences in skill. However, the way skill is assessed is often ad hoc, or based on unvalidated methods. According to established test theory, validated tests of skill should infer skill levels from well-defined performance measures on multiple, small, representative tasks. In this respect, we show how time and quality, which are often analyzed separately, can be combined as task performance and subsequently be aggregated as an approximation of skill. Our results show significant positive correlations between our proposed measures of skill and other variables, such as seniority, lines of code written, and self-evaluated expertise. The method for combining time and quality is a promising first step to measuring programming skill in both industry and research settings.",2011,0, 5843,Modeling the Number of Active Software Users,"More and more software applications are developed within a software ecosystem (SECO), such as the Facebook ecosystem and the iPhone AppStore. A core asset of a software ecosystem is its users, and the behavior of the users strongly affects the decisions of software vendors. The number of active users reflects user satisfaction and quality of the applications in a SECO. However, we can hardly find any literature about the number of active software users. Because software users are one of the most important assets of a software business, this information is very sensitive. In this paper, we analyzed the traces of software application users within a large scale software ecosystem with millions of active users.
We identified useful patterns of user behavior, and proposed models that help to understand the number of active application users. The model we proposed better predicts the number of active users than just looking at the traditional retention rate. It also provides a fast way to monitor user satisfaction of online software applications. We have therefore provided an alternative way for SECO platform vendors to identify rising or falling applications, and for third party application vendors to identify risks and opportunity of their products.",2011,0, 5844,What are Problem Causes of Software Projects? Data of Root Cause Analysis at Four Software Companies,"Root cause analysis (RCA) is a structured investigation of a problem to detect the causes that need to be prevented. We applied ARCA, an RCA method, to target problems of four medium-sized software companies and collected 648 causes of software engineering problems. Thereafter, we applied grounded theory to the causes to study their types and related process areas. We detected 14 types of causes in 6 process areas. Our results indicate that development work and software testing are the most common process areas, whereas lack of instructions and experiences, insufficient work practices, low quality task output, task difficulty, and challenging existing product are the most common types of the causes. As the types of causes are evenly distributed between the cases, we hypothesize that the distributions could be generalizable. Finally, we found that only 2.5% of the causes are related to software development tools that are widely investigated in software engineering research.",2011,0, 5845,Composite Release Values for Normalized Product-level Metrics,"For metrics normalized using field data, variability is often high, since different classes of customers use different features of large releases. It is important to understand the quality health of the entire release, so we need to have a way to estimate the overall normalized metric value for the entire release, across all product lines. This paper looks at three different ways to calculate release values for a key normalized metric, software defects per million usage hours per month (SWDPMH). The 'Aggregate,' 'Averaging,' and 'Indexing' approaches are defined and examined for two major IOS release variants. Each of these approaches has general strengths and weaknesses, and these are described. Each of the three approaches has been found to be useful in particular situations, and these scenarios are also described. The primary objective of this study is to find an accurate method to estimate SWDPMH for software releases that can be used to improve release management operations, corporate goaling, and best practices evaluation for successor feature releases. Since the methods described are not particular to SWDPMH, we believe they may be useful for other normalized metrics-this is an area of future work.",2011,0, 5846,Obtaining Thresholds for the Effectiveness of Business Process Mining,"Business process mining is a powerful tool to retrieve the valuable business knowledge embedded in existing information systems. The effectiveness of this kind of proposal is usually evaluated using recall and precision, which respectively measure the completeness and exactness of the retrieved business processes. 
Since the effectiveness assessment of business process mining is a difficult and error-prone activity, the main hypothesis of this work studies the possibility of obtaining thresholds to determine when recall and precision values are appropriate. The business process mining technique under study is MARBLE, a model-driven framework to retrieve business processes from existing information systems. The Bender method was applied to obtain the thresholds of the recall and precision measures. The experimental data used as input were obtained from a set of 44 business processes retrieved with MARBLE through a family of case studies carried out over the last two years. The study provides thresholds for recall and precision measures, which facilitates the interpretation of their values by means of five linguistic labels that range from low to very high. As a result, recall must be high (with at least a medium precision above 0.56), and precision must also be high (with at least a low recall of 0.70) to ensure that business processes were recovered (by using MARBLE) with an effectiveness value above 0.65. The thresholds allowed us to ascertain with more confidence whether MARBLE can effectively mine business processes from existing information systems. In addition, the provided results can be used as reference values to compare MARBLE with other similar business process mining techniques.",2011,0, 5847,Scrum + Engineering Practices: Experiences of Three Microsoft Teams,"The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.",2011,0, 5848,Design for testability in embedded software projects,"The purpose of this white paper is to focus on design techniques or methodologies that add testability features to embedded software which is an integral and important process for verification of any safety critical system. This paper presents testability in two forms: software testability i.e. testability at code level and design testability: i.e. testability at software requirements level. The two testability forms are further classified into subtypes explained in detail with examples which cite issues that are faced by engineers while performing verification. In this paper I have also come up with metrics which could become an important criterion in calculating the testability of a system considering the number of inputs that can be driven and the outputs that can be observed in the software during the verification process. Practical usage of these metrics may help designers understand how testable the system they have designed is, thus reducing verification effort and project cost. 
Good testability features, if not present in a system, may lead to increased cost and completion period of the project at a time when cost reduction and deadline chasing is the key to winning future projects. This paper endeavors to provide software developers guidance to incorporate important testability features into the software during the design and coding phase.",2011,0, 5849,Automated extraction of architecture-level performance models of distributed component-based systems,"Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments allowing to evaluate different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.",2011,0, 5850,Software process evaluation: A machine learning approach,"Software process evaluation is essential to improve software development and the quality of software products in an organization. Conventional approaches based on manual qualitative evaluations (e.g., artifacts inspection) are deficient in the sense that (i) they are time-consuming, (ii) they suffer from the authority constraints, and (iii) they are often subjective. To overcome these limitations, this paper presents a novel semi-automated approach to software process evaluation using machine learning techniques. In particular, we formulate the problem as a sequence classification task, which is solved by applying machine learning algorithms. Based on the framework, we define a new quantitative indicator to objectively evaluate the quality and performance of a software process. To validate the efficacy of our approach, we apply it to evaluate the defect management process performed in four real industrial software projects. Our empirical results show that our approach is effective and promising in providing an objective and quantitative measurement for software process evaluation.",2011,0, 5851,Ecological inference in empirical software engineering,"Software systems are decomposed hierarchically, for example, into modules, packages and files. This hierarchical decomposition has a profound influence on evolvability, maintainability and work assignment. Hierarchical decomposition is thus clearly of central concern for empirical software engineering researchers; but it also poses a quandary. At what level do we study phenomena, such as quality, distribution, collaboration and productivity? At the level of files? packages? or modules? How does the level of study affect the truth, meaning, and relevance of the findings? In other fields it has been found that choosing the wrong level might lead to misleading or fallacious results. 
Choosing a proper level for study is thus vitally important for empirical software engineering research; but this issue hasn't thus far been explicitly investigated. We describe the related idea of ecological inference and ecological fallacy from sociology and epidemiology, and explore its relevance to empirical software engineering; we also present some case studies, using defect and process data from 18 open source projects to illustrate the risks of modeling at an aggregation level in the context of defect prediction, as well as in hypothesis testing.",2011,0, 5852,Towards dynamic backward slicing of model transformations,"Model transformations are a frequently used means for automating software development in various domains to improve quality and reduce production costs. Debugging of model transformations often necessitates identifying parts of the transformation program and the transformed models that have causal dependence on a selected statement. In traditional programming environments, program slicing techniques are widely used to calculate control and data dependencies between the statements of the program. Here we introduce program slicing for model transformations, where the main challenge is to simultaneously assess data and control dependencies over the transformation program and the underlying models of the transformation. In this paper, we present a dynamic backward slicing approach for both model transformation programs and their transformed models based on automatically generated execution trace models of transformations.",2011,0, 5853,Automatically detecting the quality of the query and its implications in IR-based concept location,"Concept location is an essential task during software maintenance and in particular program comprehension activities. One of the approaches to this task is based on leveraging the lexical information found in the source code by means of Information Retrieval techniques. All IR-based approaches to concept location are highly dependent on the queries written by the users. An IR approach, even though good on average, might fail when the input query is poor. Currently there is no way to tell when a query leads to poor results for IR-based concept location, unless a considerable effort is put into analyzing the results after the fact. We propose an approach based on recent advances in the field of IR research, which aims at automatically determining the difficulty a query poses to an IR-based concept location technique. We plan to evaluate several models and relate them to IR performance metrics.",2011,0, 5854,Using Formal Concept Analysis to support change analysis,"Software needs to be maintained and changed to cope with new requirements, existing faults and change requests as software evolves. One particular issue in software maintenance is how to deal with a change proposal before change implementation. Changes to software often cause unexpected ripple effects. To avoid this and alleviate the risk of performing undesirable changes, some predictive measurement should be conducted and a change scheme of the change proposal should be presented. This research intends to provide a unified framework for change analysis, which includes dependency extraction, change impact analysis, changeability assessment, etc.
We expect that our change analysis framework will contribute directly to the improvement of the accuracy of these predictive measures before change implementation, and thus provide more accurate change analysis results for software maintainers, improve the quality of software evolution and reduce the software maintenance effort and cost.",2011,0, 5855,Texture feature based fingerprint recognition for low quality images,"Fingerprint-based identification is one of the most well-known and publicized biometrics for personal identification. Extracting features out of poor quality prints is the most challenging problem faced in this area. In this paper, a texture feature based approach for fingerprint recognition using Discrete Wavelet Transform (DWT) is developed to identify low quality fingerprints from inked-printed images on paper. The fingerprint image from paper is a very poor quality image and sometimes it is complex with a fabric background. Firstly, a center point area of the fingerprint is detected and, keeping the Core Point as the center point, the image of size w x w is cropped. Gabor filtering is applied for fingerprint enhancement over the orientation image. Finally, the texture features are extracted by analyzing the fingerprint with Discrete Wavelet Transform (DWT) and the Euclidean distance metric is used as the similarity measure. The accuracy is improved up to 98.98%.",2011,0, 5856,Management towards reducing cloud usage costs,"Summary form only given. Many organizations are attracted to cloud computing as an ICT (information and communications technology) sourcing model that can improve flexibility and total cost of ICT systems. However, it can be difficult for a prospective cloud customer to determine and manage cloud usage costs. We present an overview of several NICTA research projects that aim at providing information that can help ICT professionals determine various cloud usage costs and make decisions that are appropriate from the business viewpoint. Before migrating an application into a cloud, it is necessary to choose to which cloud to migrate, because there is a huge variety of cloud offerings, with significantly different pricing models. To accurately capture projected operating costs of an application in a particular cloud and enable side-by-side comparison of cloud offerings from different providers, NICTA developed a cost estimation tool that calculates the costs based on usage patterns and other characteristics of the application. This tool can also be used during runtime as an input into making adaptation/control decisions. To collect various runtime metrics (e.g., about the amount of transferred data or received quality of service - QoS) that are necessary for operational management and assessment of cloud usage costs, NICTA developed an innovative tool for flexible and integrated monitoring of applications in clouds and (in case of hybrid clouds) related local data centers. To help determine which runtime adaptation/control decisions are best from the business viewpoint (e.g., incur lowest cost), we extended the WS-Policy4MASC language and MiniZnMASC middleware for autonomic business-driven IT management with events and adaptation actions relevant for cloud management. The tools from the presented projects can be used separately or as parts of a powerful integrated cloud management system (which contains several additional tools).",2011,0, 5857,System Monitoring with Metric-Correlation Models,"Modern software systems expose management metrics to help track their health.
Recently, it was demonstrated that correlations among these metrics allow errors to be detected and their causes localized. Prior research shows that linear models can capture many of these correlations. However, our research shows that several factors may prevent linear models from accurately describing correlations, even if the underlying relationship is linear. Common phenomena we have observed include relationships that evolve, relationships with missing variables, and heterogeneous residual variance of the correlated metrics. Usually these phenomena can be discovered by testing for heteroscedasticity of the underlying linear models. Such behaviour violates the assumptions of simple linear regression, which thus fails to describe system dynamics correctly. In this paper we address the above challenges by employing efficient variants of Ordinary Least Squares regression models. In addition, we automate the process of error detection by introducing the Wilcoxon Rank-Sum test after proper correlation modeling. We validate our models using a realistic Java-Enterprise-Edition application. Using fault-injection experiments we show that our improved models capture system behavior accurately.",2011,0, 5858,Thermal analysis and experimental validation on cooling efficiency of thin film transistor liquid crystal display (TFT-LCD) panels,"This research explored the thermal analysis and modeling of a 32-inch thin film transistor liquid crystal display (TFT-LCD) panel, with the purpose of making possible improvements in cooling efficiencies. The illumination of the panel was ensured by 180 light emitting diodes (LEDs) located at the top and bottom edges of the panels. These LEDs dissipate high heat flux at low thermal resistance. Hence, in order to ensure good image quality in panels and long service life, adequate thermal management is necessary. For this purpose, a commercially available computational fluid dynamics (CFD) simulation software, ""FloEFD"", was used to predict the temperature distribution. This thermal prediction by the computational method was validated by an experimental thermal analysis, attaching 10 thermocouples on the back cover of the panel and measuring the temperatures. Thermal camera images of the panel taken by a FLIR Thermacam SC 2000 test device were also analyzed.",2011,0, 5859,Performance Analysis and Performance Modeling of Web-Applications,"We study the performance of coupled web servers and database servers of certain web-applications. We focus on the performance bottleneck of one of the most pressing applications and their average response time. Therefore we present the software architecture of this web-application, called ""Study-Portal"" of the University of Mannheim, and we identify typical workload scenarios. We discuss our performance measurements and document the software and hardware infrastructure used during the measurements. Our goal is the identification of the most influential parameters which determine the timing behavior of the combined web-application. For the modeling we will use standard stochastic methods, even if it will mean oversimplification. Our results then provide a simple estimate for the considered web-application: a doubling of application servers leads to half of the response time in case of sufficiently many clients.
The developed model fits well with the observations in the operating environment of an IT-center.",2011,0, 5860,Machine-Learning Models for Software Quality: A Compromise between Performance and Intelligibility,"Building powerful machine-learning assessment models is an important achievement of empirical software engineering research, but it is not the only one. Intelligibility of such models is also needed, especially, in a domain, software engineering, where exploration and knowledge capture is still a challenge. Several algorithms, belonging to various machine-learning approaches, are selected and run on software data collected from medium size applications. Some of these approaches produce models with very high quantitative performances, others give interpretable, intelligible, and ""glass-box"" models that are very complementary. We consider that the integration of both, in automated decision-making systems for assessing software product quality, is desirable to reach a compromise between performance and intelligibility.",2011,0, 5861,Impact of Data Sampling on Stability of Feature Selection for Software Measurement Data,"Software defect prediction can be considered a binary classification problem. Generally, practitioners utilize historical software data, including metric and fault data collected during the software development process, to build a classification model and then employ this model to predict new program modules as either fault-prone (fp) or not-fault-prone (nfp). Limited project resources can then be allocated according to the prediction results by (for example) assigning more reviews and testing to the modules predicted to be potentially defective. Two challenges often come with the modeling process: (1) high-dimensionality of software measurement data and (2) skewed or imbalanced distributions between the two types of modules (fp and nfp) in those datasets. To overcome these problems, extensive studies have been dedicated towards improving the quality of training data. The commonly used techniques are feature selection and data sampling. Usually, researchers focus on evaluating classification performance after the training data is modified. The present study assesses a feature selection technique from a different perspective. We are more interested in studying the stability of a feature selection method, especially in understanding the impact of data sampling techniques on the stability of feature selection when using the sampled data. Some interesting findings are found based on two case studies performed on datasets from two real-world software projects.",2011,0, 5862,Decimal Hamming: A Software-Implemented Technique to Cope with Soft Errors,"A low-overhead technique for correction of induced errors affecting algorithms and their data based on the concepts behind Hamming code is presented and evaluated. We go beyond Hamming code by computing the check digits as decimal sums, and using a checker algorithm to perform single error detection and correction and double error detection. This generalization allows for the protection of complex data structures, providing lower performance overhead than classic approaches based on algorithm redundancy, and has been successfully applied to different benchmarking algorithms and their associated data. 
Comparison of the resulting overheads with those imposed by the classic duplication with comparison shows that the proposed technique, named Decimal Hamming, imposes lower execution time and program memory overheads, while providing enhanced error correction capabilities.",2011,0, 5863,Assuring application-level correctness against soft errors,"Traditionally, research in fault tolerance has required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. To quantify user satisfaction, application-level fidelity metrics (such as PSNR) can be used. The output for such applications is defined to be correct if the fidelity metrics satisfy a certain threshold. However, such applications still contain instructions whose outputs are critical - i.e. their correctness decides if the overall quality of the program output is acceptable. In this paper, we present an analysis technique for identifying such critical program segments. More importantly, our technique is capable of guaranteeing application-level correctness through a combination of static analysis and runtime monitoring. Our static analysis consists of data flow analysis followed by control flow analysis to find static critical instructions which affect several instructions. Critical instructions are further refined into likely non-critical and likely critical sets in a profiling phase. At runtime, we use a monitoring scheme to monitor likely non-critical instructions and take remedial actions if some likely non-critical instructions become critical. Based on this analysis, we minimize the number of instructions that are duplicated and checked at runtime using a software-based fault detection and recovery technique [20]. Put together, our approach can lead to 22% average energy savings for multimedia applications while guaranteeing application-level correctness, when compared to a recent work [9], which cannot guarantee application-level correctness. Comparing to the approach proposed in [20] which guarantees both application-level and numerical correctness, our method achieves 79% energy reduction.",2011,0, 5864,Computing Properties of Large Scalable and Fault-Tolerant Logical Networks,"As the number of processors embedded in high performance computing platforms becomes higher and higher, it is vital to force the developers to enhance the scalability of their codes in order to exploit all the resources of the platforms. This often requires new algorithms, techniques and methods for code development that add to the application code new properties: the presence of faults is no more an occasional event but a challenge. Scalability and Fault-Tolerance issues are also present in hidden part of any platform: the overlay network that is necessary to build for controlling the application or in the runtime system support for messaging which is also required to be scalable and fault tolerant. In this paper, we focus on the computational challenges to experiment with large scale (many millions of nodes) logical topologies. We compute Fault-Tolerant properties of different variants of Binomial Graphs (BMG) that are generated at random. For instance, we exhibit interesting properties regarding the number of links regarding some desired Fault-Tolerant properties and we compare different metrics with the Binomial Graph structure as the reference structure. 
A software tool has been developed for this study and we show experimental results with topologies containing 21000 nodes. We also explain the computational challenge when we deal with such large scale topologies and we introduce various probabilistic algorithms to solve the problems of computing the conventional metrics.",2011,0, 5865,Design and development of the CO2 enriched Seawater Distribution System,"The kinetics of the reaction that occurs when CO2 and seawater are in contact is a complex function of temperature, alkalinity, final pH and TCO2 which taken together determine the time required for complete equilibrium. This reaction is extremely important to the study of Ocean Acidification (OA) and is the critical technical driver in the Monterey Bay Aquarium Research Institute's (MBARI) Free Ocean CO2 Enrichment (FOCE) experiments. The deep water FOCE science experiments are conducted at depths beyond scuba diver reach and demand that a valid perturbation experiment operate at a stable yet naturally fluctuating lower pH condition and avoid large or rapid pH variation as well as incomplete reactions, when we expose an experimental region or sample. Therefore, the technical requirement is to create a CO2 source in situ that is stable and well controlled. After extensive research and experimentation MBARI has developed the ability to create an in situ source of CO2 enriched seawater (ESW) for distribution and subsequent use in an ocean acidification experiment. The system mates with FOCE, but can be used in conjunction with other CO2 experimental applications in deep water. The ESW system is completely standalone from FOCE. While the chemical changes induced by the addition of fossil fuel CO2 on the ocean are well known and easily predicted, the biological consequences are less clear and the subject of considerable debate. Experiments have been successfully carried out on land to investigate the effects of elevated atmospheric CO2 levels in various areas around the globe but only limited work on CO2 impacts to ocean environmental systems has been carried out to date. With rising concern over the long-term reduction in ocean pH, there is a need for viable in situ techniques to carry out experiments on marine biological systems. Previous investigations have used aquaria that compromise these studies because of reduced ecological complexity and buffering capacity. Additionally, aquaria use tightly controlled experimental conditions such as temperature, artificial light, and water quality that do not represent the natural ocean variability. In order to study the future effects of ocean acidification, scientists and engineers at MBARI have developed a technique and apparatus for an in situ perturbation experiment, the FOCE experimental platform. At the time of this writing the FOCE system and associated ESW are attached to the Monterey Accelerated Research System (MARS) cabled observatory. Engineering validation and tuning experiments using remote control and real time experimental feedback are underway. Additionally, an extensive instrumentation suite provides all of the necessary data for pH calculation and experimental control. The ESW is a separately deployed system that stores and distributes CO2 enriched seawater. It receives power and communications via an underwater mateable electrical tether. The CO2 enriched seawater is pumped into the FOCE sections from the ESW.
This paper describes the design, development, and testing of the underwater ESW Distribution System as well as the software control algorithms as applied to FOCE. The paper covers the initial prototype, lessons learned, and the final operational version.",2011,0, 5866,3-dimensional analysis of Ground Penetrating Radar image for non-destructive road inspection,"Regular maintenance of highways is an important issue to ensure the safety of the vehicles using the road. Most existing methods of highway inspection are destructive, and take much time, effort, and cost. In this paper, we propose GPR (Ground Penetrating Radar) imaging to detect possible defects of the road. GPR scanning on a plane parallel to the road yields 3D images, so that slice-by-slice images can be generated for a comprehensive evaluation. First, we simulate the subsurface-scanning with GPR-Max software, by setting up the parameters similar to the expected real conditions. Then, we set up the experiment in our GPR Test-Range, in which a Network Analyzer is employed as a GPR. We compare and analyze both the simulation and Test-Range results, including slice analysis, to assess the quality of the method. Our results indicate the implementability of such 3D GPR imaging for road inspection.",2011,0, 5867,Measuring firewall security,"In recent years, more attention has been given to firewalls as they are considered the cornerstone in cyber defense perimeters. The ability to measure the quality of protection of a firewall policy is a key step to assess the defense level for any network. To accomplish this task, it is important to define objective metrics that are formally provable and practically useful. In this work, we propose a set of metrics that can objectively evaluate and compare the hardness and similarities of access policies of single firewalls based on rule tightness, the distribution of the allowed traffic, and security requirements. In order to analyze firewall policies based on the policy semantics, we used a canonical representation of firewall rules using Binary Decision Diagrams (BDDs) regardless of the rules' format and representation. The contribution of this work comes in measuring and comparing firewall security deterministically in terms of security compliance and weakness in order to optimize security policy and engineering.",2011,0, 5868,Critiquing Rules and Quality Quantification of Development-Related Documents,"As the development of embedded systems grows in scale, it is becoming more important for engineers to share development documents such as requirements, design specifications and testing specifications, and to accurately circulate and understand the information necessary for development. Also, many defects that originate in the surface expression of the documents are reported through investigations of causes of defects in embedded systems development. In this paper, we highlight improper surface expressions of Japanese documents, and define quality criteria and critiquing rules to detect problems such as ambiguous expressions or omissions of information. We also carry out visual quality inspections and evaluate detection performance, correlations and working time. Then, we verify the validity of the critiquing rules we have defined and apply them to the document critiquing tool to evaluate the quality of the actual documents used in the development of embedded systems. And we quantify the quality of these documents by automatically detecting improper expressions.
We also apply supplemental critiquing rules to the document critiquing tool for use by non-native speakers of Japanese, and verify their efficacy at improving the quality of Japanese documents created by foreigners.",2011,0, 5869,A Proposal of NHPP-Based Method for Predicting Code Change in Open Source Development,"This paper proposes a novel method for predicting the amount of source code changes (changed lines of code: changed-LOC) in open source development (OSD). While the software evolution can be observed through the public code repository in OSD, it is not easy to understand and predict the state of the whole development because of the huge amount of less-organized information. The method proposed in the paper predicts the code changes by using only data freely available from the code repository: the code-change time stamp and the changed-LOC. The method consists of two steps: 1) to predict the number of occurrences of code changes by using a non-homogeneous Poisson process (NHPP)-based model, and 2) to predict the amount of code changes by using the outcome of step 1 and the previously changed-LOC. The empirical work shows that the proposed method has an ability to predict the changed-LOC in the next 12 months with less than 10% error.",2011,0, 5870,Enabling Analysis and Measurement of Conventional Software Development Documents Using Project-Specific Formalism,"We describe a new approach to modeling and analyzing software development documents that are typically written using conventional office applications. Our approach brings automation to content extraction, quality checking and measurement of massive document artifacts that tend to be handled by labor-intensive manual work in industry today. Rather than seeking an approach based on creation or rewriting of contents using more rigid, machine-friendly representations such as standardized formal models and restricted languages, we provide a method to deal with the diversity of document artifacts by making use of project-specific formalism that exists in target documents. We demonstrate that such project-specific formalism often tends to ""naturally"" exist at syntactic levels, and that it is possible to define a ""document model"", a logical data representation gained by a transformation rule from the physical, syntactic structure to the logical, semantic structure. With this transformation, various quality checking rules for completeness, consistency, traceability, etc., are realized by evaluating constraints for data items in the logical structure, and measurement of these quality aspects is automated. We developed a tool to allow a user to easily define document models and checking rules, and provide insights on transformations when defining document models for various industry specification documents written in word processor files, spreadsheets and presentations. We also demonstrate that the use of natural language processing can improve document modeling and quality checking by compensating for a weakness of formalism and applying analysis to specific parts of the target documents.",2011,0, 5871,An Empirical Study of Fault Prediction with Code Clone Metrics,"In this paper, we present a replicated study to predict fault-prone modules with code clone metrics to follow Baba's experiment. We empirically evaluated the performance of fault prediction models with clone metrics using 3 datasets from the Eclipse project and compared it to fault prediction without clone metrics.
Contrary to Baba's original experiment, we could not significantly support the effect of clone metrics, i.e., the result showed that the F1-measure of fault prediction was not improved by adding clone metrics to the prediction model. To explain this result, this paper analyzed the relationship between clone metrics and fault density. The result suggested that clone metrics were effective in fault prediction for large modules but not for small modules.",2011,0, 5872,Quantifying the Effectiveness of Testing Efforts on Software Fault Detection with a Logit Software Reliability Growth Model,"Quantifying the effects of software testing metrics such as the number of test runs on the fault detection ability is quite important to design and manage effective software testing. This paper focuses on the regression model which represents the causal relationship between the software testing metrics and the fault detection probability. In a numerical experiment, we perform the quantitative estimation of the causal relationship through the quantization of software testing metrics.",2011,0, 5873,Analyzing Involvements of Reviewers through Mining a Code Review Repository,"In order to assure the quality of software, early detection of defects is highly recommended. Code review is one effective way of achieving such early detection of defects in software. Code review activities must contain various useful insights for software quality. However, especially in open source software developments, records of code review rarely exist. In this study, we try to analyze a code review repository of an open source software project, Chromium, which adopts a code review tool in its development. Before analyzing the code review data, we address 7 research questions. We can find interesting answers for these questions by repository mining.",2011,0, 5874,Using Efficient Machine-Learning Models to Assess Two Important Quality Factors: Maintainability and Reusability,"Building efficient machine-learning assessment models is an important achievement of empirical software engineering research. Their integration in automated decision-making systems is one of the objectives of this work. It aims at empirically verifying the relationships between some software internal artifacts and two quality attributes: maintainability and reusability. Several algorithms, belonging to various machine-learning approaches, are selected and run on software data collected from medium size applications. Some of these approaches produce models with very high quantitative performances; others give interpretable and ""glass-box"" models that are very complementary.",2011,0, 5875,Evidence-Based Evaluation of Effort Estimation Methods,"In the selection of appropriate effort estimation methods, there are different questions, depending on the situation in the concrete IT environment. The main goal is of course the best and most accurate estimate of the efforts and costs for their own IT department in conjunction with least cost to the estimate itself. On the other hand, questions of comparability within the company or (internationally distributed) business areas seem to be relevant in a global marketplace of software production. Furthermore, another target is to compare themselves with competitors and make the effort estimation transparent outside. This paper deals with these issues, examining the various cost estimation methods for their importance or ""evidence"" and the resulting selection criteria for individual applications.
Our paper is based on our theoretical and industrial experience in effort estimation over the last twenty years in our software measurement community.",2011,0, 5876,An Exploratory Study on the Impact of Usage of Screenshot in Software Inspection Recording Activity,"This paper describes an exploratory study on the use of screenshots for recording software inspection activities such as defect reproduction and correction. Although detected defects are usually recorded in writing, using screenshots to record detected defects should decrease the percentage of irreproducible defects and the time needed to reproduce defects during the defect correction phase. An experiment was conducted to clarify the efficiency of using screenshots to record detected defects. One practitioner group and two student groups participated in the experiment. The recorder in each group used a prototype support tool for capturing screenshots during the experiment. Each group conducted two trials: one with a general spreadsheet application to support recording, the other with the prototype tool that supports recording inspection activities. After the inspection meeting, the recorder was asked to reproduce the recorded defects. The percentage of reproduced defects and the time to reproduce defects were measured. The results of the experiment show that use of screenshots increases the percentage of reproduced defects and decreases the time needed to reproduce the defects. The results also indicate that use of the recording tool affected the types of defects.",2011,0, 5877,Fault Prediction Capability of Program File's Logical-Coupling Metrics,"Frequent changes in logically coupled source files induce bugs in software. Metrics have been used to identify source files which are logically coupled. In this paper, we propose an approach to compute a set of eight metrics, which measure logical-couplings among source files. We compute these metrics using the historical data of software changes which are related to the fixing of post release bugs. To validate that our proposed set of metrics is highly correlated with the number of bugs and is more capable of constructing a bug prediction model, we performed an experiment. Our experimental results show that our proposed set of metrics is highly correlated with the number of bugs, and hence can be used to construct a bug prediction model. In our experiment, the obtained accuracy of our bug predictor model is 97%.",2011,0, 5878,Software Metrics Based on Coding Standards Violations,"Software metrics are a promising technique to capture the size and quality of products and the development process in order to assess software development. Many software metrics based on various aspects of a product and/or a process have been proposed. There is some research which discusses the relation between software metrics and faults in order to use these metrics as an indicator of quality. Most of these software metrics are based on structural features of products or process information related to explicit faults. In this paper, we focus on latent faults detected by static analysis techniques. The coding checker is widely used to find coding standards violations which are strongly related to latent faults. In this paper, we propose new software metrics based on coding standards violations to capture latent faults in development.
We analyze two open source projects by using proposed metrics and discuss the effectiveness.",2011,0, 5879,Approach to Introducing a Statistical Quality Control,"This paper describes some examples and points about implementing a statistical quality control in developing business systems in Sumitomo Electric Industries, Ltd. and Sumitomo Electric Information Systems Co., Ltd. Although the X-R chart is often used in statistical quality control, it is recommended to introduce the u-chart if defects are to be controlled in future. If a defect detection process such as a review or a test does not reach a statistical steady state that indicates a process is stable on the control chart, an appropriate control could be conducted by not only further standardizing the process, but also reviewing the definition of size indicators. Quality prediction can be made possible by accumulating defect data collected to create a control chart and analyzing the distributions of introduced defect densities at an organization level. If the accuracy of quality prediction is low, it can be enhanced by carrying out improvements to narrow the widths of distributions of introduced defect densities.",2011,0, 5880,Internal and External Software Benchmark Repository Utilization for Effort Estimation,"Data repositories play critical role in software management practices. Construction of the estimation models, benchmark of software performance indicators, identification of the process improvement opportunities, and quality assessment are major utilization areas of software repositories. This paper presents the observed difficulties in utilization of external and multi-organizational software benchmark repositories for effort estimation model construction for a software organization in finance domain. ISBSG, Albrecht, China, Desharnais, Finnish, Maxwell and Kemerer repositories' data were utilized in this study. The approach was the utilization of these repositories and organization's own repository to estimate the software development effort and evaluate whether external and multi-organizational data be used for effort estimation.",2011,0, 5881,Common Practices and Problems in Effort Data Collection in the Software Industry,"Effort data is crucial for project management activities in software projects. This data is utilized in estimations that are required for project planning, in the formation of benchmarking data sets and as a main input for project monitoring and control activities. However, there are known problems regarding effort data collection in the industry. In this study we investigate the effort data collection practices in the industry and factors that lead to inaccurate effort data collection. A pilot study was conducted to observe problems in effort data collection and a survey is conducted to observe the existence of these problems in the industry as well as causes of these problems. In this paper, findings of these studies are presented along with certain suggestions to improve the quality of effort data.",2011,0, 5882,OpenMDSP: Extending OpenMP to Program Multi-Core DSP,"Multi-core Digital Signal Processors (DSP) are widely used in wireless telecommunication, core network transcoding, industrial control, and audio/video processing etc. Comparing with general purpose multi-processors, the multi-core DSPs normally have more complex memory hierarchy, such as on-chip core-local memory and non-cache-coherent shared memory. As a result, it is very challenging to write efficient multi-core DSP applications. 
The current approach to program multi-core DSPs is based on proprietary vendor SDKs, which only provide low-level, non-portable primitives. While it is acceptable to write coarse-grained task level parallel code with these SDKs, it is very tedious and error prone to write fine-grained data parallel code with them. We believe it is desirable to have a high-level and portable parallel programming model for multi-core DSPs. In this paper, we propose OpenMDSP, an extension of OpenMP designed for multi-core DSPs. The goal of OpenMDSP is to fill the gap between the OpenMP memory model and the memory hierarchy of multi-core DSPs. We propose three classes of directives in OpenMDSP: (1) data placement directives allow programmers to control the placement of global variables conveniently, (2) distributed array directives divide a whole array into sections and promote them into core-local memory to improve performance, and (3) stream access directives promote a big array into core-local memory section by section during a parallel loop's processing. We implement the compiler and runtime system for OpenMDSP on Freescale MSC8156. Benchmarking results show that seven out of nine benchmarks achieve a speedup of more than 5 with 6 threads.",2011,0, 5883,An Evaluation of Vectorizing Compilers,"Most of today's processors include vector units that have been designed to speed up single-threaded programs. Although vector instructions can deliver high performance, writing vector code in assembly language or using intrinsics in high level languages is a time consuming and error-prone task. The alternative is to automate the process of vectorization by using vectorizing compilers. This paper evaluates how well compilers vectorize a synthetic benchmark consisting of 151 loops, two applications from Petascale Application Collaboration Teams (PACT), and eight applications from Media Bench II. We evaluated three compilers: GCC (version 4.7.0), ICC (version 12.0) and XLC (version 11.01). Our results show that despite all the work done in vectorization in the last 40 years, 45-71% of the loops in the synthetic benchmark and only a few loops from the real applications are vectorized by the compilers we evaluated.",2011,0, 5884,Using Automated Control Charts for the Runtime Evaluation of QoS Attributes,"As modern software systems operate in a highly dynamic context, they have to adapt their behaviour in response to changes in their operational environment and/or requirements. Triggering adaptation depends on detecting quality of service (QoS) violations by comparing observed QoS values to predefined thresholds. These threshold-based adaptation approaches result in late adaptations as they wait until violations have occurred. This may lead to undesired consequences such as late response to critical events. In this paper we introduce a statistical approach, CREQA - Control Charts for the Runtime Evaluation of QoS Attributes. This approach estimates at runtime the capability of a system, and then it monitors and provides early detection of any changes in QoS values, allowing timely intervention in order to prevent undesired consequences. We validated our approach using a series of experiments and response time datasets from real world web services.",2011,0, 5885,Full-system analysis and characterization of interactive smartphone applications,"Smartphones have recently overtaken PCs as the primary consumer computing device in terms of annual unit shipments.
Given this rapid market growth, it is important that mobile system designers and computer architects analyze the characteristics of the interactive applications users have come to expect on these platforms. With the introduction of high-performance, low-power, general purpose CPUs in the latest smartphone models, users now expect PC-like performance and a rich user experience, including high-definition audio and video, high-quality multimedia, dynamic web content, responsive user interfaces, and 3D graphics. In this paper, we characterize the microarchitectural behavior of representative smartphone applications on a current-generation mobile platform to identify trends that might impact future designs. To this end, we measure a suite of widely available mobile applications for audio, video, and interactive gaming. To complete this suite we developed BBench, a new fully-automated benchmark to assess a web-browser's performance when rendering some of the most popular and complex sites on the web. We contrast these applications' characteristics with those of the SPEC CPU2006 benchmark suite. We demonstrate that real-world interactive smartphone applications differ markedly from the SPEC suite. Specifically, the instruction cache, instruction TLB, and branch predictor suffer from poor performance. We conjecture that this is due to the applications' reliance on numerous high-level software abstractions (shared libraries and OS services). Similar trends have been observed for UI-intensive interactive applications on the desktop.",2011,0, 5886,Towards optimal performance and resource management in web systems via model predictive control,"Management of the performance quality attributes and shared computing resources in a web system environment is vital to many business domains in order to achieve business objectives. These systems need to provide agreed levels of quality of service to their clients while allocating limited available resources among them. This paper proposes a new runtime management scheme based on predictive control to manage such systems within defined constraints. It first addresses the issue of building a multi-input multi-output (MIMO) system model when the equality constraints on the total amount of resources are considered. Then, the same performance management and resource allocation problem is reformulated with inequality constraints to improve the system model identification and runtime control. With the dynamic model built, the resource management problem in a shared resource environment is solved using model predictive control and constraint optimization. The performance of the proposed control system is validated on a prototype system implementing a real-world scenario under different operating conditions.",2011,0, 5887,Evaluating the viability of process replication reliability for exascale systems,"As high-end computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are increasingly problematic at these scales due to excessive overheads predicted to more than double an application's time to solution. Replicated computing techniques, particularly state machine replication, long used in distributed and mission critical systems, have been suggested as an alternative to checkpoint-restart.
In this paper, we evaluate the viability of using state machine replication as the primary fault tolerance mechanism for upcoming exascale systems. We use a combination of modeling, empirical analysis, and simulation to study the costs and benefits of this approach in comparison to checkpoint/restart on a wide range of system parameters. These results, which cover different failure distributions, hardware mean times to failure, and I/O bandwidths, show that state machine replication is a potentially useful technique for meeting the fault tolerance demands of HPC applications on future exascale platforms.",2011,0, 5888,Large scale debugging of parallel tasks with AutomaDeD,"Developing correct HPC applications continues to be a challenge as the number of cores increases in today's largest systems. Most existing debugging techniques perform poorly at large scales and do not automatically locate the parts of the parallel application in which the error occurs. The overhead of collecting large amounts of runtime information and an absence of scalable error detection algorithms generally cause poor scalability. In this work, we present novel, highly efficient techniques that facilitate the process of debugging large scale parallel applications. Our approach extends our previous work, AutomaDeD, in three major areas to isolate anomalous tasks in a scalable manner: (i) we efficiently compare elements of graph models (used in AutomaDeD to model parallel tasks) using pre-computed lookup-tables and by pointer comparison; (ii) we compress per-task graph models before the error detection analysis so that comparison between models involves many fewer elements; (iii) we use scalable sampling-based clustering and nearest-neighbor techniques to isolate abnormal tasks when bugs and performance anomalies are manifested. Our evaluation with fault injections shows that AutomaDeD scales well to thousands of tasks and that it can find anomalous tasks in under 5 seconds in an online manner.",2011,0, 5889,An adaptive H.264 video protection scheme for video conferencing,"Real-time video communication such as Internet video conferencing is often afflicted by packet loss over the network. To improve video quality, error protection schemes based on FMO in H.264 have been introduced, but their encoding efficiency is unacceptable. This paper presents a novel region of interest (ROI) protection scheme that can accurately extract the ROI area using facial recognition and greatly speed up video encoding based on feedback, using an x264 codec implementation. In this scheme, the video receiver uses a packet loss prediction model to predict whether to send feedback to the video sender, which dynamically adjusts the ROI protection scheme. Experiments prove that the quality of the ROI area can be effectively improved by the scheme, whose encoding performance is 50 times higher than that of FMO-based algorithms.",2011,0, 5890,Spam diagnosis infrastructure for individual cyberspace,"The theory, methods and architecture of parallel information analysis are presented in the form of analytical, graph and table associative relations for the search, recognition and diagnosis of destructive components and for decision making in an n-dimensional vector cybernetic individual space. Vector-logical process models of actual oriented tasks are considered.
They include the diagnosis of spam and the recovery of serviceability of the hardware-software components of computer systems; the decision quality is estimated by the interactions of non-arithmetic metrics of Boolean vectors. The concept of a self-developing information computer ecosystem, which repeats the evolution of the functionality of a person, is offered. Original process models of associative-logical information analysis are represented on the basis of a high-speed multiprocessor in an n-dimensional vector discrete space.",2011,0, 5891,"The evidential independent verification of software of information and control systems, critical to safety: Functional model of scenario","The results of the development of the techniques which form the scenario of the target technology «Evidential independent verification of I&C Systems Software of critical application» and the utilities of the scenario support at the information, analytical and organizational levels are presented in the article. The result of the scenario implementation is the quantitative definition of the latent fault probability and the completeness of test coverage for critical software. This technology can be used by I&C systems developers, certification and regulation bodies to carry out independent verification (or certification) during modernization and modification of critical software directly on client objects without intruding on (interrupting) technological processes.",2011,0, 5892,High performance H.264/AVC encoding motion prediction algorithm,"The ITU-T H.264 codec can currently be considered the most efficient professional video coding standard. This codec combines distinct complex techniques in order to encapsulate raw video into efficient compressed streams. Unfortunately, the improved H.264 coding efficiency significantly increases the computational complexity, making real-time applications difficult to achieve. In particular, motion prediction is an important and complex algorithm used to remove the temporal redundancy of image sequences. Considering that, we propose an optimized high-performance small diamond topology zonal search motion prediction algorithm. The proposed method has been designed to operate with orthogonal linear vectors, which are specially aligned with the 4:1 sub-sampled memory organization, reducing the number of operations and data manipulations. The proposed solution was implemented in the H.264 JM reference software, obtaining results at least two times faster than other available fast algorithms with negligible quality losses.",2011,0, 5893,A stochastic formulation of successive software releases with faults severity,"Software companies are coming up with multiple add-ons to survive in a purely competitive environment. Each succeeding upgrade offers some performance enhancement and distinguishes itself from the past release. If the size of the software system is large, the number of faults detected during the testing phase becomes large, and the number of faults which are removed through each debugging becomes small compared to the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a multi-release software reliability growth model based on Itô's type of differential equation. The model categorizes faults into two categories, simple and hard, with respect to the time they take for isolation and removal after their observation.
The model developed is validated on a real data set.",2011,0, 5894,Assessing Measurements of QoS for Global Cloud Computing Services,"Many globally distributed cloud computing applications and services running over the Internet, between globally dispersed clients and servers, will require certain levels of Quality of Service (QoS) in order to deliver a sufficiently smooth user experience. This would be essential for real-time streaming multimedia applications like online gaming and watching movies on a pay-as-you-use basis hosted in a cloud computing environment. However, guaranteeing or even predicting QoS in global and diverse networks supporting complex hosting of application services is a very challenging issue that needs a stepwise refinement approach to be solved as the technology of cloud computing matures. In this paper, we investigate whether latency in terms of simple Ping measurements can be used as an indicator for other QoS parameters such as jitter and throughput. The experiments were carried out on a global scale, between servers placed in universities in Denmark, Poland, Brazil and Malaysia. The results show some correlation between latency and throughput, and between latency and jitter, even though the results are not completely consistent. As a side result, we were able to monitor the changes in QoS parameters during a number of 24-hour periods. This is also a first step towards defining QoS parameters to be included in Service Level Agreements for cloud computing in the foreseeable future.",2011,0, 5895,Analysis of rotor fault detection in inverter fed induction machines at no load by means of finite element method,"This paper analyzes a new method for detecting defective rotor bars at zero load and standstill by means of modeling using the finite element method (FEM). The detection method uses voltage pulses generated by the switching of the inverter to excite the machine and measures the corresponding reaction of the machine phase currents, which can be used to identify a modulation of the transient leakage inductance caused by asymmetries within the machine. The presented 2D finite element model and the simulation procedure are oriented towards this approach and are developed by means of the FEM software ANSYS. The analysis shows how the transient flux linkage imposed by voltage pulses is influenced by a broken bar, leading to a very distinct rotor-fixed modulation that can be clearly exploited for monitoring. Simulation results are presented to show the transient flux paths. These simulation results are supported by measurements on a specially manufactured induction machine.",2011,0, 5896,Decoupled recursive-least-squares technique for extraction of instantaneous synchronized symmetrical components under fault conditions,"This paper presents the decoupled recursive-least-squares (DRLS) technique for the extraction of instantaneous synchronized symmetrical components under fault conditions in power grids. The proposed DRLS technique demonstrates outstanding robustness during faults, subsequent circuit breaker operation, and transmission line reclosing. The proposed DRLS technique also significantly reduces the computational burden of the conventional RLS technique for implementation on digital signal processors (DSPs). The performance of the proposed DRLS technique has been evaluated through presenting selected simulation results in MATLAB-Simulink software.
DSP implementation of the DRLS technique also confirms a considerable computational efficiency improvement in comparison with that of the conventional RLS technique. The DRLS technique also shows better efficiency in comparison with the enhanced phase-locked loop (EPLL) technique under a power system highly distorted by harmonics, according to our implementation on two different R&D DSP platforms.",2011,0, 5897,System Failure Forewarning Based on Workload Density Cluster Analysis,"Each computer system contains design objectives for long-term usage, so the operator must conduct a continuous and accurate assessment of system performance in order to detect the potential factors that will degrade system performance. Condition indicators are the basic components of diagnosis. It is important to select feature vectors that meet the criteria in order to provide true accuracy and powerful diagnostic routines. Our goal is to indicate the actual system status according to the workload, and to use clustering techniques to analyze the workload distribution density to build diagnostic templates. Such templates can be used for system failure forewarning. In the proposed system, we present an approach based on workload density cluster analysis to automatically monitor the health of software systems and provide system failure forewarning. Our approach consists of tracking the workload density of metric clusters. We employ the statistical template model to automatically identify significant changes in cluster movement, therefore enabling robust fault detection. We observed two circumstances from the experimental results. First, under most normal statuses, the lowest accuracy value approximates our theoretical minimum threshold of 84%. Such a result implies a close correlation between our measured and real system status. Second, the command data used by the system could predict 90% of the events announced, which reveals the prediction effectiveness of the proposed system. Although it is infeasible for the system to process the largest possible fault events in the deployment of resources, we could apply statistics to characterize the anomalous behaviors to understand the nature of emergencies and to test system service under such scenarios.",2011,0, 5898,A hybrid method for constructing High Level Architecture of BBS user network,"It is useful to understand the High Level Architecture (HLA) of the user network of Bulletin Board Systems (BBS) for some applications. In this paper, we construct the HLA of a BBS user network through hybrid static and dynamic analysis of the quantitative temporal graphs that are extracted from the BBS entries. We first detect the HLA framework through the static structural analysis of the aggregation of the temporal graphs. Then, we identify the HLA components, including communities, community cores, and hubs, elaborately through the dynamic analysis of the quantitative temporal attributes of nodes. The hybrid method guarantees the HLA quality as it removes the false components from the HLA. It also keeps the computational cost at a low level. A metric is proposed to evaluate the HLA efficiency in information transmission. The experiments show that the HLA constructed by the hybrid method outperforms that constructed by the comparative method.",2011,0, 5899,Algorithm analyzer to check the efficiency of codes,"The efficiency of developed code is always an issue in software development.
Software can be said to be of good quality if its measurable features can be quantitatively checked for adherence to standards or to certain set rules. Software metrics can therefore come into play in the sense of helping to measure certain characteristics of software. The issues and factors pertaining to the efficiency of code will be addressed by software metrics. Existing tools that are used to analyze several software metrics have come a long way in helping to assess this very important part of software development. This paper describes how software metrics can be used in analyzing the efficiency of the developed code at an early stage of development. A tool (an algorithm analyzer) was developed to analyze a given piece of code, check its efficiency level and produce efficiency reports based on the analysis. The system is able to help with code checking whilst maintaining coding standards for its users. With the reports that are generated, it is easy for users to determine the efficiency of their object-oriented code.",2011,0, 5900,Design of the mechanical condition monitoring system for Molded Case Circuit Breakers,"In this paper, a mechanical condition monitoring system for Molded Case Circuit Breakers (MCCBs) is designed and developed. The operation principle of the monitoring system, the hardware design and the software design are introduced in detail. The three-phase voltage, the voltage between the auxiliary normally closed contacts and the motor current are detected during the opening operation, closing operation and reclosing operation of the MCCB, where these operations are driven by the motor. The mechanical condition characteristic parameters, including closing time, opening time, reclosing time, three-phase asynchronous time, closing speed, opening speed, reclosing speed and the force on the handle, can be calculated by analyzing the voltage signals and current signals. The test results show that the system has a good performance. Moreover, the characteristic parameters of the circuit breaker, obtained in the test, provide test data for theoretical research on the remaining-life prediction of MCCBs.",2011,0, 5901,A Novel Framework for Monitoring and Analyzing Quality of Data in Simulation Workflows,"In recent years scientific workflows have been used for conducting data-intensive and long-running simulations. Such simulation workflows have processed and produced different types of data whose quality has a strong influence on the final outcome of simulations. Therefore, being able to monitor and analyze the quality of this data during workflow execution is of paramount importance, as the detection of quality problems will enable us to control the execution of simulations efficiently. Unfortunately, existing scientific workflow execution systems do not support the monitoring and analysis of quality of data for multi-scale or multi-domain simulations. In this paper, we examine how quality of data can be comprehensively measured within workflows and how the measured quality can be used to control and adapt running workflows. We present a quality of data measurement process and describe a quality of data monitoring and analysis framework that integrates this measurement process into a workflow management system.",2011,0, 5902,Exploiting Text-Related Features for Content-based Image Retrieval,"Distinctive visual cues are of central importance for image retrieval applications, in particular, in the context of visual location recognition.
While in indoor environments typically only a few distinctive features can be found, outdoors, dynamic objects and clutter significantly impair the retrieval performance. We present an approach which exploits text, a major source of information for humans during orientation and navigation, without the need for error-prone optical character recognition. To this end, characters are detected and described using robust feature descriptors like SURF. By quantizing them into several hundred visual words, we consider the distinctive appearance of the characters rather than reducing the set of possible features to an alphabet. Writings in images are transformed to strings of visual words termed visual phrases, which provide significantly improved distinctiveness when compared to individual features. An approximate string matching is performed using N-grams, which can be efficiently combined with an inverted file structure to cope with large datasets. An experimental evaluation on three different datasets shows significant improvement of the retrieval performance while reducing the size of the database by two orders of magnitude compared to the state of the art. Its low computational complexity makes the approach particularly suited for mobile image retrieval applications.",2011,0, 5903,Towards Energy Consumption Measurement in a Cloud Computing Wireless Testbed,"The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of ""all-IP"" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users aim at having their services accessible anytime and anywhere. The availability of services is also related to the end-user device, where one of the major constraints is the battery lifetime. Therefore, it is necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure network interface energy consumption is proposed. By employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results obtained show the impact of accurate network interface state management and application network-level design on energy consumption. Additionally, the achieved outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.",2011,0, 5904,Implementation and Usability Evaluation of a Cloud Platform for Scientific Computing as a Service (SCaaS),"Scientific computing requires simulation and visualization involving large data sets among collaborating teams. Cloud platforms offer a promising solution via SCaaS. We report on the architecture, implementation and User Experience (UX) evaluation of one such SCaaS platform implementing TOUGH2V2.0, a numerical simulator for sub-surface fluid and heat flow, offered as a service. Results from example simulations, with virtualization of workloads in a multi-tenant, Virtual Machine (VM)-based cloud platform, are presented.
These include fluid production from a geothermal reservoir, diffusive and advective spreading of contaminants, radial flow from a CO2 injection well and gas diffusion of a chemical through porous media. Prepackaged VM pools deployed autonomically ensure that sessions are provisioned elastically on demand. Users can access data-intensive visualizations via a web browser. Authentication, user state and sessions are managed securely via an Access Gateway, to autonomically redirect and manage the workflows when multiple concurrent users are accessing their own sessions. Usability in the cloud and on the traditional desktop is comparatively assessed, using several UX metrics. Simulated network conditions of different quality were imposed using a WAN emulator. Usability was found to be good for all the simulations under even moderately degraded network quality, as long as latency was not well above 100 ms. The hosting of a complex scientific computing application on an actual, global Enterprise cloud platform (as opposed to earlier remoting platforms) and its usability assessment, both presented for the first time, are the essential contributions of this work.",2011,0, 5905,Optimal Sizing of Combined Heat & Power (CHP) Generation in Urban Distribution Network (UDN),"The capacity of Combined Heat and Power (CHP) generation connected to the Urban Distribution Network (UDN) will increase significantly as a result of EU government targets and initiatives. CHP generation can have a significant impact on the power flow, voltage profile, fault current level and the power quality for customers and electricity suppliers. The connection of a CHP plant to the UDN creates a number of well-documented impacts, with voltage rise and fault current level being the dominant effects. A range of options have traditionally been used to mitigate adverse impacts, but these generally revolve around network upgrades, the cost of which may be considerable. Connection of CHP generation can fundamentally alter the operation of the UDN. Where CHP plant capacity is comparable to or larger than local demand, there are likely to be observable impacts on network power flows, voltage regulation and fault current level. New connections of CHP schemes must be evaluated to identify and quantify any adverse impact on the security and quality of local electricity supplies. The impacts that arise from an individual CHP scheme are assessed in detail when the developer makes an application for the connection of the CHP plant. The objective of this paper is to use a static method to develop techniques that provide a means of determining the optimum capacity of a CHP plant that may be accommodated within the UDN. The main tool used in this paper is the ERAC power analysis software, incorporating load flow and fault current level analysis. These analyses are demonstrated on a 15-busbar network resembling part of a typical UDN. In order to determine the optimal placement and sizing of a CHP plant that could be connected at any particular busbar on the UDN without causing a significant adverse impact on the performance of the UDN, a multiple linear regression model is created and demonstrated using the data obtained from the analysis performed by the ERAC power analysis software.",2011,0, 5906,A practical approach for channel problem detection and adaptation in tactical radio systems,"Soldier-based tactical radio systems in the battlefield tend to be severely affected by various phenomena such as signal jamming and environmental deterioration.
This is because many types of wireless handheld and manpack devices carried by soldiers may not have efficient algorithms or the hardware for adapting the device to these conditions. Taking advantage of the fact that most of these devices use software-defined radios but do not fully utilize their functionalities, we present channel quality estimation and channel adaptation algorithms that can solve the problems of these devices. We utilize simple and practical channel problem detection methods by constantly monitoring channel quality parameters such as the signal-to-interference-noise ratio and the clear channel assessment value while inducing low overhead. Furthermore, we propose efficient channel adaptation methods that can quickly sense and avoid deteriorated channel conditions. In our experimental studies, we implement the proposed scheme on actual testbeds and show that it can improve the performance and mission-criticality of tactical networks.",2011,0, 5907,Computational resiliency for distributed applications,"In recent years, computer network attacks have decreased the overall reliability of computer systems and undermined confidence in mission-critical software. These robustness issues are magnified in distributed applications, which provide multiple points of failure and attack. The notion of resiliency is concerned with constructing applications that are able to operate through a wide variety of failures, errors, and malicious attacks. A number of approaches have been proposed in the literature based on fault tolerance achieved through replication of resources. In general, these approaches provide graceful degradation of performance to the point of failure but do not guarantee progress in the presence of multiple cascading and recurrent failures. Our approach is to dynamically replicate message-passing processes, detect inconsistencies in their behavior, and restore the level of fault tolerance as a computation proceeds. This paper describes a novel operating system technology for resilient message-passing applications that is automated, scalable, and transparent. The technology provides mechanisms for process replication, process migration, and adaptive failure detection. To quantify the performance overhead of the technology, we benchmark a distributed application exemplar to represent a broader class of applications.",2011,0, 5908,Adaptive Failure Detection via Heartbeat under Hadoop,"Hadoop has become a popular framework for processing massive data sets in a large-scale cluster. However, it is observed that the detection of a failed worker is delayed, which may result in a significant increase in the completion time of jobs with different workloads. To cope with this, we present two mechanisms, the Adaptive interval and the Reputation-based Detector, that help Hadoop detect a failed worker in the shortest time. The Adaptive interval dynamically configures the expiration time, adapting it to the job size. The Reputation-based Detector evaluates the reputation of each worker; once the reputation of a worker falls below a threshold, the worker is considered failed. In our experiments, we demonstrate that both of these strategies achieve a great improvement in the detection of failed workers.
Specifically, the Adaptive interval has a relatively better performance with small jobs, while the Reputation-based Detector is more suitable for large jobs.",2011,0, 5909,Software Fault Prediction Framework Based on aiNet Algorithm,"Software fault prediction techniques are helpful in developing dependable software. In this paper, we propose a novel framework that integrates the testing and prediction processes for unit-testing prediction. Because highly fault-prone metric data are widely scattered and multiple centers can represent the whole dataset better, we used the artificial immune network (aiNet) algorithm to extract and simplify data from the modules that have been tested, and then generated multiple centers for each network by Hierarchical Clustering. The proposed framework acquires information in a timely manner along with the testing process and dynamically adjusts the network generated by the aiNet algorithm. Experimental results show that higher accuracy can be obtained by using the proposed framework.",2011,0, 5910,Adaptive Energy-Efficient Architecture for WCDMA Channel Estimation,"Due to fast-changing wireless communication standards coupled with strict performance constraints, the demand for flexible yet high-performance architectures is increasing. To tackle the flexibility requirement, Software-Defined Radio (SDR) is emerging as an obvious solution, where the underlying hardware implementation is tuned via software layers to the varied standards depending on power-performance and quality requirements, leading to adaptable, cognitive radio. Designing the hardware architecture for SDR is an interesting challenge, which involves determining the perfect balance of flexibility and performance for the target algorithmic kernels. In this paper, we conduct such a design case study for representatives of two complexity classes of WCDMA channel estimation algorithms. The two algorithms, polynomial channel estimation and weighted multi-slot averaging, also differ significantly in their algorithmic performance, a difference which can be exploited in cognitive radio. Furthermore, we propose new design guidelines for highly specialised architectures that provide just enough flexibility to support multiple applications, targeting adaptability, low power and high performance. Our experiments with various design points show that the resulting architecture meets the performance constraints of WCDMA and offers weak programmability to tune the architecture depending on the power/performance constraints of SDR.",2011,0, 5911,Specialist tool for monitoring the measurement degradation process of induction active energy meters,"This paper presents a methodology and a specialist tool for the failure probability analysis of induction-type watt-hour meters, considering the main variables related to their measurement degradation processes. The database of the metering park of a distribution company, named Elektro Electricity and Services Co., was used to determine the most relevant variables and to feed the data into the software. The modeling developed to calculate the watt-hour meters' probability of failure was implemented in a tool through a user-friendly platform, written in the Delphi language. Among the main features of this tool are: analysis of the probability of failure by risk range; geographical localization of the meters in the metering park; and automatic sampling of induction-type watt-hour meters, based on a risk classification expert system, in order to obtain information to aid the management of these meters.
The main goals of the specialist tool are following and managing the measurement degradation, maintenance and replacement processes for induction watt-hour meters.",2011,0, 5912,Prototype design of low cost four channels digital electroencephalograph for sleep monitoring,"The electrical activity of the brain, known as the electroencephalogram (EEG) signal, is used in the diagnosis of sleep quality. Based on the EEG signal, the power of the brain waves related to sleep quality can be obtained by analysis of the power spectral density. The problem in developing countries, for example Indonesia, is that EEG instruments are not widely available in every region of the country. This project designed and implemented a four-channel digital EEG, in which the hardware and software design concepts were adopted from the OpenEEG project. A battery-operated four-channel EEG amplifier with an average gain magnitude of 6100 times, a bandwidth of 0.05-60Hz and a slope gradient of -60.00 dB/decade was developed. A digital board consisting of an ATmega8 and a serial interface with an optocoupler is used to interface with a notebook and view the EEG signal. The prototype has successfully detected patterns of the cardiac signal simultaneously with a good SNR. In EEG measurement through monitoring of brain waves during sleep, the data generated by the PSD (Power Spectral Density) graph show the dominance of brain signals at 7-9Hz (alpha) and 3-5Hz (theta). From several tests and measurements, this research concludes that the low-cost four-channel EEG prototype is capable of acquiring satisfactory brain-wave monitoring during sleep from a healthy volunteer.",2011,0, 5913,Automated Verification of Load Tests Using Control Charts,"Load testing is an important phase in the software development process. It is very time consuming but there is usually little time for it. As a solution to the tight testing schedule, software companies automate their testing procedures. However, existing automation only reduces the time required to run load tests. The analysis of the test results is still performed manually. A typical load test outputs thousands of performance counters. Analyzing these counters manually requires time and tacit knowledge of the system-under-test from the performance engineers. The goal of this study is to derive an approach to automatically verify load test results. We propose an approach based on a statistical quality control technique called control charts. Our approach can a) automatically determine if a test run passes or fails and b) identify the subsystem where the performance problem originated. We conduct two case studies on a large commercial telecommunication software system and an open-source software system to evaluate our approach. Our results warrant further development of control-chart-based techniques in performance verification.",2011,0, 5914,A New Approach to Evaluate Performance of Component-Based Software Architecture,"Nowadays, with technological developments, software systems are growing in scale and complexity. In large systems, and to overcome complexity, software architecture has been considered a notion connected with product quality, and it plays a crucial role in the quality of the final system. The aim of software architecture analysis is to recognize potential risks and investigate the qualitative needs of software design before production and implementation. Achieving this goal reduces costs and improves software quality.
In this paper, a new approach is presented to evaluate the performance of component-based software architecture for software systems with a distributed architecture. In this approach, the system is first modeled as a Discrete Time Markov Chain, and then the required parameters are taken from it to produce a Product Form Queueing Network. Resource limitations, such as restrictions on the number of threads in a particular machine, are also considered in the model. The prepared model is solved with the SHARPE software package. By solving the produced model in this approach, the throughput, the average response time and the bottlenecks under different system workloads are predicted, and some suggestions are presented to improve the system performance.",2011,0, 5915,On the Effect of the Order of Test Cases in the Modified Exponential Software Reliability Growth Model,"In this paper, we propose a resampling method which is specifically designed for software testing and reduces the bias of the estimator caused by the order of test cases. In particular, this paper considers the resampling in the modified exponential software reliability growth model that is modeled by the difficulty of fault detection. In numerical experiments, we investigate the effectiveness of the resampling method.",2011,0, 5916,Automated wafer defect map generation for process yield improvement,"Spatial Signature Analysis (SSA) is used to detect a recurring failure signature in today's wafer fabrication. In order for SSA to be effective, it must correlate the signature to a wafer defect map library. However, classifying the signatures for the library is time-consuming and tedious. The Manual Visual Inspection (MVI) of several failure bins in a wafer map for multiple lots can lead to fatigue for the operator and result in an inaccurate representation of the failure signature. Hence, an automated wafer map extraction process is proposed here to replace the MVI while ensuring the accuracy of the failure signature library. A clustering tool, namely Density-Based Spatial Clustering of Applications with Noise (DBSCAN), is utilized to extract the wafer spatial signature while ignoring the outliers. The appropriate size for the clustered signature is investigated and its performance is compared to the MVI signature. The analysis shows that for 3 selected failure modes, a 20% occurrence rate clustered pattern provides similar performance to a 50% MVI signature. The proposed technique leads to a significant reduction in the time required for extracting current and new signatures, allowing faster yield response and improvement.",2011,0, 5917,Simulink-based hardware/software trade-off analysis technique,"Fast analysis of hardware/software trade-offs for cost-, performance- and power-constrained embedded systems is key to reducing the time to market and at the same time improving the quality of results. However, this analysis must also be close to the final results of the detailed HW and SW implementation in order to lead to an optimal solution. This requires the use of compilation (for SW) and synthesis (for HW) techniques that ensure the existence of a solution with the estimated cost, and are not too far from what will later be achieved by manual optimization and detailed design.
We start from a realistic application domain, namely sound-triggered wireless security cameras, and we show in detail how one can start from an algorithm modeled and validated using Simulink and, using commercial state-of-the-art tools, explore various possible hardware and software implementations for the frequency-based audio detection front end, with respect to the overall design constraints and goals. We show how estimations of the various aspects of the cost function can be obtained quickly, directly using the C code generated from Simulink, with a few manual refinements in order to increase the efficiency of both software and hardware implementations and bring them closer to the final optimized implementation. We report results showing different points in the design space. The results that we obtained are close to hand-optimized implementations for both HW and SW, showing that the approach is useful for trade-off analysis in a very short time, and that further manual optimizations can quickly lead to the best implementation.",2011,0, 5918,"High level synthesis of stereo matching: Productivity, performance, and software constraints","FPGAs are an attractive platform for applications with high computation demand and low energy consumption requirements. However, design effort for FPGA implementations remains high - often an order of magnitude larger than design effort using high-level languages. Instead of this time-consuming process, high level synthesis (HLS) tools generate hardware implementations from high-level languages (HLL) such as C/C++/SystemC. Such tools reduce design effort: high-level descriptions are more compact and less error-prone. HLS tools promise hardware development abstracted from software designer knowledge of the implementation platform. In this paper, we examine several implementations of stereo matching, an active area of computer vision research that uses techniques also common for image de-noising, image retrieval, feature matching and face recognition. We present an unbiased evaluation of the suitability of using HLS for typical stereo matching software, the usability and productivity of AutoPilot (a state-of-the-art HLS tool), and the performance of designs produced by AutoPilot. Based on our study, we provide guidelines for software design, limitations of mapping general purpose software to hardware using HLS, and future directions for HLS tool development. For the stereo matching algorithms, we demonstrate between 3.5X and 67.9X speedup over software (but less than achievable by manual RTL design) with a five-fold reduction in design effort vs. manual hardware design.",2011,0, 5919,A perfSONAR-based Integrated Multi-domain Network Measurement Platform -- Internet Monitoring as a Service,"The scale, diversity, and decentralized administration of the Internet mean that continuously acquiring the global status of the network and promptly identifying the causes of communication performance degradation is reasonably difficult. However, emerging advanced network applications, which are often sensitive to communication quality and bandwidth consumption, as well as increasing security threats, strongly require a higher quality of network measurement and analysis in terms of granularity (spatial and temporal), timeliness, continuity, coverage, and reliability. Integrating multiple-location, diverse-type, and long-term measurements has been considered a key means of coping with such difficulties.
In addition, the measured data and analyzed results should be flexibly shared and reused for efficiency. Therefore, a multi-domain network measurement platform should be realized, as it can provide integrated network monitoring and analysis functionality over the Internet on demand and can adapt to the purposes of individual users (applications) and operators. In this paper, we thus briefly introduce the design principles and software implementation of a perfSONAR-based integrated network measurement system aiming at ""Internet Monitoring as a Service"", together with a preliminary experiment using a prototype system on the Internet. With the help of a newly designed and developed function, our system can easily utilize new measurement tools and flexibly integrate and provide the measurement results of those tools to users.",2011,0, 5920,Characterizing Time-Varying Behavior and Predictability of Cache AVF,"With the development of information systems, electronic devices are becoming more and more susceptible to soft errors, especially in tough environments with drastic electromagnetic interference. The Architectural Vulnerability Factor (AVF), which is defined as the fraction of soft errors that result in erroneous outputs, has been introduced to quantify the vulnerability of structures to soft errors. Recent studies have shown that the AVF of several micro-architectures (e.g. the issue queue) exhibits significant runtime variations and certain predictability for SPEC2K benchmarks, thus motivating the development of AVF-aware fault-tolerant techniques. Through accurate AVF prediction, these techniques provide error protection only at the execution points of high AVF rather than over the whole execution lifetime of programs, thereby reducing the overheads of soft error protection schemes without sacrificing much reliability. The motivation of this paper is to see whether cache AVF also exhibits such predictability, for the further exploration of AVF-aware cache protection techniques. In this paper, we characterize the dynamic vulnerability behavior of the level-1 data (L1D) cache and analyze the correlation between L1D AVF and performance metrics. Based on the analysis of the variance and predictability of L1D AVF, we propose a novel hierarchical classification of SPEC2K benchmarks to provide insights into the varying vulnerability behavior and predictability of cache AVF. We develop a new methodology to select the best-suited predictive method for cache AVF. Our results show that an accurate cache AVF predictor is available for SPEC2K benchmarks, and most programs are excellent candidates for AVF-aware fault-tolerant schemes.",2011,0, 5921,The Early Identification of Detector Locations in Dependable Software,"The dependability properties of a software system are usually assessed and refined towards the end of the software development lifecycle. Problems pertaining to software dependability may necessitate costly system redesign. Hence, early insights into the potential for error propagation within a software system would be beneficial. Further, the refinement of the dependability properties of software involves the design and location of dependability components called detectors and correctors. Recently, a metric, called spatial impact, has been proposed to capture the extent of error propagation in a software system, providing insights into the location of detectors and correctors. However, the metric only provides insights towards the end of the software development life cycle.
In this paper, our objective is to investigate whether spatial impact can enable the early identification of locations for detectors. To achieve this, we first hypothesise that spatial impact is correlated with module coupling, a metric that can be evaluated early in the software development life cycle, and show this relationship to hold. We then evaluate module coupling for the modules of a complex software system, identifying modules with high coupling values as potential locations for detectors. We then enhance these modules with detectors and perform fault-injection analysis to determine the suitability of these locations. The results presented demonstrate that our approach can permit the early identification of possible detector locations.",2011,0, 5922,Statistical Evaluation of Complex Input-Output Transformations,"This paper presents a new, statistical approach to evaluating software products that transform complex inputs into complex outputs. This approach, called multistage stratified input/output (MSIO) sampling, combines automatic clustering of multidimensional I/O data with multistage sampling and manual examination of data elements, in order to accurately and economically estimate summary measures of output data quality. We report results of two case studies in which MSIO sampling was successfully applied to evaluating complex graphical outputs.",2011,0, 5923,Uncertainty Propagation through Software Dependability Models,"Stochastic models are often employed to study the dependability of critical systems and assess various hardware and software fault-tolerance techniques. These models take into account the randomness in the events of interest (aleatory uncertainty) and are generally solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). This paper discusses methods for computing the uncertainty in output metrics of dependability models, due to epistemic uncertainties in the model input parameters. Methods for epistemic uncertainty propagation through dependability models of varying complexity are presented with illustrative examples. The distribution, variance and expectation of the model output, due to epistemic uncertainty in the model input parameters, are derived and analyzed to understand their limiting behavior.",2011,0, 5924,An Empirical Study of JUnit Test-Suite Reduction,"As test suites grow larger during software evolution, regression testing becomes expensive. To reduce the cost of regression testing, test-suite reduction aims to select a minimal subset of the original test suite that can still satisfy all the test requirements. While traditional test-suite reduction techniques were intensively studied on C programs with specially generated test suites, there are limited studies of test-suite reduction on programs with real-world test suites. In this paper, we investigate test-suite reduction techniques on Java programs with real-world JUnit test suites. We implemented four representative test-suite reduction techniques for JUnit test suites. We performed an empirical study on 19 versions of four real-world Java programs, ranging from 1.89 KLoC to 80.44 KLoC. Our study investigates both the benefits and the costs of test-suite reduction. The results show that the four traditional test-suite reduction techniques can effectively reduce these JUnit test suites without substantially reducing their fault-detection capability.
Based on the results, we provide a guideline for achieving cost-effective JUnit test-suite reduction.",2011,0, 5925,WSPred: A Time-Aware Personalized QoS Prediction Framework for Web Services,"The exponential growth of Web services makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. Since the QoS performance of Web services is highly related to the service status and network environments, which vary over time, service invocations are required at different instants during a long time interval to make an accurate Web service QoS evaluation. However, invoking a huge number of Web services from the user side for quality evaluation purposes is time-consuming, resource-consuming, and sometimes even impractical (e.g., when service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide a time-aware personalized QoS value prediction service for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience of different service users, WSPred builds feature models and employs these models to make personalized QoS predictions for different users. The extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible.",2011,0, 5926,Parametric Bootstrapping for Assessing Software Reliability Measures,"Bootstrapping is a statistical technique that replicates the underlying data based on resampling and enables us to investigate statistical properties. It is useful for estimating standard errors and confidence intervals for complex estimators of complex parameters of the probability distribution from a small amount of data. In software reliability engineering, it is common to estimate software reliability measures from the fault data (fault-detection time data) and to focus only on the point estimation. However, it is difficult in general to carry out the interval estimation or to obtain the probability distributions of the associated estimators without applying any approximate method. In this paper, we assume that the software fault-detection process in the system testing is described by a non-homogeneous Poisson process, and develop a comprehensive technique to study the probability distributions of significant software reliability measures. Based on maximum likelihood estimation, we assess the probability distributions of estimators such as the initial number of software faults remaining in the software, the software intensity function, the mean value function and the software reliability function, via the parametric bootstrapping method.",2011,0, 5927,Estimating Software Intensity Function via Multiscale Analysis and Its Application to Reliability Assessment,"Since the software fault detection process is well modeled by a non-homogeneous Poisson process, it is of great interest to accurately estimate the intensity function from observed software-fault data. In existing work, the same authors introduced wavelet-based techniques for this problem and found that the Haar wavelet transform provided very powerful performance in estimating the software intensity function.
In this paper, the Haar-wavelet-transform-based approach is further investigated from the point of view of multiscale analysis. More specifically, a Bayesian multiscale intensity estimation algorithm is employed. In a numerical study with real software-fault count data, we compare the Bayesian multiscale intensity estimation with the existing non-Bayesian wavelet-based estimation as well as the conventional maximum likelihood estimation method and the least squares estimation method.",2011,0, 5928,Using Dependability Benchmarks to Support ISO/IEC SQuaRE,"The integration of Commercial-Off-The-Shelf (COTS) components in software has reduced time-to-market and production costs, but selecting the most suitable component, among those available, still remains a challenging task. This selection process, typically named benchmarking, requires evaluating the behaviour of eligible components in operation, and ranking them according to quality characteristics. Most existing benchmarks only provide measures characterising the behaviour of software systems in the absence of faults, ignoring the strong impact that both accidental and malicious faults have on software quality. However, since using COTS to build a system may motivate the emergence of dependability issues due to the interaction between components, benchmarking the system in the presence of faults is essential. The recent ISO/IEC 25045 standard copes with this lack by considering accidental faults when assessing the recoverability capabilities of software systems. This paper proposes a dependability benchmarking approach to determine the impact that faults (noted as disturbances in the standard), either accidental or malicious, may have on the quality features exhibited by software components. As will be shown, the usefulness of the approach embraces all evaluator profiles (developers, acquirers and third-party evaluators) identified in the ISO/IEC 25000 ""SQuaRE"" standard. The feasibility of the proposal is finally illustrated through the benchmarking of three distinct software components, which implement the OLSR protocol specification, competing for integration in a wireless mesh network.",2011,0, 5929,Autonomic Resource Management Handling Delayed Configuration Effects,"Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, big virtualized data centers become stochastic environments with performance fluctuations. The growing number of cloud services makes manual steering impossible; an automated mechanism on the provider side is needed. In this paper, we present a software solution located in the Software as a Service layer with autonomous agents that handle user requests. The agents allocate resources and configure applications to compensate for performance fluctuations. They use a combination of Support Vector Machines and Model-Predictive Control to predict and plan future configurations. This allows them to handle configuration delays when requesting new virtual machines and to guarantee time-dependent service level objectives (SLOs). We evaluated our approach on a real cloud system with high-performance software and a three-tier e-commerce application.
The experiments show that the agents accurately configure the application and plan horizontal scalings to enforce SLO fulfillments even in the presence of noise.",2011,0, 5930,VM Leakage and Orphan Control in Open-Source Clouds,"Computer systems often exhibit degraded performance due to resource leakage caused by erroneous programming or malicious attacks, and computers can even crash in extreme cases of resource exhaustion. The advent of cloud computing provides increased opportunities to amplify such vulnerabilities, thus affecting a significant number of computer users. Using simulation, we demonstrate that cloud computing systems based on open-source code could be subjected to a simple malicious attack capable of degrading availability of virtual machines (VMs). We describe how the attack leads to VM leakage, causing orphaned VMs to accumulate over time, reducing the pool of resources available to users. We identify a set of orphan control processes needed in multiple cloud components, and we illustrate how such processes detect and eliminate orphaned VMs. We show that adding orphan control allows an open-source cloud to sustain a higher level of VM availability during malicious attacks. We also report on the overhead of implementing orphan control.",2011,0, 5931,Coverability of wireless sensor networks,"The coverability of Wireless Sensor Networks (WSNs) is essentially a Quality of Service (QoS) problem that measures how well the monitored area is covered by one or more sensor nodes. The coverability of WSNs was examined by combining existing computational geometry techniques such as the Voronoi diagram and Delaunay triangulation with graph theoretical algorithmic techniques. Three new evaluation algorithms, known as CRM (Comprehensive Risk Minimization), TWS (Threshold Weight Shortest path), and CSM (Comprehensive Support Maximization), were introduced to better measure the coverability. The experimental results show that the CRM and CSM algorithms perform better than the MAM (MAximize Minimum weight) and MIM (MInimize Maximum weight) algorithms, respectively. In addition, the TWS algorithm can provide a lower bound detection possibility that accurately reflects the coverability of the wireless sensor nodes. Both theoretical and experimental analyses show that the proposed CRM, TWS, and CSM algorithms have O(n²) complexity.",2011,0, 5932,Valuing quality of experience: A brave new era of user satisfaction and revenue possibilities,"The telecommunication market today is defined by a plethora of innovative products and technologies that constantly raise the bar of technical feasibility in both hardware and software. Meanwhile users constantly demand better quality and improved attributes for all applications, becoming less and less tolerant of errors or inconsistencies. Evaluation methods that were dominant for several years in the field seem to have limited effect on assessing end-user satisfaction, leading to unhappy customers and lower revenue for key market players. Ensuring Quality of Service (QoS) proved no longer capable of increasing market share; therefore a novel evaluation method is necessary. The aim of the present paper is to present a new framework of user-oriented quality assessment that tries to measure the overall experience derived from a telecommunication product.
Provided that modern services are based on the principle of sharing an overall experience with others, it seems certain that the new method of estimating Quality of Experience (QoE) will produce much better results, needed by both providers and customers.",2011,0, 5933,A method for copper lines classification,"Recently many end users pay attention to the quality of Internet access; in Italy, a nationwide measurement campaign, sponsored by the Italian Communication Regulatory Authority, allows users to evaluate their bandwidth using licit software. In this context a primary end user need is to have the possibility to measure its bandwidth and to compare it with the parameters declared by ISPs. Assuming the availability of a standard recognized methodology to measure bandwidth on the user access link, a problem with this approach arises when the measured performances are lower than the declared quality of service. When this happens, the problem could depend on several factors not directly attributable to the ISP. In this work, we propose a solution by which it is possible to characterize a physical line. The idea is to detect the situations in which the performances are degraded due to an unsatisfactory physical line state. To make this detection some real cases are considered.",2011,0, 5934,Performance Analysis of Cloud Centers under Burst Arrivals and Total Rejection Policy,"Quality of service, QoS, has a great impact on wider adoption of cloud computing. Maintaining the QoS at an acceptable level for cloud users requires an accurate and well adapted performance analysis approach. In this paper, we describe a new approximate analytical model for performance evaluation of cloud server farms under burst arrivals and solve it to obtain important performance indicators such as mean request response time, blocking probability, probability of immediate service and probability distribution of number of tasks in the system. This model allows cloud operators to tune the parameters such as the number of servers and/or burst size, on one side, and the values of blocking probability and probability that a task request will obtain immediate service, on the other.",2011,0, 5935,Intelligent migration from smart metering to smart grid,"With the growing demand for more energy from the subscribers, and given the issues associated with ecological systems, it is inevitable and indispensable to move toward an efficient, economical, green, clean and self-correcting power system. Implementing this system is a challenge for the managers to tackle. Smart metering systems have been used by utility companies for a long time and implemented in quite a few of them. The necessity of developing from smart metering to smart grid and utilizing it for consumption management, black-out management, demand response, distributed energy resources, Grid optimization, Load management, technical loss, theft detection, quality of service, utilizing the maximum capacity of power line, balancing the distribution network, freeing the distribution network and a great number of other advantages is inevitable. In this paper we have studied how to move toward smartening up the grid.",2011,0, 5936,Electrically detected magnetic resonance study of a near interface trap in 4H SiC MOSFETs,"It is well known that 4H silicon carbide (SiC) based metal oxide silicon field effect transistors (MOSFETs) have great promise in high power and high temperature applications.
The reliability and performance of these MOSFETs are currently limited by the presence of SiC/SiO2 interface and near interface traps which are poorly understood. Conventional electron paramagnetic resonance (EPR) studies of silicon samples have been utilized to argue for carbon dangling bond interface traps [1]. For several years, with several coworkers, we have explored these silicon carbide based MOSFETs with electrically detected magnetic resonance (EDMR), [2,3] establishing a connection between an isotropic EDMR spectrum with g=2.003 and deep level defects in the interface/near interface region of SiC MOSFETs. We tentatively linked the spectrum to a silicon vacancy or closely related defect. This assessment was tentative because we were not previously able to quantitatively evaluate the electron nuclear hyperfine interactions at the site. Through multiple improvements in EDMR hardware and data acquisition software, we have achieved a very large improvement in sensitivity and resolution in EDMR, which allows us to detect side peak features in the EDMR spectra caused by electron nuclear hyperfine interactions. This improved resolution allows far more definitive conclusions to be drawn about defect structure. In this work, we provide extremely strong experimental evidence identifying the structure of that defect. The evidence comes from very high resolution and sensitivity ""fast passage"" (FP) mode [4, 5] electrically detected magnetic resonance (EDMR) or FPEDMR of the ubiquitous EDMR spectrum.",2011,0, 5937,PV system monitoring and performance of a grid connected PV power station located in Manchester-UK,"In the last two decades renewable resources have gained more attention due to continuing energy demand, along with the depletion in fossil fuel resources and their environmental effects on the planet. This paper presents a novel approach in monitoring PV power stations. The monitoring system enables early detection of system degradation by calculating the residual difference between the model predicted and the actual measured power parameters. The model is derived using the MATLAB/SIMULINK software package and is designed with a dialog box to enable the user input of the PV system parameters. The performance of the developed monitoring system was examined and validated under different operating conditions and faults, e.g. dust, shadow and snow. Results were simulated and analyzed using the environmental parameters of irradiance and temperature. The irradiance and temperature data is gathered from a 28.8kW grid connected solar power system located on the tower block within the MMU campus in central Manchester. These real-time parameters are used as inputs of the developed PV model. Repeatability and reliability of the developed model performance were validated over a one and a half year period.",2011,0, 5938,Parallelization of an ultrasound reconstruction algorithm for non destructive testing on multicore CPU and GPU,"The CIVA software platform developed by CEA-LIST offers various simulation and data processing modules dedicated to non-destructive testing (NDT). In particular, ultrasonic imaging and reconstruction tools are proposed, for the purpose of localizing echoes and identifying and sizing the detected defects. Because of the complexity of data processed, computation time is now a limitation for the optimal use of available information.
In this article, we present performance results on parallelization of one computationally heavy algorithm on general purpose processors (GPP) and graphic processing units (GPU). The GPU implementation makes intensive use of atomic intrinsics. Compared to the initial GPP implementation, the optimized GPP implementation runs up to ×116 faster and the GPU implementation up to ×631. This shows that, even with irregular workloads, combining software optimization and hardware improvements, GPUs give high performance.",2011,0, 5939,Efficient Gender Classification Using Interlaced Derivative Pattern and Principal Component Analysis,"With the wealth of image data that is now becoming increasingly accessible through the advent of the world wide web and proliferation of cheap, high quality digital cameras it is becoming ever more desirable to be able to automatically classify gender into the appropriate category such that intelligent agents and other such intelligent software might make better informed decisions regarding them without a need for excessive human intervention. In this paper, we present a new technique which provides performance superior to existing gender classification techniques. We first detect the face portion using the Viola-Jones face detector and then Interlaced Derivative Pattern (IDP) extracts discriminative facial features for gender which are passed through Principal Component Analysis (PCA) to eliminate redundant features and thus reduce dimension. Keeping in mind the strengths of different classifiers, three classifiers K-nearest neighbor, Support Vector Machine and Fisher Discriminant Analysis are combined, which minimizes the classification error rate. We have used the Stanford University Medical students (SUMS) face database for our experiment. Comparing our results and performance with existing techniques, our proposed method provides a high accuracy rate and robustness to illumination change.",2011,0, 5940,Stream prediction using representative episode rules,"Stream prediction based on episode rules of the form ""whenever a series of antecedent event types occurs, another series of consequent event types appears eventually"" has received intensive attention due to its broad applications such as reading sequence forecasting, stock trend analyzing, road traffic monitoring, and software fault preventing. Many previous works focus on the task of discovering a full set of episode rules or matching a single predefined episode rule, while little emphasis has been attached to the systematic methodology of stream prediction. This paper fills the gap by constructing an efficient and effective episode predictor over an event stream which works on a three-step process of rule extracting, rule matching and result reporting. Aiming at this goal, we first propose an algorithm Extractor to extract all representative episode rules based on frequent closed episodes and their generators, then we introduce an approach Matcher to simultaneously match multiple episode rules by finding the latest minimal and non-overlapping occurrences of their antecedents, and finally we devise a strategy Reporter to report each prediction result containing a prediction interval and a series of event types. Experiments on both synthetic and real-world datasets demonstrate that our methods are efficient and effective in the stream environment.",2011,0, 5941,Risk management assessment using SERIM method,"Software development is a complex process that involves many activities and has a high uncertainty of success.
It is also the type of activity that can be costly if mismanaged. Many factors can lead to success and also can cause software project failure. The failure actually can be detected early if we adopt the concept of risk management and implement it in the software development project. SERIM is a method to measure risk in software engineering, proposed by Karolak [4]. SERIM is based on the mathematics of probability. SERIM uses some parameters which are derived from risk factors. The factors are: Organization, Estimation, Monitoring, Development Methodology, Tools, Risk Culture, Usability, Correctness, Reliability and Personnel. Each factor is then measured by question metrics, and there are 81 software metric questions for all factors. The factors are then related and mapped into SDLC phases and risk management activities to calculate the probability (P). SERIM uses 28 probability variables to assess the risk potentials. The SERIM method is then implemented to assess the risk of an information system development, TrainSys, which is developed for a training and education unit in an organization. The result is useful for determining the low probability of the TrainSys project success factor. The results also show the dominant and highest factors that need to be addressed in order to improve the quality of the process and product of software development.",2011,0, 5942,Adaptive resource management in PaaS platform using Feedback Control LRU algorithm,"Cloud computing gets more and more popular because of its ability to offer flexible dynamic IT infrastructure, QoS (Quality of Service) guaranteed computing environments and configurable software services. Cloud computing supports three service models: SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). PaaS provides users with an application development and hosting platform with great reliability, scalability and convenience and it has many advantages in helping customers create applications compared with other service models. However, as a typical distributed system with limited computing resources, a PaaS platform has to address the problems of resource management in order to achieve satisfactory QoS as well as efficient resource utilization. This paper presents an adaptive resource management algorithm called Feedback Control LRU (FC-LRU) which integrates the feedback control technique with the LRU (Least Recently Used) algorithm. Simulation is conducted to evaluate the performance of FC-LRU and the results demonstrate that FC-LRU achieves satisfactory performance: it enables the PaaS platform to maintain a low missed deadline ratio and high CPU utilization under the conditions of multitasks with different workloads.",2011,0, 5943,A data placement algorithm with binary weighted tree on PC cluster-based cloud storage system,"The need for and use of scalable storage in the cloud has rapidly increased in the last few years. Organizations need a large amount of storage for their operational data and backups. To address this need, high performance storage servers for cloud computing are the ultimate solution, but they are very expensive. Therefore we propose an efficient cloud storage system using inexpensive commodity computer nodes. These computer nodes are organized into a PC cluster as a datacenter. Data objects are distributed and replicated in a cluster of commodity nodes located in the cloud. In the proposed cloud storage system, a data placement algorithm which provides highly available and reliable storage is proposed.
The proposed algorithm applies a binary tree to search for storage nodes. It supports the weighted allocation of data objects, balancing the load on the PC cluster with minimum cost. The proposed system is implemented with HDFS and experimental results prove that the proposed algorithm can balance storage load depending on the disk space, expected availability and failure probability of each node in the PC cluster.",2011,0, 5944,A quality of service framework for dependability in large-scale distributed systems,"As recognition grows within industry for the advantages that can be gained through the exploitation of large-scale dynamic systems, a need emerges for dependable performance. Future systems are being developed with a requirement to support mission critical and safety critical applications. These levels of criticality require predictable performance and as such have traditionally not been associated with adaptive systems. The software architecture proposed for such systems takes its properties from the service-oriented computing paradigm and the communication model follows a publish/subscribe approach. While adaptive, such architectures do not, however, typically support real-time levels of performance. There is scope, however, for dependability within such architectures through the use of Quality of Service (QoS) methods. QoS is used in systems where the distribution of resources cannot be decided at design time. In this paper a QoS based framework is proposed for providing adaptive and dependable behaviour for future large-scale dynamic systems through the flexible allocation of resources. Simulation results are presented to demonstrate the benefits of the QoS framework and the tradeoffs that occur between negotiation algorithms of varying complexities.",2011,0, 5945,A Software-Based Self-Test methodology for on-line testing of processor caches,"Nowadays, on-line testing is essential for modern high-density microprocessors to detect either latent hardware defects or new defects appearing during lifetime both in logic and memory modules. For cache arrays, the flexibility to apply different March tests online is a critical requirement. For small memory arrays that may lack programmable Memory Built-In Self-Test (MBIST) circuitry, such as L1 cache arrays, Software-Based Self-Test (SBST) can be a flexible and low-cost solution for on-line March test application. In this paper, an SBST program development methodology is proposed for online periodic testing of L1 data and instruction cache, both for tag and data arrays. The proposed SBST methodology utilizes existing special purpose instructions that modern Instruction Set Architectures (ISAs) implement to access caches for debug-diagnostic and performance purposes, termed hereafter Direct Cache Access (DCA) instructions, as well as performance monitoring mechanisms to overcome testability challenges. The methodology has been applied to 2 processor benchmarks, OpenRISC and LEON3, to demonstrate its high adaptability, and experimental comparison results against previous contributions show that the utilization of DCA instructions significantly improves test code size (83%) and test duration (72%) when applied to the same benchmark (LEON3).",2011,0, 5946,Performance assessment of ASD team using FPL football rules as reference,"Agile software development (ASD) teams are committed to frequent, regular, high-quality deliverables. An agile team is required to produce high-quality code in a short time span.
Agile suggests methodologies like extreme programming and scrum to resolve the issues faced by the developers. Extreme programming is a methodology of ASD which suggests pair programming. But for a number of reasons, pairing is the most controversial and least universally-embraced agile programmer practice [1]. The reason for this is that certain tasks require a lot of deep thinking and so pairing (lack of privacy) does not work here. Certain personalities too do not work well with pairing. In scrum, the daily standup-meeting is the method used to resolve impediments. Those impediments that are not resolved are added to the product backlog. This adds to cost. There can be online mentors (e-Mentors) to help programmers resolve their domain issues. The selection of such mentors depends on their skill set and availability [3]. In order to sustain e-Mentoring, the experts who act as mentors in the respective domain (application / technology / tools) have to be rewarded for their assists. The mentor could be within the development team or can be part of any other project team. By seeing the similarities between the sports team and the Agile team, a way of recognizing and rewarding these assists is suggested in this paper.",2011,0, 5947,Computerized instrumentation - Automatic measurement of contact resistance of metal to carbon relays used in railway signaling,"The Contact Resistance of metal to carbon relays used in railway signaling systems is a vital quality parameter. The manual measurement process is tedious, error prone and involves a lot of time, effort and manpower. Besides, it is susceptible to manipulation and may adversely affect the functional reliability of relays due to erroneous measurements. To enhance the trustworthiness of measurement of contact resistance & to make the process faster, an automated measurement system having specially designed application software and a testing jig attachment has been developed. When the relay is fixed on the testing jig, the software scans all the relay contacts and measures the CR. The results are displayed on the computer screen and stored in a database file.",2011,0, 5948,Instantaneous within cycle model based fault estimators for SI engines,"The Mean Value Engine Model, commonly used in model based fault estimation of automotive spark ignition gasoline engines, describes only the averaged dynamics of the engines, resulting in reduced sensitivity and isolability of faults. In this paper, estimation of faults is done using an instantaneous physics based within cycle model of the gasoline engine. A novel method for estimating certain types of faults is proposed and validated against the standard industrial simulation software, AMESim. The results of fault estimation show the efficacy of the method and also signify the importance of their instantaneously pulsating nature for characterizing the true nature of the fault.",2011,0, 5949,Non-iterative algorithm of analytical synchronization of two-end measurements for transmission line parameters estimation and fault location,"This paper presents a new setting-free approach to synchronization of two-end current and voltage unsynchronized measurements. The authors propose a new non-iterative algorithm processing three-phase currents and voltages measured under normal steady state load condition of overhead line.
Using such synchronization, line parameters can be estimated and then precise fault location can be performed. The presented algorithm has been tested with ATP-EMTP software by generating a great number of simulations with various sets of line parameters, proving its usefulness for overhead lines up to 200 km in length.",2011,0, 5950,Pair analysis of requirements in software engineering education,"Requirements Analysis and Design is found to be one of the crucial subjects in Software Engineering education. Students need to have a deeper understanding before they could start to analyse and design the requirements, either using models or textual descriptions. However, the outcomes of their analysis are always vague and error-prone. We assume that this issue can be handled if ""pair analysis"" is conducted where all students are assigned with partners following the concept of pair-programming. To prove this, we have conducted a small preliminary evaluation to compare the outcomes of solo work and ""pair analysis"" work for three different groups of students. The performance, efficacy and students' satisfaction and confidence level are evaluated.",2011,0, 5951,Adopting Six Sigma approach in predicting functional defects for system testing,"This research focuses on constructing a mathematical model to predict functional defects in system testing by applying the Six Sigma approach. The motivation behind this effort is to achieve zero known post release defects of the software delivered to the end-user. Besides serving as the indicator of optimizing the testing process, predicting functional defects at the start of testing allows the testing team to provide comprehensive test coverage, find as many defects as possible and determine when to stop testing so that all known defects are contained within the testing phase. Design for Six Sigma (DfSS) is chosen as the methodology as it emphasizes customers' requirements and systematic techniques to build the model. Historical data becomes the crucial element in this study. Metrics related to potential predictors and their relationships for the model are identified, which focuses on metrics in phases prior to the testing phase. Repeatability and capability of testers' consistency in finding defects are analyzed. The types of data required are also identified and collected. The metrics of selected predictors which incorporate testing and development metrics are measured against total functional defects using multiple regression analysis. The best and most significant mathematical model generated by the regression analysis is selected as the proposed prediction model for functional defects in the system testing phase. Validation of the model is then conducted to prove the goodness for implementation. Recommendation and future research work are provided at the end of this study.",2011,0, 5952,Efficient prediction of software fault proneness modules using support vector machines and probabilistic neural networks,"A software fault is a defect that causes software failure in an executable product. Fault prediction models usually aim to predict either the probability or the density of faults that the code units contain. Many fault prediction models using software metrics have been proposed in the Software Engineering literature. This study focuses on evaluating high-performance fault predictors based on support vector machines (SVMs) and probabilistic neural networks (PNNs). Five public NASA datasets from the PROMISE repository are used to make these predictive models repeatable, refutable, and verifiable.
According to the obtained results, the probabilistic neural networks generally provide the best prediction performance for most of the datasets in terms of the accuracy rate.",2011,0, 5953,H.264 deblocking filter enhancement,"This paper proposes new software-based techniques for speeding up and reducing the complexity of the deblocking filter used in the state-of-the-art H.264 international video coding standard to improve the visual quality of the decoded video frames. The proposed techniques are classified as standard-compliant and standard-noncompliant techniques. The standard-compliant techniques optimize the standard filter through optimizing the boundary strength calculation and group filtering of macroblocks. The standard-noncompliant techniques predict the new boundary strength and edge detection conditions from previous values. Experimental results on both an embedded platform and a desktop PC show a significant performance improvement that reaches 47% for the standard-compliant techniques and 80% for the standard-noncompliant techniques. They also demonstrate that for standard-noncompliant techniques the quality degradation computed using the Peak Signal to Noise Ratio is insignificant.",2011,0, 5954,Geometric mean based trust management system for WSNs (GMTMS),"The Wireless Sensor Network (WSN) nodes are large in number, and their deployment environment may be hazardous, unattended and/or hostile and sometimes dangerous. The traditional cryptographic and security mechanisms in WSNs cannot detect node physical capture, and due to malicious or selfish nodes even a total breakdown of the network may take place. Also, the traditional security mechanisms in WSNs require sophisticated software, hardware, large memory, high processing speed and communication bandwidth at the node. Hence, they are not sufficient for secure routing of messages from source to destination in WSNs. Alternatively, trust management schemes constitute a powerful tool for the detection of unexpected node behaviours (either faulty or malicious). In this paper, we propose a new geometric mean based trust management system by evaluating direct trust from the QoS characteristics (trust metrics) and indirect trust from recommendations by neighbour nodes, which allows only trusted nodes to participate in routing.",2011,0, 5955,Understanding Bohr-Mandel Bugs through ODC Triggers and a Case Study with Empirical Estimations of Their Field Proportion,"This paper uses ODC Triggers as a means to estimate the Bohr-Mandel bug proportions from a software product in production. Specifically, the contributions are: A conceptual articulation of how ODC Triggers can differentiate between Bohr and Mandel bugs. A grouping of triggers to estimate the proportion of Bohr-Mandel bugs in the field. A case study that estimates Mandelbug proportions to range ~20%-40%, which is comparable to the JPL-NASA empirical study. A measure of the distribution of Mandelbugs across components, impact groups and time. It creates a discussion, and raises questions in greater depth on the Bohr-Mandel definitions, implications, features and manifestations.",2011,0, 5956,A smarter supply chain - end to end Quality Management,"For companies in a ""Smarter Planet"" it is key that they are globally integrated, interconnected and intelligent. For globally integrated supply chains the large enterprises must rely heavily on their suppliers and their suppliers' suppliers for products, services and software.
This increased reliance and dependence requires that an extended supply chain have an effective supplier quality management (SQM) system that provides visibility into this complex system. Essential in this SQM system is the ability to provide an end to end (E2E) process that begins with the concept of a new product and manages it through its extended lifecycle. These supply chains of large enterprises are faced with many daunting challenges such as consistent quality, compliance performance, part management (change control, parts obsolescence, counterfeit parts, etc.), continuous improvement, inventory management, product security, on time delivery, and environment regulation compliances. An added complexity to the extended supply chain is that over the past several years there have been disruptions in the supply chain as a result of raw materials, demand changes, commodity volatility and fuel price increases. Per Manufacturing Insights ""Worldwide Supply Chain 2010 Top 10 Predictions"", the #5 Prediction: ""Intelligent Supply Chains will put Broader Visibility Burden on Supply Chain Organizations, both Owned and Outsourced"" is further driving the need for a strong E2E Quality Management System [1]. Many major enterprises, like IBM, are implementing or considering implementing the use of business intelligence in their E2E supply management system.",2011,0, 5957,Generation of the power quality disturbances in LabVIEW software environment,"A software controlled procedure for classification and generation of typical power quality disturbances is presented in this paper. The generation procedure is functionally based on the virtual instrumentation concept, including a software application developed using the graphical programming package LabVIEW and the D/A data acquisition card NI PCI 6713, installed in a standard PC environment. Besides standard undisturbed three-phase voltage signal waveforms, six different categories of PQ disturbances characteristic of real-time power distribution networks can be simulated on the basis of the developed virtual instruments: voltage swells, sags, interruptions, high-order voltage harmonics, swells with harmonics and sags with harmonics. Each of the simulated PQ disturbances can be predefined and easily changed according to user requirements, using various combinations of the knobs and controls implemented on the virtual instrument front panel. The 8-channel data acquisition card NI 6713 provides real-time generation of the disturbances using analog output channels, which can be applied for testing and verification of instruments developed for measurement and processing of the basic PQ parameters.",2011,0, 5958,A method for elimination of phase jitter in software signal demodulation,"This paper describes a new method for phase jitter elimination. It is based on detection of the signal passing through zero after demodulation, which helps to determine a residual carrier frequency and subsequently eliminate phase jitter. Classical ways of phase jitter elimination require a frequency search in some range, while this new method is much faster. The method is implemented in the software for offline analysis of modulation quality for a 16-QAM signal constellation. It analyzes the signal at the receiving IF of a radio-relay device. The program consists of several separate functional units, which are described in the text.
Measurements were made on real signal under the laboratory conditions.",2011,0, 5959,Signal acquisition and processing in the magnetic defectoscopy of steel wire ropes,"In this paper, the system that resolves the problem of wire rope defects using magnetic method of inspection is presented. Implementation of the system should provide for full monitoring of wire rope condition, according to the prescribed international standards. The purpose of this system, except to identify defects in the rope, is to determine to what extent the damage has been done. The measurement procedure provides for a better understanding of the defects that occur, as well as the rejection criteria of used ropes, that way increasing their security. Hardware and software design of appliance for recording defects and test results are presented in this paper.",2011,0, 5960,Measuring the quality characteristics of assembly code on embedded platforms,The paper describes the implementation of programming tool for measuring quality characteristics of assembly code. The aim of this paper is to prove the usability of these metrics for assessing the quality of assembly code generated by C Compiler for DSP architecture in order to improve the Compiler. The analysis of test results showed that the compiler generates good quality assembly code.,2011,0, 5961,Metrics and Antipatterns for Software Quality Evaluation,"In the context of software evolution, many activities are involved and are very useful, like being able to evaluate the design quality of an evolving system, both to locate the parts that need particular refactoring or reengineering efforts, and to evaluate parts that are well designed. This paper aims to give support hints for the evaluation of the code and design quality of a system and in particular we suggest to use metrics computation and antipatterns detection together. We propose metrics computation based on particular kinds of micro-structures and the detection of structural and object-oriented antipatterns with the aim of identifying areas of design improvements. We can evaluate the quality of a system according to different issues, for example by understanding its global complexity, analyzing the cohesion and coupling of system modules and locating the most critical and complex components that need particular refactoring or maintenance.",2011,0, 5962,Toward Intelligent Software Defect Detection - Learning Software Defects by Example,"Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.",2011,0, 5963,Stability and Classification Performance of Feature Selection Techniques,"Feature selection techniques can be evaluated based on either model performance or the stability (robustness) of the technique. 
The ideal situation is to choose a feature selection technique that is robust to change, while also ensuring that models built with the selected features perform well. One domain where feature selection is especially important is software defect prediction, where large numbers of metrics collected from previous software projects are used to help engineers focus their efforts on the most faulty modules. This study presents a comprehensive empirical examination of seven filter-based feature ranking techniques (rankers) applied to nine real-world software measurement datasets of different sizes. Experimental results demonstrate that signal-to-noise ranker performed moderately in terms of robustness and was the best ranker in terms of model performance. The study also shows that although Relief was the most stable feature selection technique, it performed significantly worse than other rankers in terms of model performance.",2011,0, 5964,Fault Detection through Sequential Filtering of Novelty Patterns,"Multi-threaded applications are commonplace in today's software landscape. Pushing the boundaries of concurrency and parallelism, programmers are maximizing performance demanded by stakeholders. However, multi-threaded programs are challenging to test and debug. Prone to their own set of unique faults, such as race conditions, testers need to turn to automated validation tools for assistance. This paper's main contribution is a new algorithm called multi-stage novelty filtering (MSNF) that can aid in the discovery of software faults. MSNF stresses minimal configuration, no domain specific data preprocessing or software metrics. The MSNF approach is based on a multi-layered support vector machine scheme. After experimentation with the MSNF algorithm, we observed promising results in terms of precision. However, MSNF relies on multiple iterations (i.e., stages). Here, we propose four different strategies for estimating the number of the requested stages.",2011,0, 5965,New integrated hybrid evaporative cooling system for HVAC energy efficiency improvement,"Cooling systems in buildings are required to be more energy-efficient while maintaining the standard air quality. The aim of this paper is to explore the potential of reducing the energy consumption of a central air-conditioned building taking into account comfort conditions. For this, we propose a new hybrid evaporative cooling system for HVAC efficiency improvement. The integrated system will be modeled and analyzed to accomplish the energy conservation and thermal comfort objectives. Comparisons of the proposed hybrid evaporative cooling approach with current technologies are included to show its advantages. To investigate the potential of energy savings and air quality, a real-world commercial building, located in a hot and dry climate region, together with its central cooling plant is used in the case study. The energy consumption and relevant data of the existing central cooling plant are acquired in a typical summer week. The performance with different cooling systems is simulated by using a transient simulation software package. New modules for the proposed system are developed by using collected experimental data and implemented with the transient tool. 
Results show that more than 52% power savings can be obtained by this system while maintaining the predicted mean vote (PMV) between -1 and +1 for most of the summer time.",2011,0, 5966,Autonomic Computing: Applications of Self-Healing Systems,"Self-Management systems are the main objective of Autonomic Computing (AC), and they are needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems, such as system self-awareness, when and where an error state occurs, knowledge for system stabilization, analyzing the problem, and a healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing, which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in a real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, a self-healing system should have the ability to modify its own behavior in response to changes within the environment. A recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.",2011,0, 5967,Designing Phantoms for Industrial Computed Tomography,"The increasing use of computed tomography (CT) as a diagnostic tool creates the need for an efficient means of evaluating the performance of the CT systems now in use. Metal products usually have defects introduced during manufacturing, and these are inevitable. In an NDT system, the ability to evaluate casting defects such as cracks and pores is also very important. In the present study, several phantoms were made for evaluating the CT system and defect analysis tools. The phantoms were CT scanned, and the results of the CT scans were discussed. For accuracy assessment, measurements were made and compared with the real objects. It is expected that the phantoms used in the present study help to validate defect detection software.",2011,0, 5968,Tomographic performance characteristics of the IQ·SPECT system,"The IQ·SPECT system was introduced by Siemens in 2010 to significantly improve the efficiency of myocardial perfusion imaging (MPI) using conventional, large field-of-view (FOV) SPECT and SPECT·CT systems. With IQ·SPECT, it is possible to perform MPI scans in one-fourth the time or using one-fourth the administered dose as compared to a standard protocol using parallel-hole collimators. This improvement is achieved by means of a proprietary multifocal collimator that rotates around the patient in a cardio-centric orbit resulting in a four-fold magnification of the heart while keeping the entire torso in the FOV. The data are reconstructed using an advanced reconstruction algorithm that incorporates measured values for gantry deflections, collimator-hole angles, and system point response function. This article explores the boundary conditions of IQ·SPECT imaging, as measured using the Data Spectrum® cardiac torso phantom with the cardiac insert. Impact on reconstructed image quality was evaluated for variations in positioning of the myocardium relative to the sweet spot, scan-arc limitations, and for low-dose imaging protocols. Reconstructed image quality was assessed visually using the INVIA 4DMSPECT and quantitatively using Siemens internal IQ assessment software.
The results indicated that the IQ·SPECT system is capable of tolerating possible mispositioning of the myocardium relative to the sweet spot by the operator, and that no artifacts are introduced by the limited angle coverage. We also found from the study of multiple low dose protocols that the dwell time will need to be adjusted in order to acquire data with sufficient signal-to-noise ratio for good reconstructed image quality.",2011,0, 5969,Process automation of metal to Carbon relays: On-Line measurement of electrical parameters,"The manufacturing process of Metal to Carbon relays used in railway signaling systems for configuring various circuits of signals / points / track circuits etc. consists of seven phases from raw material to finished goods. To ensure in-process quality, the physical, electrical and various other parameters are measured manually with non-automated equipment, after each stage. Manual measurements are tedious, error prone and involve a lot of time, effort and manpower. Besides, they are susceptible to manipulation and may lead to inferior quality products being passed, either due to deliberation or due to malefic intentions. Due to erroneous measurement of electrical parameters, the functional reliability of relays is adversely affected. To enhance the trustworthiness of measurement of electrical parameters & to make the process faster, an automated measurement system having proprietary application software and a testing jig attachment has been developed. When the relay was fixed on the testing jig, the software scanned all the relay contacts and measured all the electrical parameters viz. operating voltage / current, contact resistance, release voltage / current, coil resistance etc. The result was stored in a database file and ported on an internet website. Thus, the test results of individual relays were available on-line, with date & time tags and could be easily monitored.",2011,0, 5970,Functionality test of a readout circuit for a 1 mm³ resolution clinical PET system,"We are developing a 1 mm³ resolution Positron Emission Tomography (PET) camera dedicated to breast imaging, which collects high energy photons emitted from radioactively labeled agents injected in the patients to detect molecular signatures of breast cancer. The camera consists of 8 × 8 arrays of 1 × 1 × 1 mm³ lutetium yttrium oxyorthosilicate (LYSO) crystals coupled to position sensitive avalanche photo-diodes (PSAPDs). The camera is built out of 2 panels each having 9 cartridges. A cartridge houses 8 layers of 16 dual LYSO-PSAPD modules and features 1024 readout channels. Amplification, shaping and triggering is done using the 36 channel RENA-3™ chip. 32 of these chips are needed per cartridge, or 576 for the entire camera. The RENA-3™ needs functionality and performance validation before assembly in the camera. A Chip Tester board was built to ensure RENA functionality and quality, featuring a 20,000-cycle chip holder and the ability to charge inject each channel individually. LabVIEW software was written to collect data on all RENA-3™ channels automatically. Charge was injected using an arbitrary waveform generator. Software was written to validate gain, linearity, cross talk, and timing noise characteristics. Gain is tested using a linear fit; typical values of gain are 13 and -16 for positive and negative injected charges respectively. Timing analysis was based on analyzing the phase shift between U and V timing channels.
A phase shift of 90 ± 5° and timing noise of 2 ns were considered acceptable.",2011,0, 5971,FSFI: A Full System Simulator-Based Fault Injection Tool,"With VLSI technology advances and the increasing popularity of COTS components and multi-core processors in space, aviation, and military harsh environments, dependability becomes more attractive to overcome the increasing susceptibility to transient, permanent, and wear-out induced intermittent faults. Fault injection is widely used in dependability evaluation and fault emulation. As compared to physical- and software-based fault-injection tools, this paper presents a simulation based Fault Injection tool, namely FSFI, to study high-level propagations of faults and system-level manifestations, especially the underlying causes of fault manifestations, namely Symptoms in this paper. The primary contributions of this paper are as follows. First and foremost, FSFI - a Full System simulator based Fault Injection tool - is designed and described in great detail. FSFI is based on an open source full system simulator (SAM) and it is heavily modified to support different processor components such as ALU, decoder, integer register files, and AGEN (Address Generation Unit). Four additional modules are added to the FSFI-simulator: Fault Injector, Monitor, Analyzer, and Controller. Second, transient faults are injected into different SPARC processor components, such as ALU, decoder, integer register files, and AGEN to deliberately study fault high level manifestations and propagations from processor component, through SPARC architecture level, hypervisor, OS, to application. Third, the underlying causes of fault manifestations, namely Symptoms, such as fatal trap, high OS, high hyper, and hangs, are captured by the FSFI Monitor. The distributions of Symptoms for different components against typical benchmarks are investigated, and the underlying reasons are analyzed in detail. Preliminary experiment results show that FSFI is effective for fault high level propagation and Symptom research, and some of our future work is prospected.",2011,0, 5972,A Network Status Evaluation Method under Worm Propagation,"The measurement of worm propagation impact on network status has remained an elusive goal. This paper analyzes the worm characteristics and network traffic and service, introduces evaluation metrics and presents a new method to assess the network situation under worm propagation. The applicability of this method is verified by simulated experiments with the network simulation tool LSNEMUlab test bed.",2011,0, 5973,The Testing and Diagnostic System on AVR of the Movable Electricity Generating Set,"AVR (Automatic Voltage Regulator) is the most important module, which can control the output voltage of generating sets, guarantee the voltage stability, improve power quality and decide the performance of electricity generating sets, so this paper introduces the design method of the AVR detecting and diagnostic system, which is based on the fault database. The paper introduces the platform of the testing and diagnostic system on AVR from the hardware and software design, composed of the industrial computer (main controller), the programmed power supply, the data acquisition unit, and the software programmed by LabWindows/CVI. The FTA (Fault Tree Analysis) method is applied to establish the fault database and analyse faults of the AVR.
Through testing a certain type of AVR, the method proposed in the paper is proved to be feasible and versatile, and it satisfies the requirements for detection and diagnosis of the AVR.",2011,0, 5974,Research on Detection and Diagnosis Technology for Certain Shortwave-Communication-Control Equipment,"Aiming at automatic detection and diagnosis for certain short wave-communication-control equipment, some suitable technology of detection and diagnosis for the equipment was researched. Methods of detection and diagnosis for the system were confirmed based on analyzing equipment characteristics and requirements of detection. The hardware and software platform of the automatic detection and diagnosis system were designed. Results show that the proposed detection and diagnosis technology is quick in fault detection speed and high in fault diagnosis.",2011,0, 5975,Online muon reconstruction in the ATLAS Muon Spectrometer at the level-2 stage of the event selection,"To cope with the 40 MHz event production rate of the LHC, the ATLAS experiment uses a multi-level trigger architecture that selects events in three sequential steps, increasing the complexity of reconstruction algorithms and accuracy of measurements with each step. The Level-1 is implemented with custom hardware and provides a first reduction of the event rate to 75 kHz, identifying physics candidates within a small detector region (Region of Interest, RoI). The higher trigger levels, the Level-2 and the Event Filter, are running on dedicated PC farms where the event rate is further reduced to O(400) Hz by software algorithms. At Level-2, the selection of the muon events is initiated by the ""MuFast"" algorithm, which confirms the muon candidates by means of a precise measurement of the muon candidate momentum. Designed to use a negligible fraction of the Level-2 latency, this algorithm exploits fast tracking and Look Up Table (LUT) techniques to perform the muon reconstruction with the precision muon chamber data within the RoI. The quality of the track parameters is good and approaches that of the offline reconstruction in some detector regions, and enables Level-2 to achieve a significant reduction of the event rate. This paper presents the current state of the art of the L2 algorithm ""MuFast"", reviewing its performance on the event selection, and the algorithm optimization achieved by using data taken in 2010.",2011,0, 5976,ATLAS detector data processing on the Grid,"The ATLAS detector is in the second year of continuous LHC running. A starting point for ATLAS physics analysis is data reconstruction. Following the prompt reconstruction, the ATLAS data are reprocessed, which allows reconstruction of the data with updated software and calibrations providing coherence and improving the quality of the reconstructed data for physics analysis. The large-scale data reprocessing campaigns are conducted on the Grid. Computing centers around the world participate in reprocessing providing tens of thousands of CPU-cores for a faster throughput. Reprocessing relies upon underlying ATLAS technologies providing reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, and petascale data integrity control. These technologies are also empowering ATLAS physics and subsystem groups in further data processing steps on the Grid.
We present the experience of large-scale data reprocessing campaigns and group data processing on the Grid.",2011,0, 5977,Prompt reconstruction of ATLAS data in 2010 and 2011,"Excluding a short duration in 2008, the LHC started regular operation in November 2009 and has since then provided billions of collision events that were recorded by the ATLAS experiment and promptly reconstructed at an on-site computing farm. The prompt reconstruction chain takes place in two steps and is designed to provide high-quality data for physics publications with as little delay as possible. The first reconstruction step is used for data quality assessment and determining calibration constants and beam spot position, so that this information can be used in the second reconstruction step to optimize the reconstruction performance. After the technical stop of the LHC at the end of 2010, the prompt reconstruction chain had to deal with greatly increased luminosity and pileup conditions. To allow the computing resources to cope with this increased dataflow, without developing a backlog, recently significant improvements have been made in the ATLAS reconstruction software to reduce CPU time and file sizes for the produced datasets.",2011,0, 5978,Resistivity and mu-tau imager for automatic characterization of semiconductor materials,"Despite intensive research for the improvement of the quality of II-VI crystals, until now no clear method permits quick determination of the performance of the final detector, directly on the as-grown crystal or on the wafers. This situation induces excess cost, which hinders the development of detectors based on these binary and ternary semiconductors. The homogeneity of the semi-insulating crystals will determine the capacity of these semiconductors to be used in gamma- and X-ray detectors for imaging in medical, spatial and industrial applications. It appears that the smaller the pixel unit size is, the better the material uniformity has to be. Fast characterization and non destructive methods become, therefore, imperative to keep the production costs in industry acceptable. The previously developed innovative contactless equipment for the mapping of resistivity of the wafers before contacts are deposited has become of great interest with a rather large domain of resistivity (from 10⁶ up to 10¹² Ohm cm). We have introduced additional contactless mu-tau imaging of electrons, performed automatically on the same wafer. The principle is based on a localized excitation of the wafer without any electrical contact and a particular use of the Hecht relation. In this paper, the main features of this new instrument will be presented and the results obtained on different materials (CdTe, CdZnTe or CdMnTe) are presented and discussed. Very satisfying agreement has been observed between the results obtained with the new system and conventional methods based on the I-V and alpha spectroscopic measurements of detectors produced from these materials.",2011,0, 5979,"Non-invasive, label free, quantitative characterisation of live cells in monolayer culture","This paper presents the development of an evanescent wave light microscope and predictive software to enable the quantitative study of cellular processes and live cell quality at high resolution and without the use of labels. Using the proposed technique, the current measurement capabilities of live cell light microscopy will be extended as critical data, such as cell adhesion, will be extracted and quantified.
In particular, mouse neural stem cells are studied using the TIRM technique. Results show that TIRM images are high in information content, containing details about cell morphology and cell adhesion, and, in combination with time-lapse imaging, provide information on cell dynamic processes and motility.",2011,0, 5980,Design and FPGA implementation of digital noise generator based on superposition of Gaussian process,"Currently, in the design of digital communication systems, plenty of tests need to be done in a noisy environment in order to assess the communication quality. The common method is adding analog noise to the transmitted data on the radio. Adding digital noise has always been a difficulty. A digital noise generator based on superposition of Gaussian processes is presented in this paper. Hardware simulation is carried out using Quartus II combined with ModelSim software, the performance of the digital noise generator is tested on an FPGA, and the output is compared with single Gaussian noise.",2011,0, 5981,Increasing security of supply by the use of a Local Power Controller during large system disturbances,"This paper describes intelligent ways in which distributed generation and local loads can be controlled during large system disturbances, using Local Power Controllers. When distributed generation is available, and a system disturbance is detected early enough, the generation can be dispatched, and its output power can be matched as closely as possible to local microgrid demand levels. Priority-based load shedding can be implemented to aid this process. In this state, the local microgrid supports the wider network by relieving the wider network of the micro-grid load. Should grid performance degrade further, the local microgrid can separate itself from the network and maintain power to the most important local loads, re-synchronising to the grid only after more normal performance is regained. Such an intelligent system would be suitable for hospitals, data centres, or any other industrial facility where there are critical loads. The paper demonstrates the actions of such Local Power Controllers using laboratory experiments at the 10 kVA scale.",2011,0, 5982,Simple scoring system for ECG quality assessment on Android platform,"Work presented in this paper was undertaken in response to the PhysioNet/CinC Challenge 2011: Improving the quality of ECGs collected using mobile phones. For the purpose of this challenge we have developed an algorithm that uses five simple rules, detecting the most common distortions of the ECG signal in the out-of-hospital environment. Using five if-then rules arranges for easy implementation and reasonably swift code on the mobile device. Our results on test set B were well outside the top ten algorithms (Best score: 0.932; Our score: 0.828). Nevertheless, our algorithm placed second among those providing open-source code for evaluation on the data set C, where neither data nor scores were released to the participants before the end of the challenge. The difference in the scores of the top two algorithms was minimal (Best score: 0.873; Our score: 0.872).
As a consequence, the relative success of a simple algorithm on the undisclosed set C raises questions about over-fitting of more sophisticated algorithms, a question that hovers over many recently published results of automated methods for medical applications.",2011,0, 5983,Mind-mapping: An effective technique to facilitate requirements engineering in agile software development,"Merging agile with more traditional approaches in software development is a challenging task, especially when requirements are concerned: the main temptation is to let two opposite schools of thought become rigid in their own assumptions, without trying to recognize which advantages could come from either side. Mind mapping seems to provide a suitable solution for both parties: those who develop within an agile method and those who advocate proper requirements engineering practice. In this paper, mind mapping is discussed as a suitable technique to elicit and represent requirements within the SCRUM model: specifically, we have focused on whether and how mind maps could lead to the development of a suitable product backlog, which in SCRUM plays the role of an initial requirements specification document. In order to experimentally assess how effectively practitioners could rely on a product backlog for their first development sprint, we have identified the adoption of mind maps as the independent variable and the quality of the backlog as the dependent variable, the latter being measured against the ""function points"" metric. Our hypothesis (i.e., mind maps are effective in increasing the quality of product backlogs) has been tested within an existing SCRUM project (the development of a digital library by an academic institution), and several promising results have been obtained and are further discussed.",2011,0, 5984,SOSOF: A fuzzy learning approach for Proportional Slowdown Differentiation control,"Proportional Slowdown Differentiation (PSD) is an important Quality of Service (QoS) model for services in the Cloud. It aims to maintain consistent pre-specified slowdown ratios between the requests of different service classes. In this paper, a novel online fuzzy learning algorithm is proposed to facilitate PSD control for distributed services. The advantages of the proposed approach are self-optimization of its fuzzy rules and robust control performance. We built a prototype to realize the fuzzy learning algorithm and the PSD control mechanism based on an open source middleware platform. Comprehensive experiments over a wide range of parameters and workloads have been conducted. Experimental results show that the proposed fuzzy learning algorithm can achieve superior performance over several other controllers.",2011,0, 5985,Towards a performance estimate in semi-structured processes,"Semi-structured processes are business workflows where the execution of the workflow is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples are workflows where actors interact with customers and report the result of the interaction in a process-aware information system. Building a performance model for resource management in these processes is difficult since the information required for a performance model is only partially recorded. In this paper we propose a systematic approach for the creation of an event log that is suitable for available process mining tools. This event log is created by an incremental cleansing of data.
The proposed approach is evaluated in a case study where the quality of the derived event log is assessed by domain experts.",2011,0, 5986,Design pattern prediction techniques: A comparative analysis,"There are many design patterns available in the literature to predict refactoring. However, the literature lacks a comprehensive study evaluating and comparing various design patterns so that quality professionals may select an appropriate design pattern. Finding a technique that performs better in general is a difficult problem because the behavior of a design pattern also depends on many other features, such as pre-deployment of the design pattern and structural and behavioral characteristics. We have conducted an empirical survey of various design patterns in terms of various software evaluation metrics. In this paper we present a comparison of a few design patterns on the basis of these metrics.",2011,0, 5987,Semantic Process Management Environment,"As the knowledge-based society develops, work processes grow larger and the amount of information that has to be analyzed increases, so the need for process management and improvement is high. This study suggests a process management method to support a company's survival strategy and competitiveness in a business environment that is difficult to predict. The suggested process management method applies an ontology to formalize and share generalized process management concepts. In the ontology, several techniques from Six Sigma and PSP are defined for process definition, execution and measurement. With the ontology, we provide a formal knowledge base for both the process management environment and human stakeholders. Also, we can easily improve our environment by extending our process ontology to adopt new management methods.",2011,0, 5988,Impact of SIPS performance on power systems integrity,"An increasing number of utilities are using System Integrity Protection Schemes (SIPS) to minimize the probability of large disturbances and to enhance power system reliability. This trend leads to the use of an increased number of SIPS, resulting in additional risks to system security. This paper proposes a procedure based on Markov modeling for assessing the risk of a SIPS failure or misoperation. The proposed method takes into consideration failures in the three stages of SIPS operation: arming, activation and implementation. This method is illustrated using an example of a Generation Rejection Scheme (GRS) for preventing cascading outages that may lead to load shedding. In addition, system operators tend to have the SIPS always armed to prevent a failure to operate when required. However, this can result in increased probability of SIPS misoperation (operation when not needed). Therefore, the risk introduced to the system by having the SIPS always armed and ready to initiate actions is examined and compared with the risk of automatic or manual arming of SIPS only when required. Sensitivity analysis is also performed to determine how different factors can affect the ability of the SIPS to operate in a dependable and secure manner.",2011,0, 5989,Design of adaptive line protection under smart grid,"The smart grid will bring new opportunities to the development of relay protection, and new sensor technology is used in the smart grid. A simplified algorithm for protection data reduces data processing time.
With the State Grid Corporation of China launching the construction of the smart grid, characteristics of the smart grid such as network reconfiguration, distributed power access and micro-grid operation put forward new demands on relay protection; conventional protection, based on local measurement information and a small amount of regional information, faces greater difficulties in solving these problems. At the same time, research on and application of new technologies (such as new sensor technology, clock synchronization and data synchronization technology, computer technology, optical fiber communication technology, etc.) provide a broad space for the development of relay protection. Adaptive protection means that the protection must adapt to changing system conditions, and the computer relay protection must have a hierarchical configuration of communication lines to exchange information with the computer networks of other devices. For now, fiber optic communication lines are the best medium for transmitting and converting large amounts of information in adaptive protection. Adaptive protection is a protection theory which allows adjustment of the various protection functions, making them better adapted to practical power system operation. The key idea is to make certain changes to the protection system in response to load changes, such as power failures caused by switching operations or other changes in the power system. The relevant literature gives adaptive protection the following definition: “Adaptive protection is a basic principle of protection; this principle allows the relay to automatically adjust its various protection functions, or change them, to be more suitable for given power system conditions.” In addition to conventional protection, it must also have clear adaptive function modules, and only in this case can it be called adaptive protection. General protection has some limitations in adaptive capacity and in detecting some aspects of complex faults. The hardware circuit of a microprocessor line protection device is discussed in this thesis, and anti-jamming methods are considered in both the hardware and the algorithm. On the software side, a relatively new method of frequency measurement is used to dynamically track frequency changes and adjust the sampling frequency in real time; the new algorithm improves data accuracy and simplifies the hardware circuit. The adaptive principle is applied to microprocessor line protection, adaptive instantaneous overcurrent protection, overcurrent protection principles, etc., to meet the requirements of rapidly changing operation modes and improve the performance of line protection. Relay protection needs to adapt to frequent changes in the power system operating mode and correctly remove various faults and faulty equipment, and adaptive relay protection maintains standard system features in case of parameter changes. The simulation results show that it is an effective adaptive method.",2011,0, 5990,A method for online analyzing excitation systems performance based on PMU measurements,"A method based on synchronized phasor measurement technology for analyzing and evaluating the dynamic regulating performance of an excitation system using dynamic electrical data acquired by a phasor measurement unit (PMU) is proposed.
Combined with an engineering processing of the corresponding excitation system performance parameters, the method realizes the calculation and analysis of the main excitation system performance indexes through detecting and extracting the course of a generator disturbance and its excitation system response. Meanwhile, its whole software system functions are designed and realized based on the system structure of a WAMS. It is concluded, through the method introduction and practical project applications, that compared with conventional analysis methods this method has the advantages of online analysis, offline research, simplicity, practicality and convenient use, and its computed results can be treated as an important reference for the evaluation of the dynamic regulating performance of an excitation system.",2011,0, 5991,Hierarchical index system for crash-stop service failure detection,"Service Availability is an important method for meeting adaptive and general requirements of service failure detection and is largely affected by failure detection indexes. Therefore, how to select appropriate indicators of service failure detection is an important issue. However, the small number of indicators in existing service availability mechanisms cannot accurately describe the complex situation of a distributed system and thus has difficulty meeting the QoS requirements of applications. After researching existing service failure detection algorithms, application failures and network survivability, this paper proposes a hierarchical index system for crash-stop service failure detection that considers the ability to provide services, network performance, and the load brought by failure detection.",2011,0, 5992,A finite queuing model with generalized modified Weibull testing effort for software reliability,"This study incorporates a generalized modified Weibull (GMW) testing effort function (TEF) into the failure detection process (FDP) and fault correction process (FCP). Although some researchers have been devoted to modeling these two processes, the influence of the amount of resources on the lag between these two processes has not been discussed. The amount of resources consumed can be depicted as a testing effort function (TEF) and can largely influence failure detection speed and the time to correct a detected failure. Thus, in this paper, we integrate a TEF into the FDP and FCP. Further, we show that a GMW TEF can be expressed as a TEF curve, and present a finite server queuing (FSQ) model which permits a joint study of the FDP and FCP processes. An actual software failure data set is analyzed to illustrate the effectiveness of the proposed model. Experimental results show that the proposed model has a fairly accurate prediction capability.",2011,0, 5993,A study on cooling efficiency improvement of thin film transistor Liquid Crystal Display (TFT-LCD) modules,"In recent years, LCD (Liquid Crystal Display) TVs have been taking the place of CRT (Cathode Ray Tube) TVs very fast by bringing new display technologies into use. LCD module technology is divided into two main groups; the first one is the CCFL (Cold Cathode Fluorescent Lamp) display, which was the first type used in LCD TVs, and the other one is the LED (Light Emitting Diode) module, which is the newest display technology and enables slim TV design. There is a thermal challenge in making a slim TV design.
The purpose of this paper is to investigate the thermal analysis and modeling of a 32"" TFT-LCD LED module. The performance of an LCD TV is strongly dependent on thermal effects such as temperature and its distribution on the LCD display. The illumination of the display was ensured by 180 light emitting diodes (LEDs) located at the top and bottom edges of the modules. Hence, in order to ensure good image quality in the display and long service life, adequate thermal management is necessary. For this purpose, the commercially available computational fluid dynamics (CFD) simulation software “FloEFD” was used to predict the temperature distribution. This computational thermal prediction was validated by an experimental thermal analysis, attaching 10 thermocouples to the back cover of the modules and measuring the temperatures. Thermal camera images of the display taken by a FLIR Thermacam SC 2000 test device were also analyzed.",2011,0, 5994,UIO-based diagnosis of aircraft engine control systems using scilab,"Fault diagnosis is of significant importance to the robustness of aeroengine control systems. This paper makes use of full-order unknown input observers (UIOs) to facilitate the diagnosis of sensor/actuator faults in engine control systems. The built-in “ui-observer” function in Scilab, however, cannot give satisfying performance in terms of observer realization. Hence we rewrite this UIO program in standard Scilab scripts and decouple the effect of unknown disturbances upon state estimation to improve the sensitivity to engine faults. An evaluation platform is created on the basis of the Xcos tool in a Simulink-like manner. All the above work is accomplished in the Scilab environment. Experimental results on an aircraft turbofan engine demonstrate that the suggested UIO diagnostic method has good anti-disturbance ability and can effectively detect and isolate sensor/actuator faults under various fault conditions.",2011,0, 5995,Assessing integrated measurement and evaluation strategies: A case study,"This paper presents a case study aimed at understanding and comparing integrated strategies for software measurement and evaluation, considering a strategy as a resource from the assessed entity standpoint. The evaluation focus is on the quality of the capabilities of a measurement and evaluation strategy, taking into account three key aspects: i) the conceptual framework, centered on a terminological base, ii) the explicit specification of the process, and iii) the methodological/technological support. We consider a strategy to be integrated if, to a great extent, these three capabilities are met simultaneously. In the illustrated case study two strategies, i.e. GQM+Strategies (Goal-Question-Metric) and GOCAME (Goal-Oriented Context-Aware Measurement and Evaluation), are evaluated. The given results allowed us to understand the strengths and weaknesses of both strategies and to plan improvement actions as well.",2011,0, 5996,Research on the relationship between curvature radius of deflection basin and stress state on bottom of semi-rigid type base,"In China's current specification for the design of asphalt pavement, the tensile stress at the bottom of the asphalt layer is one of the design indexes. However, this design index cannot be detected and verified in practical engineering application. Research and analysis of the deflection bowl in this paper show that there is a relationship between the tensile stress at the bottom of each layer and deflections at different positions.
A dynamic analysis model of rigid pavement under falling weight deflectometer load was established using ANSYS to research the relationship between the tensile stress at the bottom of the semi-rigid layer and the curvature radius of the deflection basin. This paper tries to seek a testing method to characterize pavement design indexes. It would be helpful to establish a relationship between theoretical calculation and the actual engineering test. It also contributes to evaluating pavement performance and riding quality.",2011,0, 5997,Sampling + DMR: Practical and low-overhead permanent fault detection,"With technology scaling, manufacture-time and in-field permanent faults are becoming a fundamental problem. Multi-core architectures with spares can tolerate them by detecting and isolating faulty cores, but the required fault detection coverage becomes effectively 100% as the number of permanent faults increases. Dual-modular redundancy (DMR) can provide 100% coverage without assuming device-level fault models, but its overhead is excessive. In this paper, we explore a simple and low-overhead mechanism we call Sampling-DMR: run in DMR mode for a small percentage (1% of the time for example) of each periodic execution window (5 million cycles for example). Although Sampling-DMR can leave some errors undetected, we argue the permanent fault coverage is 100% because it can detect all faults eventually. Sampling-DMR thus introduces a system paradigm of restricting all permanent faults' effects to small finite windows of error occurrence. We prove an ultimate upper bound exists on total missed errors and develop a probabilistic model to analyze the distribution of the number of undetected errors and detection latency. The model is validated using full gate-level fault injection experiments for an actual processor running full application software. Sampling-DMR outperforms conventional techniques in terms of fault coverage, sustains similar detection latency guarantees, and limits energy and performance overheads to less than 2%.",2011,0, 5998,QoS-aware multipath routing protocol for delay sensitive applications in MANETs A cross-layer approach,"This paper proposes a QoS multipath routing protocol (QMRP) for MANETs based on the single path AODV routing protocol. QMRP establishes node-disjoint paths that experience the lowest delay. Other delay-aware routing protocols do not factor in the projected contribution of the node requesting a route in the total network load. The implication is that the end-to-end (E2E) delay obtained through RREQ is no longer accurate. Unlike its predecessors, QMRP takes into account the projected contribution of the source node in the calculation of E2E delay. To obtain an accurate estimate of path delay, QMRP uses cross-layer communication to achieve link and channel awareness. A performance evaluation of QMRP and comparison with AODV using OPNET has been conducted; the results show that QMRP outperforms AODV in terms of average throughput, delay and packet loss.",2011,0, 5999,Monitoring high performance data streams in vertical markets: Theory and applications in public safety and healthcare,"Over the last several years, monitoring high performance data stream sources has become very important in various vertical markets. For example, in the public safety sector, monitoring and automatically identifying individuals suspected of terrorist or criminal activity without physically interacting with them has become a crucial security function.
In the healthcare industry, noninvasive mechanical home ventilation monitoring has allowed patients with chronic respiratory failure to be moved from the hospital to a home setting without jeopardizing quality of life. In order to improve the efficiency of large data stream processing in such applications, we contend that data stream management systems (DSMS) should be introduced into the monitoring infrastructure. We also argue that monitoring tasks should be performed by executing data stream queries defined in Continuous Query Language (CQL), which we have extended with: 1) new operators that allow creation of a sophisticated event-based alerting system through the definition of threshold schemes and threshold activity scheduling, and 2) multimedia support, which allows manipulation of continuous multimedia data streams using a similarity-based join operator that permits correlation of data arriving in multimedia streams with static content stored in a conventional multimedia database. We developed a prototype in order to assess these proposed concepts and verified the effectiveness of our framework in a lab environment.",2011,0, 6000,Rate-distortion optimized image coding allowing lossless conversion to JPEG compliant bitstreams,"This paper proposes an efficient lossy image coding scheme which has a sort of compatibility with the JPEG standard. In this scheme, our encoder directly compresses an image into a specific bitstream. On the other hand, the reconstructed image is given by the standard JPEG decoder with the help of lossless bitstream conversion from the specific bitstream to the JPEG compliant one. This asymmetric structure of encoding and decoding procedures enables us to utilize modern coding techniques such as intra prediction, rate-distortion optimization and arithmetic coding in the encoding process, while allowing almost all image manipulation software to directly import image contents without any loss of quality. In other words, the proposed scheme utilizes the JPEG compliant bitstream as an intermediate format for the decoding process. Simulation results indicate that the rate-distortion performance of the proposed scheme is better than that of current state-of-the-art lossy coding schemes.",2011,0,